GoodGeist
A podcast on sustainability, hosted by Damla Özlüer and Steve Connor, brought to you by the DNS Network. Looking at sustainability issues and communications, and featuring global guests from a wide variety of sectors, such as business, NGOs and government.
AI Shaping the Future, with Sam Fankuchen
In this episode we sit down with Golden’s founder and CEO, Sam Fankuchen, to explore how AI, ethics, and interoperability can transform volunteering from good intentions into measurable, scalable impact. Sam’s story begins with a near loss on 9/11, a moment that reshaped his purpose and his focus on removing friction so more people can contribute meaningfully and safely.
We look at the state of AI and how to create an ethical framework for its use, with guardrails and evals that keep outputs accountable. We also discover that if you want to stay ahead in an AI-driven world, you need to absorb the very latest innovations on a daily basis.
Sam explains why agents are changing the resource equation for nonprofits and public agencies—automating research, fundraising, policy mapping, and programme audits while keeping humans in the loop. The result is a “shadow organisation” that scales capacity without sacrificing trust.
If you lead a team or support a cause, this conversation offers a playbook for ethical AI adoption, cross-sector collaboration, and systems thinking that actually moves the needle. Have a listen to our latest GoodGeist!
Follow GoodGeist for more episodes on sustainability, communications and how creativity can help make the world a better place.
GoodGeist, a podcast series on sustainability hosted by Damla Özlüer and Steve Connor. Brought to you by the DNS Network.
SPEAKER_03:I'm counting us in. Three, two, one. We're rolling. Hello, hello, everyone. You are listening to GoodGeist, the podcast on sustainability, which is brought to you by the DNS Network, the global network of agencies dedicated to making the world a better place. This is Damla from Mira Agency, Istanbul, and this is Steve from Creative Concern in Manchester.
SPEAKER_01:This podcast series explores global sustainability issues, how they're communicated, Damla. It's just to put Sam at ease. That's why I did that. Honest. And this is Steve from Creative Concern in Manchester. This podcast series explores global sustainability issues, how they're communicated, and what creativity can do to make positive change happen.
SPEAKER_03:So in this episode, we're going to talk to Sam Fankuchen. Is that right, Sam?
SPEAKER_00:That's right if you're speaking proper German, but you can also say it in an anglicized version as you please.
SPEAKER_03:Oh God, I nailed it. I'm so happy. So Sam is the founder and CEO of Golden. And Golden is a user-centered software ecosystem to make volunteering and donating more approachable, meaningful, and productive. Today, Golden is both the technology system of record for more than 50,000 organizations on six continents and the category-defining mobile apps for volunteering. The social enterprise has been named a Fortune Change the World Company, a Fast Company World Changing Idea twice, a multiple-time Webby and Anthem Award honoree, and Meta's Social Good App of the Year. Oh my god, amazing.
SPEAKER_01:You needed to stop for breath in the midst of Sam's resume, which I am now going to continue auf Deutsch. No, I'm not going to try and do it in German. But honestly, Sam, you've got so many impressive titles. You were editor of the technology section of Engage, I think, for a while, which just sounds amazing.
SPEAKER_00:Still am.
SPEAKER_01:Still are amazing. So we can get into all of that in a minute. You've been involved in the Gates Greater Giving Summit, Giving Tuesday, and you were the first undergraduate student to major in social entrepreneurship and innovation at Stanford, which is incredible. And you've lectured at loads of places. I won't go through the full list, but Harvard Business School, University of Pennsylvania, Southern California. It's an impressive list, Sam. So thank you so, so much for taking the time to talk to Damla and myself.
SPEAKER_00:It's a pleasure to be here, and this work is incredibly meaningful. So happy to dig in.
SPEAKER_03:It's great to have you with us today, Sam. And it seems that your whole story is about making change happen. But before getting deep into your work, we would like to hear about your own story. I mean, how does an Ivy League student get involved in social change?
SPEAKER_00:I think the story begins long before that. For me, I had a very close call and I'll share a little bit of this story, but would also say that on some other podcasts and publications, I've been lucky enough to go into real depth with the story. So for anybody who's interested, feel free to poke around. I'm sure you can find something with the full history. But the short version is I had a life-altering moment when I was in high school when I believed that I lost my entire immediate family in the 9/11 attacks. And I was under that impression for several days until I learned later that my family had gone standby on a different flight than one of the flights that was hijacked. But that event was very close to home for me. Besides that personal story, there were a lot of other connections personally that I experienced. And the result of walking through the pattern of emotions that somebody goes through, at whatever pace they go through it, when you're in a situation like that, fundamentally changed what I considered my own purpose in the world to be and what I observed in people around me setting up their lives to accomplish. And years later, when I really fully had a chance to process all of that, I started to become very interested in public service as a means to discover what was interesting in the world, what I hadn't seen yet, where I could fit in and maybe help improve quality of life for people or myself. And I went through a journey just like everybody does when they make a commitment to follow an intention to go and pursue anything. And in my experience as somebody without a lot of dedicated background or credentials or specific contacts, the process of figuring out where to start and how to have a meaningful set of interactions was just too hard. And it occurred to me that anybody who was less motivated than I was would be encountering similar friction in their own journey. And I think more importantly, it's not just that things were hard for the individual. It's that if you don't open the door and show people where they can get started, you lose out on a lifetime of productivity and relationships and observations and so many other things that are essential to improving any kind of situation. And it became very clear to me that the people I encountered along the way who were recognized as experts were in fact experts in a very narrow body of subject-matter expertise. And solving the kinds of challenges that I find interesting and that the people we work with find interesting requires total conviction and resolution around pursuing something, but it also requires understanding where your limitations are and who you need to partner with. And for so long, the social impact sector was filled with people who were dyed-in-the-wool operators of a certain belief system. And not to make generalizations about what that belief system is, every pocket of every industry has its own character profile like that. But the problem is that solving problems that people consider to be intractable requires interdisciplinary collaboration. And that means you have to be willing to go to the places you haven't spent time. You have to be willing to collaborate with people with different points of view. You have to be willing to accept that technology is inseparable from every other subject at this point in time. And you have to be comfortable with the fact that success looks different for different people, different populations.
And you have to be accepting of, you know, for the purposes of, for example, this podcast that has an orientation around sustainability, sustainability can mean a lot of different things to different people. But for me, fundamentally, you need to have a system that works. It's a system-scale concept. And if you're investing resources and time and money and effort into something that is not going to yield compounding returns, then it's not going to be sustainable in the long run. And in my academic journey in school, both undergrad and grad school, but also my professional journey as an entrepreneur or as somebody just helping other people with their programs, it became really clear to me that we needed to build better systems, that we didn't have the kinds of systems that could carry us to a world that's a better version of ourselves. And for that reason, I have spent pretty much every waking minute of the last 20 years thinking about who I can work with and what needs remain and what progress we can make on these things. So that's a little background. Feel free to drive in any direction that's helpful for you.
SPEAKER_01:Well, I'm gonna, um, I remember, Sam, very early in my career, I was working for an NGO, and it was kind of my first work on the environment. And I remember talking to the head of campaigns at that NGO, and I was busy doing sort of, kind of, uh, research and writing and campaigning and doing some photography. And I said to her, I said, do you know, I feel like too much of a generalist. I'm just doing a little bit of everything, and I want to, should I just become really good at one thing? And she said, no, no, being a little bit good at lots of things makes you far more useful. So I love that kind of interdisciplinarity of what you're doing. So let's go into your interconnectivity of all those different disciplines. Let's do that at a big scale and talk about your work bringing tech, innovation, and social change together at a very big scale. And you've worked on the theoretical grounding of, you know, tech for good, and you've collaborated with people on policy issues, on AI, data privacy, disaster relief, obviously entrepreneurship, because we touched on that already. And then that work itself has been presented in numerous sorts of platforms and fora like the United Nations. So you are the ideal person to give us a check, a global check, at what is a horrendously early time in the morning, your time, by the way. So we need to be fair on you. Where is the world at the moment on the nexus of technological transformation and the ethical journey that all of our societies are on? What's your summation of where we are?
SPEAKER_00:So, my summation this week, because in this world, in the era of AI, things are now changing at a much faster velocity than they ever have. And that's something all of us need to get comfortable with, including myself in certain ways. But my way of getting comfortable with it is I begin my day with two hours of reading about AI news every single day. And I think if you are at the forefront of applying AI in ways that people have not applied it before, understanding what's happening, what other learnings have happened in the last, you know, day or week, is essential to understanding where you spend your time with AI and making intelligent investments for yourself and your organization. And even if you're not at the forefront of AI, but you're starting to realize that it's pervasive and there is no going back in time to a world without AI, you should probably be spending a minimum of like 15 minutes, 30 minutes a day just getting more comfortable, which can mean opening up an LLM, you know, and using any kind of consumer chat AI. So ChatGPT, Claude, Gemini, it doesn't matter. You can probably even play with all of them, Grok and, you know, just see what they're good for. And I think it's very important the way we as an organization internally require all of our team to first think how AI would do something before they go and do it, whether they're doing it with AI and they're in the loop, or they're doing it without AI, or someday AI will do it. I would really like to encourage everybody everywhere to have an understanding that AI exists, the same way when the internet started to become available to the mass market, and electricity and things like that. It is a truly watershed moment that will fundamentally re-architect everything. And frankly, it will close a lot of gaps for us that we know deserve to be closed in the social impact sector. Things that are large-scale problems that have to do with resource allocation. I think of hunger and food waste as something that is eminently solvable, for example, but there are many others. And AI throughout the entire system will help us close a whole bunch of different gaps that will result in fewer people being hungry and there being less waste of food. But if we depart from that for a quick second and go back to your original question about where we are in the evolution of ethics and AI adoption, I would also say that you should, if you work on behalf of an organization, sit down and define your own ethical framework of what you think constitutes who you should be as an organization, what you would like to accomplish using AI, what concerns you have and what lines you don't want to cross. And then you can use that to build out the frameworks that you need to make sure you're using AI appropriately; for every organization that's different. But you should have what are called guardrails and evals, which are your process for reviewing the success of the outputs of your work with AI, and which should be derived from your own ethical framework. And then in Europe, where you all are, there is AI regulation. And in North America there are NIST standards and other things that folks can use as points of reference, and your own framework should map to them. And that's a pretty healthy place to start with the basics.
But what I would also say is we need to get way past, and not spend any time talking about, whether or not we should move forward with AI because of the ethics, because that's just simply not a conversation that's relevant anymore. Yeah, the strange thing to me is I opened this rambling statement by saying how I feel this week. How I feel this week, my assessment based on a lot of conversations I have about these topics, is that by and large, in the NGO and nonprofit sector, most leaders still are at a stage where they're not ready to adopt AI or endorse it because they have ethical concerns about the consequences. And to me, that is a natural set of feelings to have, but you have to force through it, because the world has already left you behind if that's how you feel about it. And I mean, that may be difficult to grapple with, but you can get through it just by reading and by starting with your own manifesto about what AI can and should not do for your org, and then just starting to look for the right opportunities. Not every opportunity. You don't want to use the wrong tool for the job. And a lot of these tools are in their early days, but the results of using them appropriately are so astounding that it would be irresponsible not to use AI in the right settings. But that also means you have to understand who your stakeholders are, you have to set fair expectations with them, you have to have policies that are enforceable, you have to take accountability for the moments when you don't get optimal performance. But those are all manageable the same way every other management decision in your org is.
SPEAKER_01:So, Sam, I'm sorry, Damla, I know it's your turn, but I want to come back in just very briefly because there's a couple of things there, Sam. I'd just love to, as somebody who is dedicating that time each morning, I love the idea. I might try and do 30 minutes rather than two hours. I'm very admiring of that. But I remember when Twitter first launched, and I had friends who really struggled with it as a platform because of the volume. They'd open it up and it would be constantly refreshing with these new tweets. And they thought it was somehow, they looked at it like it was an email inbox and it could somehow be emptied or completely read. And it was like, it's just a stream, it's just a stream, it will keep going. You can't do this. So I want to ask you about speed and volume, because you mentioned there that change is just accelerating, accelerating. And, um, I don't know whether you've come across the brilliant book Information Anxiety, about the sheer volume of data that human beings are subjected to. One of the examples I read is that nowadays a single edition of the New York Times has got more data in it than a person would have experienced or absorbed in their entire lifetime in the Elizabethan age, which is incredible, isn't it? So, how can we as humans deal with the speed of change that you just talked about and the volume of data, or do we just need to kind of relax and let it go?
SPEAKER_00:That's a very thought-provoking question. And I'd like to make sure that I just give a clear and basic answer rather than a complicated one. So, very practically speaking, I do think, like I said before, 15 minutes to 30 minutes a day would be very reasonable and very rewarding for most people. If you invest that amount of time, you will certainly get orders of magnitude of productivity or reward in your life back, let alone if you're responsible for working with others in a complex organization. The places I would go to learn about that are figuring out generally, and then also specifically for your interests, which daily newsletters or weekly newsletters cover AI and the material that you're interested in, so that you can just get, you know, a docket of some headlines. And if you want to go deeper, you can go deeper, but you can start to see how pervasive and wide the lens is for different topics, how quickly things change. You can see who has done incredibly creative things with very little resource, which is one of the most astounding parts about AI: if you can just write an interesting prompt, or if you can just observe a setting nobody else has looked at a certain way, you can start to make incredible progress. And it's just easier to start seeing others around and what they're doing and observing how approachable those advancements were. I would also use the common LLM suites that exist, or the AI tools that exist, so that you can see how quickly the versions evolve and what capabilities and what results you get from these tools. So, for example, earlier I mentioned ChatGPT, Claude, Grok, Gemini, but Perplexity is also very widely used and is very helpful for research, very quick research. You could use Deep Research in OpenAI's tools and you could get a different format of output that maybe is more presentable. But if you just want to shortcut an answer to some contemporary question, something that has relevance to today's news, you can go to a place like that and you can get a really good answer. And just getting used to incorporating those tools in your everyday life does count toward that bucket of research I was mentioning. Also, looking at how people have set up agents to do things is really important. In our organization, we're probably at a point where we have as many agents as we do human beings on our teams. And we will certainly have far more agents on our team, sort of like a shadow organization, than we will have real human beings. But we will probably have human beings in the loop for the entire foreseeable future for all the processes we're doing. And when you can see what an agent does, yeah, it starts to change a lot of the notions that those of us who've spent years in the social impact sector have had about resourcing. For example, most people I know, if they were surveyed, would say, I don't have enough resources to accomplish my mission and vision. But this is the first moment in history where all of that can change. Because if you can put your finger on what kind of resources you need, you can then build a computer, or a set of computers, to go and do all of the things that you need done, to the degree that you can specify what they are, and that goes far past the point you think it does. And that is truly incredible. So I would encourage everybody to just get comfortable with what AI can do so that you can think about when the time is right, every time that there's an occasion for an agent.
And an easy way to think about an agent is: if I had a human being with a set of responsibilities, a set of skills, a set of expectations, a set of people they interacted with, what would they do? What would they need to know? What kind of results would I expect from them? And then you can just put that, you know, as your definition for an agent. And then you can all of a sudden have a pre-programmed computer, or set of computers, that go and do those things for you. And the uses for those really touch every corner, from market research to fundraising to legal to accounting to engineering to security and so much more. And the knowledge body and the ability to come up to speed on very extensive areas of information is also incredible. So, for example, in law, if you have to go do a bunch of research on case law, that's going to take a lot of time and you need to have a lot of context to interpret what you're seeing. But an AI can do that in seconds. I'm not saying everybody needs to go and do case law, but maybe you need to look through a bunch of anonymized records to look for patterns in populations that you serve. Maybe you need to explore regulatory frameworks in the places you operate to come up with one policy that works everywhere. Maybe you need to double-check the work you did to figure out where a process failed. These are all things that could take very highly skilled human beings months, and instead you can do it in minutes. And what's so exciting about it, Damla, is these are all the excuses that so many of our colleagues and ourselves have used over the years. Well, it's just a bridge too far to do this next stretch of work. And now we can very specifically identify the breaking points. It's not just that it's a lot of work; we can take it to the limit and then wait until the time's right to go farther.
SPEAKER_03:Well, Sam, I really want to go in more thought-provoking directions, okay, on a lot of things like creativity and connections and AI's capabilities and what we can do better. Maybe universal income because of the new technology coming, also the regulations. I have a lot of things to talk about, but I'm so sorry that we have limited time. Maybe on AI alone we would have to talk again for hours.
SPEAKER_01:No, you can't do that, Damla. You just threw in universal basic income. We'd love to do that, wouldn't we, Sam? But no, sorry, Damla. I'm distracting you.
SPEAKER_00:I think that's an interesting topic. I share a concern that there will be people who will be very detached from the progress of the world. And the question is, how do we help people in a situation like that feel human? I think it's premature to prescribe a solution like universal basic income. We haven't seen that moment occur yet. When we get to a moment like that, we will have all kinds of different tools to use. And I don't think a blunt instrument designed in an era before we had visibility into the needs is the correct one. In the same way, what we learned from disaster relief is that even though you need infrastructure, sometimes at the federal level or a state level, territorial level, the basic needs are always at the human and community level. And so you need people who have a more proximate understanding of what really needs addressing and what tools there are to address those needs than broad-stroke solutions. But there also needs to be infrastructure. And we also need to realize that there are consequences for this rate of progress.
SPEAKER_03:See, Sam, I just provoked you. Okay, I have so many more provoking questions, but I can't, because we have limited time. So when we talk about good tech in any way, Golden is a very distinctive example. And making doing good effortless was the foundation you built on. Can you tell us how the idea was shaped? I mean, what was the problem you saw in the sector? And what was your intervention method? Because this is what happened: you saw a problem and you intervened.
SPEAKER_00:Yeah. From the consumer perspective, it was too difficult for people to discover and participate in acts of service. And so we created a system that structured the inventory of what was available and personalized it to everybody and then automated the process of engaging with it in a way that was fun and compliant and safe and so many other things. So much so that we created a category of software where there wasn't one, but made it feel very natural and approachable. For the organizer, we recognized that time is spent in a lot of different areas, sometimes productively, but not always toward accomplishing the mission. And so we wanted to give anyone at any scale in any sector a real-time understanding of their productivity, who they're reaching, who they're engaging, the outputs, the retention rates, the volunteer-to-donor conversion rates, the progress toward their missions, et cetera. And we wanted to provide that in a way that anyone, whether you have no background with computers or whether you have a PhD in data science and you're operating an army of AI agents, we want to give you the right set of tools so that you can answer questions with clarity and conviction. And you can find out more at goldenvolunteer.com. You can email us at support@goldenvolunteer.com. And if you have particular needs that require much more technology than the sort of things we've been talking about today, you can feel free to share your needs with us and we can either hopefully take you there or point you in the right direction.
SPEAKER_01:Well, I'm very excited about that.
SPEAKER_03:He makes a sound so that that wasn't okay.
SPEAKER_01:So he's kind of way ahead of it. It's not fair to say.
SPEAKER_00:I'm honored to be here. And in particular, I have a lot of interest in the EU market. Yeah. Because it's such a patchwork of different needs and different appetites. And at the same time, there is an interest and there are resources to adopt technology. And so, you know, if you're in the UK or anywhere in the EU, we're certainly interested in learning more. And Golden is GDPR compliant, UK Anti-Bribery Act compliant, et cetera. We do have servers, you know, across Europe, and we're excited to partner with you.
SPEAKER_01:Well, I've got a use case straight away, Sam. I might be in touch separately, because I'm trying to mobilize many thousands of volunteers to help nature recover at scale across our home city of Greater Manchester. And we're literally at the start of our journey, so maybe this is perfect timing. But an almost-final question. Damla is going to come in with the final question, but I wanted to ask you about something very human, which is the collaboration that happens between business, NGOs, the charitable sector, tech solutions. And you emphasize that very much as part of your pitch. So what is the best recipe for creating a great collaboration between different sectors and different organizations and delivering what you do in terms of social impact?
SPEAKER_00:Without technology, the way to do it is to understand the scope of your services, where you excel with the resources that you have, and then where that scope starts to taper and where there are similar organizations, in the sense that they serve similar populations in slightly different ways, and to open a dialogue, a collaborative dialogue, with those people. So, to use concrete examples, in a disaster setting, if somebody's house burns down, they are maybe going to have some needs for food, some needs for shelter, some needs for childcare, et cetera. Same person, same setting, but maybe not needs that you can totally capture and resolve on your own. In that case, you want to be able to identify where to send somebody and send them quickly while you still have a chance of addressing the need. That's a very focused example, but there are plenty of other examples in every other setting, whether you're talking about climate or you're talking about migration of people or anything like that. From a technology perspective, it means having systems that are interoperable. And the vast majority of software in the NGO sector has been siloed, industry-specific tools, and those tools are going to have limits. Instead, you should be looking at things where data, reporting, and access controls can be shared beyond your organization in the appropriate settings. Because that way you don't have redundant case files, you don't have data currency issues, you can just allow people to interoperate in the ways that are natural for the work they're doing. And a big part of our architecture was making sure that everybody can use Golden, whether you're using it for free or you're paying us a lot of money for an enterprise-scale deployment, so that you can work with operators in any sector. We support governments at every level, healthcare institutions, educational institutions, companies, disaster relief organizers. You know, it ranges. And that's very important because in the real world, nobody checks your tax status before you just agree to, like, get the job done. It's important to know where people stand, if they have a tax status, you know, if they're on any watch lists. We have tools to make sure you can do that. But it's not important to impose artificial distinctions when it's time to get the job done. It's important to get the resources allocated optimally.
SPEAKER_03:Wow. Well, I mean, we have so many gazillions of questions that we want to ask, but unfortunately, the time is up.
SPEAKER_01:We've run over really badly, actually, Damla. We've done really badly at keeping the time.
SPEAKER_03:This is too brief. I wanted to hear more from Sam. I know. Final question. Our network is ironically called Do Not Smile, because we need to make sustainability a subject that brings happiness to people. So what object, place, or person always makes you smile?
SPEAKER_00:My children and my wife always make me smile. I think it's very easy to get absorbed in the work that we do and always try and go one more extra mile. And at the end of the day or throughout the day, I think it's really important to reflect and appreciate the gifts that you have. And for me, those are the gifts that I value.
SPEAKER_01:Beautiful. That is beautiful. Sam, you head off to your next engagement, because you're going to be super fast. He's probably already done two hours of reading about AI today, Damla. Sam, it's been brilliant talking to you. You take care. You too. Bye. Thank you.
SPEAKER_03:So thanks to everyone who has listened to our GoodGeist podcast, brought to you by the Do Not Smile Network of Agencies.
SPEAKER_01:And make sure you listen to future episodes. We'll be talking to more amazing people about how we can work together, even with our agents, Damla, to create a more sustainable future. So see you all soon.
SPEAKER_02:Bye. GoodGeist. A podcast series on sustainability.