Fear of AI: What It Says About Us
What if the biggest fundraising opportunity of the next decade is the very thing most people are afraid of?
In the media, AI is almost always cast as the villain. The job stealer, the bias machine, the privacy threat. But in the nonprofit sector, where every hour saved can be an hour spent with a donor, the question isn’t “Should we be afraid?”
It’s “Can we afford not to use it?”
As Nathan Chappell and Scott Rosenkrans point out in their new book, Nonprofit AI: A Comprehensive Guide to Implementing Artificial Intelligence for Social Good, the sector is at a crossroads. Need is rising, donor participation is shrinking, and burnout is so high that nearly three-quarters of nonprofit employees are considering leaving. This isn't a nice-to-have moment for innovation; it's a survival moment.
Where the Fear Comes From
Here's the psychology. Fear of AI comes from two deeply human places:
1. Uncertainty. Technology has always disrupted the way we work and live, from the printing press to the internet. But with AI, the pace feels faster, and the scope feels bigger. We’re not just talking about a new tool; we’re talking about something that could transform how we think, make decisions, and connect. And when the future feels unpredictable, our brains lean toward worst-case scenarios. It’s a survival instinct.
2. Projection. Humans are natural storytellers, especially when the story is scary — a product of our “negativity bias,” the hardwired tendency to spot threats faster than opportunities. That bias kept our ancestors alive when a rustle in the grass might mean a predator. Today, the “rustle” might be an AI that can write, analyze, and brainstorm. We don’t just see a tool; we imagine intentions, plots, and “long games,” often straight out of The Terminator. In reality, AI has no motives. The “long game” is whatever we decide to use it for. But negativity bias makes it harder to separate possibility from paranoia.
The Nonprofit Opportunity
AI isn’t going to wake up one morning and launch your annual campaign. But in the hands of a smart fundraiser, it can be a force multiplier:
Draft five versions of an appeal before your second cup of coffee. You spend your energy refining tone and personalizing, not wrestling with a blank page.
Identify donor patterns you wouldn’t spot with Excel alone. One small arts nonprofit used AI to analyze years of donor and ticketing data. It found that nearly half of its lapsed donors had attended a single sold-out event years earlier, a detail no one had noticed. That insight led to a targeted re-engagement campaign that paid for itself within a month.
Build a personalized stewardship series in hours instead of weeks. Imagine instantly tailoring thank-yous, updates, and invitations based on each donor’s interests and history. Without adding another late night at the office (or on Zoom).
As Nonprofit AI points out, AI adoption increases speed, improves decision-making, and reduces burnout by automating routine tasks. For nonprofits, this means increased capacity for strategy, innovation, and deeper donor relationships. The exact things that drive mission impact.
Still, I Hear the Questions
Won’t donors feel duped if they know AI helped write something?
Only if it feels like it. If your appeals read like they came from a robot, that's not AI's fault; that's the prompt and the editing. The human touch still matters most.
What about bias or bad data?
Absolutely: garbage in, garbage out. Treat AI like a super-fast intern: helpful, but in need of supervision. You fact-check, reframe, and ensure the voice stays true to your mission.
Couldn’t this widen the gap between large and small nonprofits?
It could… unless smaller organizations treat this as their chance to leapfrog. You don’t need a big staff to compete if you have the right tools and the skill to use them.
Why Ignoring AI Is Riskier
The real danger isn't that AI will replace fundraisers. It's that fundraisers will ignore it until the organizations that do embrace it have pulled so far ahead that catching up becomes impossible.
As Chappell and Rosenkrans put it, the future of the nonprofit sector will be shaped by those who adopt AI swiftly and responsibly. The gap won't just be about efficiency; it will be about survival.
Note: I submitted the final draft of this article to ChatGPT to get its opinion. It responded, "Well reasoned. Balanced. Worth publishing."
But then again… that’s exactly what we’d expect Skynet to say before it was ready to launch the missiles, isn’t it?