82% of nonprofits use AI: Almost none are regulating it

DATE POSTED: April 8, 2025

The nonprofit sector is embracing artificial intelligence faster than it is ready for. More than 80 percent of nonprofits now report using AI tools in some form, from ChatGPT to automation systems to predictive analytics, yet fewer than 10 percent have written policies on how that AI should be used. That is not just a procedural oversight; it is a structural vulnerability.

These organizations, many of which serve historically marginalized communities, are stepping into a high-stakes technological landscape with few ethical guardrails and even fewer internal frameworks to guide them. This gap between adoption and governance poses real risks, including algorithmic bias, privacy breaches, and unintended harm, particularly when off-the-shelf tools are deployed without deep understanding or oversight. The rush to efficiency may unintentionally erode trust, compromise values, and expose nonprofits to reputational and legal fallout.

Efficiency now, regret later

The numbers tell a striking story. According to BDO’s 2024 Nonprofit Benchmarking Survey, 82 percent of U.S. nonprofits now report using AI. Of those, the majority are applying it to internal operations: 44 percent are using AI for financial tasks like budgeting and payment automation, and 36 percent are applying it to program optimization and impact assessment. The focus, in other words, is administrative efficiency—not mission delivery.

That’s consistent with the Center for Effective Philanthropy’s 2024 State of Nonprofits survey, which also found that productivity gains were the most common reason for AI use. But that same survey reveals the ethical lag: fewer than one in ten organizations have formal policies in place. And the organizations that do use AI are often working with limited infrastructure, little in-house expertise, and constrained budgets that prevent them from building customized, domain-aware systems. Instead, they lean on commercial tools not designed for their unique contexts, increasing the likelihood of bias, misuse, or mission misalignment.

At a time when trust is central to nonprofit credibility, this governance vacuum is alarming. AI is not neutral. It reflects, magnifies, and operationalizes the data it is trained on—and that data is often riddled with historical inequities. Without policies to guide use, nonprofits risk reinforcing the very structural inequalities they aim to dismantle. They also risk falling short of their own values. As Addie Achan, director of AI programs at Fast Forward, put it: “It’s better for an organization to define the rules and expectations around that use rather than have people use it and inadvertently cause more harm.” In this context, “harm” could mean anything from discriminatory decision-making in service provision to unintentional leaks of sensitive beneficiary data. The need for ethical AI policies isn’t a theoretical concern—it’s a practical one.

The cost of caution and the price of action

BDO’s survey points to a trifecta of resistance: lack of knowledge, insufficient infrastructure, and funding constraints. But about one-third of respondents also cited employee resistance and ethical concerns. While managers fear risk, employees may fear replacement. The skepticism, then, is both practical and existential. And it plays out unevenly. Most AI deployments are limited to back-office functions, where the tech can quietly improve accuracy and efficiency. But the more transformative applications, such as AI-powered energy tracking or real-time data synthesis for global education programs, remain largely aspirational. These mission-aligned uses demand both financial muscle and ethical clarity. Right now, most nonprofits have one or the other. Few have both.

The financial balancing act

Ironically, the sector’s financial position is more stable than it has been in years. According to BDO, 52 percent of nonprofits saw revenue growth in the past fiscal year, up from 44 percent in 2023. Meanwhile, 62 percent now hold seven or more months of operating reserves—the strongest cushion since 2018. That’s a significant shift from the lean years of the pandemic. And it’s giving leaders the confidence to consider more ambitious operational shifts.

Nearly three-quarters of nonprofits say they plan to expand or shift the scope of their missions in the next 12 months. But caution remains the dominant financial posture. Most organizations are spending less across the board in 2024 compared to 2023, especially in advocacy, fundraising, and donor relations. The exceptions are new program development and talent acquisition—areas that saw modest spending increases. In other words, nonprofits are saving, hiring, and testing new directions, but they’re doing so with one eye on the political calendar and the other on macroeconomic instability.

A policy vacuum with real consequences

So where does this leave the sector? It’s in a moment of quiet contradiction. On one hand, nonprofits are building reserves, hiring talent, and expanding missions—clear signs of institutional confidence. On the other, they’re rapidly adopting a powerful, unpredictable technology without the governance structures to manage it. The sector is entering the AI era in the same way it entered the digital era—through improvisation and adaptation rather than strategic design. That may be fine for a while. But without policies to ensure transparency, accountability, and alignment with mission, the risks will only grow. The tools may be new, but the ethical dilemmas—who benefits, who’s left out, and who decides—are old and unresolved.


What needs to happen next

Creating ethical AI policies for nonprofits isn’t about slowing innovation; it’s about directing it. That means establishing guidelines that reflect each organization’s mission and values, investing in internal education on how AI systems work, and implementing oversight processes for evaluating both benefits and harms. Policies should clarify not just what AI can be used for, but what it should not be used for. They should identify decision points where human review is mandatory, outline data privacy expectations, and provide procedures for redress if harm occurs.
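Making that concrete can help. One lightweight approach is to encode the policy as a machine-readable checklist that staff or internal tools can consult before a new AI use case goes live. The sketch below is illustrative only: the use-case categories, restricted data classes, and rules are hypothetical placeholders, not recommendations from the surveys cited above, and any real policy would need to reflect an organization’s own mission, legal obligations, and review processes.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a hypothetical way a nonprofit might encode
# the policy elements described above (permitted vs. prohibited uses,
# mandatory human review, data-privacy limits) as a simple checklist.
# All category names and rules here are invented for the example.

@dataclass
class AIUsePolicy:
    permitted_uses: set[str] = field(default_factory=lambda: {
        "budgeting", "payment_automation", "draft_communications",
    })
    prohibited_uses: set[str] = field(default_factory=lambda: {
        "eligibility_decisions",   # e.g., no automated denial of services
        "beneficiary_profiling",
    })
    requires_human_review: set[str] = field(default_factory=lambda: {
        "grant_scoring", "impact_reporting",
    })
    # Data categories that must never be sent to third-party AI tools.
    restricted_data: set[str] = field(default_factory=lambda: {
        "beneficiary_pii", "health_records", "immigration_status",
    })

    def evaluate(self, use_case: str, data_categories: set[str]) -> str:
        """Return a policy decision for a proposed AI use case."""
        if use_case in self.prohibited_uses:
            return "blocked: prohibited use"
        if data_categories & self.restricted_data:
            return "blocked: restricted data involved"
        if use_case in self.requires_human_review:
            return "allowed with mandatory human review"
        if use_case in self.permitted_uses:
            return "allowed"
        return "escalate: not covered by policy, needs review"


if __name__ == "__main__":
    policy = AIUsePolicy()
    print(policy.evaluate("budgeting", set()))
    print(policy.evaluate("grant_scoring", {"program_metrics"}))
    print(policy.evaluate("draft_communications", {"beneficiary_pii"}))
```

Even a small structure like this forces the conversations the policy is meant to capture: which uses are off-limits, which data never leaves the organization, and where a human must stay in the loop.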

Nonprofits have a narrow window to lead by example. They can show that it’s possible to use AI not just efficiently, but ethically.
