Humans first, automation second – Not-for-profits and the AI challenge
Catherine Arrow, Global Public Relations Practitioner of the Year 2025 and Executive Director of the PR Knowledge Hub.
This guest blog was written by Catherine Arrow, Global Public Relations Practitioner of the Year 2025 and Executive Director of the PR Knowledge Hub. Catherine has been researching and teaching on AI, ethics and communication for many years, and brings deep insight into what purpose-driven organisations need to understand as AI becomes embedded in our work.
Across the world, most not-for-profits and NGOs are stretched to the limit. Geopolitical pressures, economic constraints and the rollback of DEI commitments have added weight to organisations already carrying more than their share. People are working extraordinarily hard to improve the lives of others, yet their resources shrink as demands grow. It is little wonder that many now look to artificial intelligence for help.
The appeal is obvious. AI can take on some of the repetitive, time-hungry work that consumes the day. It can produce a first draft of copy, summarise long documents, reshape content for different channels and offer a degree of personalisation previously out of reach. Used well, these systems create the headspace communicators need to focus on strategy, relationships and the people they serve.
AI also brings capabilities that once required big budgets. Smaller organisations can run rapid environmental scans, track sentiment, translate material, check accessibility and explore scenario options without commissioning large external projects. They can understand what is happening around them more quickly, listen to their communities in more detail and respond earlier and more thoughtfully. For many, this feels like the long-awaited opportunity to move beyond constant crisis mode.
Yet alongside the promise sit considerable risks, and organisations must confront them before they adopt AI tools. As always, the starting point is purpose, and that means understanding what you are using the system for, what problem it solves and who will genuinely be assisted rather than harmed. Clear thinking at the outset prevents far greater ethical difficulties later.
From there, organisations need to draw their lines in the digital sand. What will you never use AI for? Where is human judgement non-negotiable? Without explicit boundaries, it becomes all too easy to drift into uses that undermine trust. We are already seeing examples: personal messages attributed to leaders that were never written by them, synthetic data used without consent and automated decisions made with no regard for context or consequence.
The practical risks are equally pressing. Privacy concerns are significant for organisations working with vulnerable communities. Surveillance features built into many commercial systems raise questions about data sovereignty and long-term safety.
Bias persists across models, creating the risk of reinforcing inequities rather than reducing them. The digital chasm – the divide between those who can access and influence AI and those who cannot – widens with every unexamined adoption. These early questions form the ground on which responsible AI use must stand.
Once the ethical foundations are in place, attention can turn to governance. Governance does not need to be complicated, but it does need to be intentional. At minimum, organisations should have an AI use policy outlining acceptable and unacceptable uses, data handling rules, transparency expectations and decision-making accountability. This policy must be anchored in your existing values, privacy commitments and safeguarding responsibilities. AI does not arrive quietly or slowly – it arrives with a bang in the midst of our culture, relationships and obligations.
AI should be treated as a standing governance topic. That means regular reviews of tools in use, clear oversight from leadership or the board, risk registers that explicitly include AI-related risks, and routes for staff and communities to raise concerns. For organisations supporting highly vulnerable people, independent ethical oversight may be necessary.
Once good governance is in place, we can consider how AI changes our role. The tactical, task-based aspects of our work – producing copy, checking tone, resizing content and formatting updates – will be increasingly automated. In many ways they already are. Our value will be found in judgement, ethical application, relationship building, facilitation and sense-making. The why and for whom of communication will matter far more than the mechanical how. Every AI-assisted asset will still require competent oversight before it can be used.
Skills and capabilities will shift accordingly. Data literacy will – finally, I hope – come to the fore, as will the ability to question outputs and interrogate claims. Critical thinking becomes central, and the courage to say no when an automated option conflicts with organisational values becomes essential. Communicators will also need to act as translators between technical and non-technical worlds: explaining AI-enabled decisions, surfacing risks and advocating for stakeholders whose voices might otherwise be absent.
A key capability will be using AI while maintaining authenticity, trust and organisational values. Authenticity is not about refusing the tools; it is about being honest about how you work and remaining true to your purpose. Practitioners can use AI to support their work while ensuring outputs and engagements reflect genuine intent, real relationships and real accountability. That may mean using AI for early drafts or background research, then taking time to shape, fact-check and personalise the final version.
Trust is maintained when people feel respected rather than processed. That is difficult at a time when AI developers learn from everything we do, often without the transparency that we all deserve. Passing off AI-generated quotes as heartfelt statements corrodes trust. So does the use of synthetic images that misrepresent communities. Leaving biased outputs untouched is worse still. If content does not sound like you or misrepresents the people you serve, put it back on the shelf.
For many not-for-profit communicators – often a team of one – the hardest question is where to start. My advice is to begin small, safe and purposeful. Choose one or two low-risk, high-benefit tasks. Often the best starting point is the job that consumes the most time and gives the least satisfaction. Environmental monitoring is another practical entry point: using AI to scan for emerging issues or developing risks. The ability to scan widely and purposefully is one of AI’s greatest strengths.
Above all, begin with learning rather than technology. Make time to understand what AI is, what it does and where it fails. Set simple rules: anything produced by AI is checked and owned by a human, and nothing confidential or personally identifiable is entered into open tools. Once people understand how the systems work and where the boundaries lie, organisations can expand into more complex uses without compromising trust.
As someone who has worked extensively with both the not-for-profit sector and AI, it has long been clear to me that artificial intelligence can be of real benefit. It can support organisations in spaces where resources of every kind are limited and where the demand for care, advocacy and service consistently outstrips capacity. But the organisations that will use AI well are those that see it as a tool to deepen their mission, not just an easy shortcut to more content. They will use automation to create time for listening, relationship building and thoughtful advocacy. They will be transparent with their communities about how and why they use these systems. Most importantly, they will invest in capability – helping their people understand AI, question it and work alongside it, rather than switching on a product, crossing their fingers and hoping for the best.
Those who fall behind will, I suspect, be found at both extremes. Some will reject AI entirely and miss out on efficiencies that could sustain their work. Others will embrace it uncritically, allowing convenience to erode trust through careless use. The real differentiator will not be the NFP or NGO with the most sophisticated tools. It will be those who keep their values steady while adapting – using AI to extend their humanity rather than replace it.