AI in nonprofits

 

Every month we run an interactive workshop for our Nonprofit Datafolk Club. It’s an opportunity for data folk in nonprofits to come together and discuss a data-related issue. 

In November, we covered a topic that has been hotly discussed in every corner of the data world (and beyond): artificial intelligence. 

As is our usual style at our Datafolk Club workshops, we asked participants to discuss three broad questions about their thoughts and experience of AI in nonprofits. 

What do you currently use, plan to use or would like to use AI for in your organisation?  

The extent of AI use in people’s organisations was extremely varied. Some people hadn’t used it for work at all yet, due to security concerns or technical barriers (more on this below), although they might have explored AI in a personal capacity. Many said their organisations were already using AI in different ways, such as:

  • Summarising – A common use of AI was to summarise bodies of text that would otherwise take a long time to read and understand – such as policies or reports – using large language models (LLMs) such as ChatGPT, Claude, or Microsoft Copilot (previously Bing Chat). Some people used machine learning tools to analyse open-ended responses to questionnaires, for example, topic modelling of a public consultation about the use of open spaces (a rough sketch of what that can look like follows this list). 

  • Content creation – Another popular use of AI was to generate marketing content. People said it helped them to get started with a piece of work or get over writer’s block, and gave suggestions and direction to those not used to writing copy. It could also help to ensure that content was written in accessible language. Built-in AI functionality in customer relationship management (CRM) and fundraising software was being used by some people to predict what kind of marketing their audience would respond best to, based on behaviour patterns. However, concerns about personal data still held some organisations back from using AI to personalise communications with their audience, even though some felt this would be valuable.  

  • Technical tasks – AI was also used to support technical tasks, for example to help in writing Excel formulae or programmatic queries (such as in SQL, R, or Salesforce’s Apex). People reported that using AI as a tool in this way helped make their jobs easier. Particularly in the nonprofit sector, where staff workloads are high and roles are often multifaceted, the general feeling was that anything that makes work more efficient is welcome. Along these lines, automation was another use of AI that people would like to implement (although no examples of it currently in practice were given). 
    Some people were thinking about using AI for data analysis. For example, the paid version of ChatGPT includes a data analysis component, which could make complex data analysis accessible to more people. 

  • Other – Other uses mentioned included transcribing audio, identifying gaps in business planning, automating administrative tasks, and using chatbots to improve website user experience.  
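
For readers curious about what the topic-modelling example above can look like in practice, here is a minimal, hypothetical sketch using Python and scikit-learn. The file name, column name, and choice of five topics are illustrative assumptions, not details from any participant’s project.

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Load the free-text answers from a (hypothetical) consultation export
responses = pd.read_csv("consultation_responses.csv")["response_text"].dropna()

# Turn the text into a bag-of-words matrix, dropping common English stop words
vectorizer = CountVectorizer(stop_words="english", max_df=0.95, min_df=2)
doc_term_matrix = vectorizer.fit_transform(responses)

# Fit a topic model with an assumed five topics
lda = LatentDirichletAllocation(n_components=5, random_state=0)
lda.fit(doc_term_matrix)

# Print the ten most characteristic words for each topic
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[:-11:-1]]
    print(f"Topic {topic_idx + 1}: {', '.join(top_terms)}")

Even a rough output like this can give an analyst a starting point for grouping hundreds of open-ended responses into themes, which a human can then review and refine.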

Despite this wide range of current and planned use of AI, there was still a strong feeling of caution amongst attendees. It was noted that there are different attitudes towards AI in the general public and within organisations themselves.  

What are your main concerns about using AI in your organisation?

  • Ethical concerns – Ethical concerns were one of the main barriers preventing people from using AI in their organisations. 
    Most of the discussion centred around LLMs such as ChatGPT. A lack of transparency, and concerns about what happens to data when it is put into such tools, stopped many people from using them. People felt this inability to fully understand how their data was processed and stored by AI tools made it difficult to develop appropriate organisational policies for using them. There was a general lack of trust in the companies who developed and distributed AI tools, including a suspicion that companies providing these services may be storing input data for future use. 
    People also raised concerns about the environmental impacts of AI usage, and whether these could be justified. 

  • Reliability – There was also a lack of trust in the reliability of outputs. People didn’t like not being able to see the steps an LLM had taken to generate an output (as compared with a human using a recorded method), so they felt they couldn’t adequately check and validate the results. People were also aware that popular LLM tools were known for giving biased outputs that reflect the biased data they are trained on. There were also concerns around intellectual property, ‘deep fakes’, and harmful content that may have been fed into LLMs. People felt these issues needed to be addressed before they would feel comfortable using these tools, particularly for any kind of decision-making. Related to this were concerns about who would be responsible and accountable if decisions were made by an algorithm. 
    Several people felt it was difficult to assess the risks associated with AI, given their limited understanding of how it worked. Without understanding it, they couldn’t be fully transparent about how they were using it, or guarantee that errors weren’t being made in its implementation and use.  

  • Unrealistic expectations – Some people were worried that expectations around levels of productivity and performance may increase as AI tools become more widely adopted, and that upskilling may not keep pace with the rate of change. This could be stressful for those who find it difficult to adapt to the use of this technology.  
    Finally, in a sector with limited resources, where expenditure is often heavily scrutinised by donors and funders, there were questions about how much value for money AI could offer nonprofit organisations. While big tech companies with huge amounts of data and resources could perhaps get a lot of value out of AI, was this also the case for nonprofit organisations, particularly smaller ones? 

What barriers do you face in using AI in your organisation?  

  • Knowledge and skills – Many people felt they didn’t understand enough about AI, and a lack of digital/IT skills in general could be a barrier. People found it difficult to keep abreast of all the latest developments in such a rapidly evolving area. The inability of nonprofit organisations to compete with the private sector when trying to hire people with the right skills was also raised. 

  • Resources – Cost was a common barrier, with a shortage of funds available to pay for off-the-shelf or bespoke products, or to source the necessary skills. Lack of capacity to invest time in relevant research and knowledge-building was also an issue. 

  • Governance – Some people reported that their organisation lacked a robust AI policy, and that the absence of guidance about acceptable use held them back from being able to adopt AI tools. 

  • Culture – The complex and controversial nature of AI was another common barrier. Differing attitudes to the adoption of AI meant it was difficult to get others on board or secure leadership buy-in. There was an underlying fear (for example, “it will take my job”). One person from a local government organisation said they were typically risk-averse in these areas, so were waiting for AI integration in ‘trusted products’. On the other hand, others found ‘over-hype’ unhelpful when trying to meaningfully engage with AI, sometimes pushing organisations towards using it where it isn’t really needed. They also found that, amidst the whirlwind of excitement, there was little practical guidance available to help people get started. 

  • Data – People widely recognised that you need good data – and lots of it – to be able to do anything useful with AI. They acknowledged that poor quality data would result in the ‘garbage in, garbage out’ effect and render AI useless. Some said their work was still primarily paper-based and would need a digital transition before being able to make the most of AI. 

  • Service requirements – There were a few cases where people noted that a service needs to be carried out by a human – for example, so that they can build relationships and make immediate judgement calls – particularly where organisations work with vulnerable people who need personalised support, or are engaging with research participants. 

What if ChatGPT wrote this blogpost? 

One of our Nonprofit Datafolk Club members suggested we use AI to write this blogpost. An excellent idea! I gave ChatGPT the notes made at the workshop and asked it to write a blogpost. We’ve posted the AI-generated text on our LinkedIn feed. Why not head over to the post now and let us know what you think? Do you have observations on the copy the AI has generated? How do you see this technology being used in nonprofits in the future?  

(Full disclosure on prompts: the AI’s first attempt was pretty boring, so I asked it to do it again but more like a story. It was still splitting the narrative into what each group said individually, whereas I wanted a more general picture, so I told it not to split the notes into groups. What was generated is what you see on our LinkedIn post.) 

Join the Nonprofit Datafolk Club 

If you found this resource interesting, or if you are curious about nonprofit data more generally, please come and join us at our next workshop. Each month has a different topic, and you can find the details on our events page. Previous topics have included: