Navigating AI risks & governance: What your nonprofit board should know
Individuals in every workplace are finding ways to save time with artificial intelligence (AI) tools. But when these uses are not guided by organizational policy, nonprofits face risk.
This is why every nonprofit board should understand the issues around AI, develop acceptable-use policies and investigate ways to use AI responsibly and effectively in fulfilling its mission.
This was the message of a recent webinar, “Navigating AI Risks & Governance: What your nonprofit board should know.” In it, experts in nonprofit AI and BoardEffect leaders discuss what your board must understand now about AI — as well as the many benefits nonprofits of every size and budget can realize with the technology.
Watch the full 45-minute discussion here, and read on for some of the highlights.
AI: What are the basics boards should know?
First, some background on how AI works. Artificial intelligence tools — from home devices like Alexa and Nest to web tools such as ChatGPT, Microsoft Copilot and Apple Intelligence — take existing data and generate answers to user questions and prompts. The results can seem magical, but these tools work by quickly processing existing data, and the provenance of that data can raise questions and concerns about AI applications.
Darian Rodriguez Heyman, Founder and CEO of Helping People Help and author of the upcoming book AI for Nonprofits, observes, “The biggest thing that groups need to know is that when you enter a prompt into ChatGPT, whatever you’re putting into that prompt, it becomes part of the training data that the next iterations of this AI technology are learning from.”
However, AI isn’t limited to public tools. Private options, such as the AI tools now built into BoardEffect, can keep data safe.
How are nonprofits using AI now?
Most nonprofits have yet to realize the value AI can offer today. Whether out of caution sparked by AI horror stories or simply because they have not yet seen the benefit, many nonprofit boards have neither adopted AI use policies nor found ways to incorporate AI into the work of the mission.
Nonie Dalton, Vice President of Product Management at Diligent, observes that this is a potentially “naive approach to take, because I guarantee you that there’s somebody, that champion within the organization, who is already using AI.”
Individuals within these organizations, Heyman adds, are “coloring outside the lines because there are no lines drawn yet. And very few nonprofits have actually taken the time to articulate a policy, acceptable-use guidelines and other sorts of parameters around their strategic use of AI. (Meanwhile,) individuals who are interested in dabbling with this technology are playing around with it to great effect, and that is absolutely helping nonprofits.”
Nor is the lack of AI adoption a matter of big versus small, in staffing or in budget. Heyman notes that it’s more about risk tolerance and willingness to experiment.
What are the risks of AI?
The use of AI is not without its risks, and every board should understand them before establishing policy.
- Exposing sensitive data. Users may not realize the risk of uploading donor data, for example, to a public AI tool to generate letters. Once entered, that information can become part of the tool’s training data.
- Using AI-generated material without human oversight. AI requires oversight. It is only as good as the data put into it, and even then, it is limited to its core function. Heyman explains: “Today we’ve got what are called hallucinations; the bots don’t know the difference between what’s true and what’s not.” Heyman notes that tools like ChatGPT are transformers, “which means it’s literally just predicting the next word that will make you happy. So it’s really critical that we keep an eye on the results.”
- Using AI without an organizational policy. By establishing policy, Heyman notes, boards reduce or eliminate the danger of a team member, for example, uploading sensitive data and then asking a question because they “simply don’t know better.”
How can your nonprofit use AI now?
AI has significant applications for nonprofits today. Dottie Schindlinger, Executive Director of the Diligent Institute, describes a research project she conducted for a local nonprofit and how AI saved her time: “I did individual interviews with each director about their board service. And I ended up with about 110 pages of notes from all these interviews conducted over the course of a couple of months. I wanted to pick out the key themes but do it in a way that was anonymized.
“My first instinct was to use ChatGPT because it allowed me to very quickly synthesize and summarize the notes, but also to ask the AI engine what it thought were the most pertinent and relevant themes. So I wasn’t biasing the answers, and what I got back was really eye-opening and interesting.”
Heyman notes that AI is a “general enabling technology, akin to the steam engine or the internet as a whole. It can be helpful with all things.” He lists three buckets to consider:
- Automating rote functions. From writing donor acknowledgment letters to crafting board meeting minutes, nonprofits can realize significant staff and volunteer time savings.
- Data analysis. AI can take a “mountain of data,” as Heyman notes, and dive into the trends and insights quickly, with limited human intervention.
- Predictive analytics. In the case of fundraising, for example, Heyman notes that AI can tease out “the five donors you should be following up with. Here’s a pre-written email for each of them that speaks to asking about their kids and highlighting our accomplishments in their particular areas of interest, et cetera.”
How can the board move forward with the strategic use of AI?
It may seem ironic, but Heyman notes that using an AI tool to create the first draft of an acceptable-use policy can itself be a good first step: it demonstrates what AI does best, taking information from multiple sources and assembling it into a draft that gives the board a starting point for discussion.
With BoardEffect, Dalton explains, Diligent has taken a security-first approach to protecting users’ data. One of the first features users can explore is AI Board Book Summarization, which generates an executive summary of the meeting book materials, allowing administrators to skip straight to the review stage of book preparation. AI Meeting Minutes and Actions generates relevant, high-quality meeting minutes from board materials, typed notes and transcripts.
For boards that are ready to take the next step, the Diligent Institute offers an AI Ethics & Board Oversight Certification, which provides additional insight into the intersection of AI and board responsibility.
Want more tips and insights from Diligent’s experts? Watch the full webinar.