Tips for AI Governance

Experts share top tips for AI governance for mission-driven organisations


As generative AI becomes more widely accepted, many leaders in the nonprofit and charity sector are no longer asking whether they should use it, but rather where else they can use it. After all, as a transformative technology that can be employed inexpensively, artificial intelligence has enormous appeal to leaders facing limits on budget as well as staff and volunteer time.

But identifying potential uses is just one element of responsible AI governance. Ethical questions remain: how does AI align with the mission and values of your organisation? How can leaders ensure the technology is truly being used for the greater good?

To find answers, we asked experts from multiple fields to share their thoughts on how mission-driven organisations can responsibly govern the use of artificial intelligence.

Expert thoughts on AI in mission-driven organisations

AI absolutely can advance the organisation’s mission, but how? Consider these musings and tips for bringing AI into your organisation mindfully.

Understand where and how AI will be used. “It’s about understanding the use cases in your organisation and how are you going to have that oversight.” — Nonie Dalton, Vice President of Product Management, Diligent

Find inspiration in how other organisations strategically incorporate AI into their work, including fundraising, communication, customer service and data analysis.

AI requires supervision. “Think of AI as a junior assistant. You would never let someone only two years out of college loose on your organisation without oversight.” — Ari Ioannides, board member at Park City Institute and founder of BoardDocs

Generative AI presents a risk of repeating the same errors leaders made in the early days of social media: either ignoring the opportunity or leaving it in the hands of less-seasoned staff. Precedents for organisation use can be established at a grassroots level, but always with C-suite awareness and approval.

AI’s potential and ethical risks cross industries, including healthcare… “Artificial intelligence in healthcare marks a paradigm shift, bringing forth groundbreaking possibilities in patient care and disease management. Its potential to transform healthcare into a proactive, patient-focused system is immense. However, these advancements carry profound ethical implications, necessitating thoughtful consideration and guidance. Our objective as healthcare professionals is not to restrict AI’s revolutionary potential but to steer its course responsibly, in a way that honors and upholds the core values of medical practice. As AI becomes increasingly integral to healthcare, it is imperative to ensure its use aligns with Hippocratic principles and responds adaptively to the evolving landscape of patient care.” — Hon. Prof. Tom Chittenden, Ph.D., Digital Environment Research Institute, Queen Mary University of London

…and education. “AI has the potential to drive learning, instruction and administrative change in education. Education has been rich in data for decades; the power of AI can transform how students learn with adaptive instruction, offer insights to teachers about their students’ learning and understanding, and enable them to make more accurate decisions for programmatic improvement, funding and connecting families to additional services servicing the whole child. As with any technology, we must protect our students’, staff’s and teachers’ personal information. A solid governance framework built around three pillars — data, IT and privacy and security — proves paramount with the use of AI in education. Leaders in schools, LEAs and SEAs must be the voice to craft and coordinate the design and implementation of sound practices and policies for all stakeholders in education, from service providers to teachers to technology staff.” — Jill Abbott, CEO, Abbott Advisor Group

Issues around generative AI are rife in every industry, but there are enough parallels to see trends in both opportunities and cautionary tales. No matter your organisation’s area of service, there are best practices for approaching the technology wisely.

Start with an AI framework. “If your board/leadership does not have an AI framework in place, they should. This should include strategy as well as policies. Get some training, get some help. Take a look at the NIST (National Institute of Standards and Technology) framework — it has a seven-page playbook with an AI framework you can use as a starting point.” — Richard Barber, CEO and board director, Mind Tech Group

A solid AI framework will provide guidance on how to implement the opportunities and avoid the risks of the technology, and you won’t need to start from scratch. Another framework to consider is Diligent’s 7 Steps to AI Governance for Mission-Driven Organisations, which offers a shortcut to reasonable adoption of generative technology.

The time to establish policies is now. “Judgment is key. The governance team needs to have a process in place and a policy for how to use AI. You want to make sure that there’s a policy in place and there is a procedure for how to treat these tools, because it’s not intuitive.” — Dominique Shelton Leipzig, Partner, Mayer Brown

“A lot of times I think we’re scared to put a policy in place because we don’t want to get it wrong. But having a policy in place that you can revise provides guidance, and that’s better than a vacuum.” — Matt Miller, educational speaker and author

Today, new technology moves from introduction to adoption to saturation at remarkable speed, so many boards are already familiar with the need to establish new policies for these tools. Ensure your board's policy process is efficient and reflects current and future needs. A robust board management platform can help simplify the policy development and approval process.

Expert help is available. “Making sound, ethical decisions on artificial intelligence for your organisation is imperative. Our certification program helps board members and executives like you navigate the ethical and technological issues inherent in AI, so you can steer your organisation toward sustainable, trustworthy practices.” — Dottie Schindlinger, Executive Director at Diligent Institute and founding team member of BoardEffect

As with any new technology, engaging a partner can be the best course of action for an organisation. Look for a partner that has extensive experience with the needs of mission-driven organisations and offers secure software that supports collaboration, like Diligent.

Embrace the possibilities. “Because of artificial intelligence we are living in a science fiction book right now. We have the opportunity to define what that science fiction book becomes … It’s a time when we can’t be passive.” — Sal Khan, educator and founder, Khan Academy

So, when is the right time to start building your organisation’s governance for generative AI?

Today.


The benefits of proactive AI governance for nonprofits

It’s likely AI has already touched some aspect of your organisation’s work. But it’s not too late to develop guidance around the technology; in fact, it’s imperative. The use of generative AI has significant ethical implications, from introducing risk of data misuse to potentially running afoul of global privacy regulations.

By establishing a framework and thoughtful policies around use now, nonprofit leaders can ensure AI aligns with their missions, and their organisations can maintain the trust of members, donors, volunteers, staff and other stakeholders.

We at Diligent have been exploring the possibilities of AI from day one and are excited for the opportunities it presents mission-driven organisations. We’ve developed BoardEffect with the needs of volunteer boards in mind as they collaborate strategically and thoughtfully around this new technology.

Jennifer Rose Hale

Jennifer Rose Hale has over 20 years' experience with digital and employee communications in for- and nonprofit environments. Her writing and client areas of expertise include education, finance, science and technology.
