Committees don’t always have great reputations. But increasingly, organizations are finding value in AI Councils.
How many of these have you heard?
- A camel is a horse designed by a committee.
- Committees kill unconventional ideas for a living.
- A committee is the only form of life with a hundred bellies and no brain.
Committees and councils have been the butt of jokes forever. But if your company is thinking about stepping up its game with an AI Council, there are ways to keep it from becoming another joke. Here’s advice from people who have been there.
Table of Contents
- What Is an AI Council?
- What Are the Benefits of an AI Council?
- Who Should Be on an AI Council?
- AI Council Best Practices
What Is an AI Council?
An AI Council brings together people from across the company to share knowledge and concerns about AI, correct misconceptions and help the company set up its AI initiatives safely and ethically.
“An AI council enables a company to step back and define a set of first principles and guidelines to align its AI transformation across the business,” said Mitch Berk, chairman of Omnissa's AI Council. “It also provides a team that can review the countless AI initiatives springing up across the business and ensure they are aligned with these principles, introduce only appropriate risk and are not redundant with other initiatives.”
With representatives on the council from legal, compliance, product management and data science, Berk added, they get multiple perspectives, which brings with it the ability to anticipate potential issues and adjust accordingly.
Related Article: How to Build an Ethical AI Policy That Actually Works
What Are the Benefits of an AI Council?
An AI Council gives the company some control over AI initiatives, rather than letting them proliferate as a series of stealth projects.
“If you have an institution, it has to come out and take a stance on AI,” said Brian Arnold, editor of the Humane Technologist and former chair of the AI Council at National University, a San Diego institution where he was a professor of educational technology. “When people aren’t sure, they start sneaking their AI. Once they start sneaking their AI, the institution doesn’t have any accurate understanding of the depth and breadth of the tool’s use.”
Moreover, Arnold added, AI without the governance of an AI Council could put an organization at legal risk. Depending on what that institution does, a user could unintentionally divulge proprietary information, trade secrets or personally identifiable information because they weren’t aware such data shouldn’t be entered into an open large language model (LLM).
“Even employees with the best intentions may inadvertently introduce risk to the business by not fully vetting the AI tools they leverage, whether those tools are provided by the business or used by employees individually,” Berk agreed.
Who Should Be on an AI Council?
Like any committee, an AI Council is only as good as the people you put on it.
“You need at least one tech expert,” recommended Russ Wilcox, CEO of Artifexai, chair of the AI Council for nonprofit Lives Amplified and chair of the policy committee for the American Society for AI. “Not to influence decisions but to act like a therapist and try to get out of the stakeholders what their thoughts are and what their misconceptions are.” Other possibilities include a member of the C-suite who might be interested but doesn’t have the resources to activate their network, he added.
At the same time, it’s important to avoid people with special interests who could dominate the committee, Wilcox cautioned. “They start poisoning the narrative based on internal bias. One cofounder can mess the whole thing up.”
Berk added, “Recruit cross-functional members by having leaders from various departments nominate representatives, ensuring diverse perspectives and expertise.”
It’s also possible for an AI Council to be too popular. When National first created its AI Council, it had more than 50 people, because everyone wanted to associate themselves with it, according to Arnold. In addition, he said, some people chosen for their clout weren’t necessarily the best choice. “The reality is, those people are extremely busy and this isn’t part of their job.” The council ended up swapping out some members of leadership with lower-ranking people who were more vested and more available.
“Start with the agitators, the champions, the ones you try to shut down because they’re constantly going on about this thing,” said Arnold. “Getting the right players, not the usual players, is key.”
Which brings up another point: membership on the AI Council needs to be recognized as part of the person’s job, with time devoted to it. “If the institution wants something like this to work, you have to give it resources,” noted Arnold. “It’s not going to be effective in people's ‘spare time.’ The more efficient they are, the less spare time they have. This isn’t something people can get to as a side thing. It needs to be embedded in their expectations.”
Related Article: AI Governance Isn’t Slowing You Down — It’s How You Win
AI Council Best Practices
Choose a Goal
The first step, even before choosing the members, should be to determine the AI Council’s goal, Wilcox said, recommending what he called “backwards design.”
“Never do an AI project just for the sake of AI,” he advised. “What are you trying to accomplish? If I were a teacher trying to teach my students, I would write the exam before I teach those students so I can scaffold what I need to teach them so they have the greatest amount of success.”
Similarly, knowing the organization’s AI goals — whether it’s aligning competing initiatives, developing a “moonshot” project to raise $100 million in revenue or automating the monotony in a department — helps determine the council’s focus and who should be members, Wilcox said. “Once you have the roadmap, you can put the right people in the room.”
Start From the Grassroots
While the AI Council itself might be an executive initiative, when it evaluates projects, it should start from the grassroots, experts advised.
“Top-down approaches tend to lead to failure,” Wilcox said. “If I’m a worker and someone says, ‘We’re going to use this AI tool,’ there’s lots of risk there. Flip it and start to understand: Are there people currently using AI? How are they using it? Because that’s going to give you an idea of how to implement it.”
Arnold agreed, adding, “Support and curate grassroots efforts and build from those, rather than starting top-down. If you start top-down, people with boots on the ground see it as yet another initiative that, if they ignore it long enough, it’ll go away. And they’re not wrong.”
As with any project, define an actionable outcome, Wilcox said. “What’s the first thing that could legitimize this and empower people to get excited?” he said. “What is the first step of creating a proof-of-concept that could scale? Then you measure its effectiveness. Treat it like a startup.”
Stay Adaptable
Finally, an AI Council isn’t a set-it-and-forget-it exercise.
“The technology, capabilities and risks are moving so quickly that rules and decisions will have to be revisited and reinterpreted frequently,” Berk said. “If a team is clear on their principles of what is important and why, they can quickly apply them to the rapid changes in technology and adapt guidelines accordingly. AI is moving so fast that whatever policies or frameworks you build will need to evolve. Be prepared to iterate. Don’t think of the council as a one-and-done process. Think of it as a living system that evolves alongside technology.”
Like AI itself, an AI Council isn’t magical, Arnold said. “It will reflect the strengths and weaknesses of the institution,” he said. “Whatever flaws or problems you have will be mirrored in this structure.”