Editorial

Who’s Responsible for Responsible AI?

By Chris McClean
Responsible AI frameworks will inevitably evolve, but you can start building the foundation today that sets your business — and employees — up for success.

Generative AI has spurred a democratization of sorts. Now anyone with access to the internet can experiment with AI, discover ways it can improve their lives and bring these new ideas to the workplace. The creativity and innovation could mean great things for businesses looking to drive growth and improve efficiency — but they also introduce substantial risks. Concerns about AI-related data breaches, intellectual property lawsuits and increased exposure to liability are no longer hypothetical.

A 2023 survey by Avanade of 800 business and tech leaders in eight countries revealed that over 85% of executives believe AI will increase their revenue growth. However, the same survey showed that 70% of executives believe their organization is vulnerable to reputational risk due to its current and planned use of AI.

The implications of AI extend beyond the possibility of reputational damage. Corporate leaders are paying far more attention to the wide range of potential ethical harms, operational losses and regulatory actions that come with the territory. According to Avanade’s survey, 77% of C-suite executives agree that safer and more responsible AI practices are among their top priorities for the next year. But these same executives are also prioritizing business outcomes such as speed and cost reduction, knowledge-sharing and innovation, and solving more complex business issues. That raises the question: who should be responsible for responsible AI?

The Finance Department Model for Responsible AI

Every company will have its own unique answer to this question, based on its use of AI, its corporate culture and its existing governance structures. But as a helpful model, leaders asking “What does good Responsible AI look like?” can look to the finance department for inspiration. Every company has a central authority for finance, which oversees the methodologies, tools, processes and specialized experts that run various financial processes throughout the organization. At the same time, most employees in a given corporation have a role to play in finance, whether they have distinct growth and margin goals, procurement responsibilities or time and expense reporting requirements.

Responsible AI functions (e.g., Responsible Tech, Responsible Innovation) should be similarly structured, with a central authority in charge of methodology, active subject matter experts and distributed responsibility among all employees. To carry the comparison further, companies might also opt to outsource or contract out for Responsible AI expertise, and they might bring in external auditors to check their work, just like with finance.

No organization is starting from scratch. Every organization possesses the building blocks to create an accountability structure that can work quickly to mitigate short-term risks, and eventually build out to a comprehensive and robust discipline that aligns to corporate objectives, values and risk tolerance. Having an end goal in mind (e.g., emulating the finance department) can guide decisions as processes grow and mature.


Tips From the Leading Edge

Leaders in charge of building Responsible AI programs can also draw on the following lessons from organizations that are already further down the path of implementation.

Harmonize Top-Down and Bottom-Up Efforts

Avanade's insights on simplifying change management indicate that effective change doesn't strictly follow a top-down or bottom-up trajectory. Instead, it encompasses three key components: the top-down “Destination,” the bottom-up “Starting Point” and “The Journey” that intertwines these two perspectives. 

This approach can be especially helpful for disciplines like Responsible AI, where most companies have an appetite for agility and experimentation on the front lines, with a desire to enforce controls and oversight further down the road. 

Harness Organizational Strengths

According to Avanade’s survey, 50% of business leaders see the supervision of AI practices as a C-suite-level responsibility, while 30% view it as a department-head-level task and 15% consider it a functional lead’s duty.

The reality is that it will likely fall on the shoulders of all three levels to some extent, with C-level oversight, department-level expertise and front-line engagement. Most companies can also leverage existing tools, techniques and processes by either updating them to include Responsible AI components (e.g., expanding quality assessments to include fairness testing) or using another process as a model for Responsible AI (e.g., using model risk management as a template for Responsible AI governance).


Start With Business Alignment and Risk Appetite

In the short time generative AI tools have been available to a wide audience, they've shown the potential to dramatically change nearly every aspect of business in nearly every industry. Business and tech leaders are currently struggling to get a handle on all the new ideas, platforms and applications available, which means they’re not as engaged in formal governance or oversight as they should be.

Knowing that formal structures may take some time to come together, start by prioritizing projects that support competitive advantages while avoiding projects that risk brand damage or critical operations. From there, a more rigorous framework of Responsible AI and good AI governance can come together.

A Framework for Success

Regardless of how the Responsible AI discipline takes shape, the ultimate goal is to ensure that employees make business and technology choices that reflect the organization's values. Especially with democratized technologies like generative AI, every member of the organization should share in that ambition, understand that they have a role to play in the effort’s success and know how they can best contribute to the bigger picture. 

Encouragingly, 87% of our survey respondents affirm that their organizations are actively enabling employees to use generative AI technologies like OpenAI’s ChatGPT responsibly. But we are all still learning, and these efforts toward responsibility will have to continue growing and evolving to address new technologies as they emerge. That’s why a multi-layered approach is so important for leveraging AI’s potential while mitigating the risks. Every organization will need strategic direction, subject matter expertise and a pervasive sense of responsibility to get this right.


About the Author
Chris McClean

As global lead for digital ethics at Avanade, Chris McClean is responsible for driving the company’s digital ethics fluency and internal change and for advising clients on their digital ethics journey. Prior to Avanade, Chris spent 12 years at Forrester Research, leading the company’s analysis and advisory for risk management, compliance, corporate values and ethics.
