Good AI governance supports tangible business objectives through four pillars. In this series, we're scoping out each pillar in turn to establish a sketch of what good AI governance looks like:
The four pillars:
- AI System Inventory
- AI Use Case Inventory
- AI Regulatory Crosswalk
- AI Governance Roadmap
We’ve already explored the first three pillars: how to approach AI system and use case inventories, as well as how to develop an AI regulatory crosswalk based on them. Today, we’ll take a deep dive into the AI governance roadmap that ties them all together.
Assess Current State and Identify Gaps
At this point, you’ve documented which AI systems are in use and for which business processes, and cross-referenced these with the laws and regulations your organization is subject to based on its industry and jurisdiction. The resulting AI regulatory crosswalk details the specific legal and regulatory requirements your organization must meet to effectively govern your uses of AI.
Building an effective roadmap will require you first to determine:
- Which requirements you are currently meeting
- Which you are only partially meeting
- Which you have plans to meet
- Which you are not meeting
This need not be complicated: a straightforward Excel spreadsheet with columns for status, date started, date completed or planned to complete, owner and so on will suffice.
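If you’d rather keep this tracker in something more queryable than a spreadsheet, a minimal Python sketch follows; the column names, statuses and example rows are all hypothetical illustrations, not fields any law prescribes.

```python
import csv

# Hypothetical tracker rows. The statuses mirror the four buckets above:
# "met", "partially met", "planned" and "not met".
requirements = [
    {
        "requirement": "Risk management system",
        "source": "EU AI Act / Colorado AI Act",
        "status": "partially met",
        "owner": "AI Governance Lead",
        "date_started": "2025-01-15",
        "date_due": "2025-09-30",
    },
    {
        "requirement": "Consumer disclosure of AI use",
        "source": "Colorado AI Act",
        "status": "planned",
        "owner": "Legal",
        "date_started": "",
        "date_due": "2026-03-31",
    },
]

# Write the tracker to a CSV that opens cleanly in Excel.
with open("ai_governance_tracker.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=requirements[0].keys())
    writer.writeheader()
    writer.writerows(requirements)
```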
With this done, you can group the three types of requirements you are not yet fully meeting into big buckets to make addressing them easier. For example, if multiple use cases on your crosswalk carry a regulatory requirement for a risk management system (e.g., under the Colorado AI Act and the EU AI Act), these can all be grouped under a single “implement risk management system” requirement. If the specifics of each risk assessment differ, you can account for this in the design and development of a unified risk assessment that meets all the individual regulatory requirements while applying distinct controls where needed.
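As a sketch of that bucketing step, the snippet below groups every requirement that is not yet fully met under a shared “big bucket” control; the requirement-to-bucket mapping and tracker rows are invented for illustration.

```python
from collections import defaultdict

# Hypothetical mapping from specific regulatory requirements to the
# shared "big bucket" control that would satisfy them.
BUCKETS = {
    "Risk management system": "Implement risk management system",
    "Impact assessment for high-risk systems": "Implement risk management system",
    "Consumer disclosure of AI use": "Implement AI transparency notices",
}

# (requirement, status) pairs pulled from a tracker like the one above.
tracker = [
    ("Risk management system", "partially met"),
    ("Impact assessment for high-risk systems", "not met"),
    ("Consumer disclosure of AI use", "planned"),
]

grouped = defaultdict(list)
for requirement, status in tracker:
    if status != "met":  # only bucket open requirements
        grouped[BUCKETS.get(requirement, "Unassigned")].append(requirement)

for bucket, items in grouped.items():
    print(f"{bucket}: {items}")
```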
Plan the Work
Now that you have the rationalized list of big bucket legal and regulatory AI governance requirements, you can plan the work by sequencing your efforts to address them against a roadmap. Timelines will differ from organization to organization but, in general, 18 to 24 months is a good timeframe to aim for.
Start by identifying the projects needed to address each of the big bucket legal and regulatory requirements across people, process and technology. You will likely need to break each project down into the concrete steps required to complete it.
For example, to “implement a risk management system,” you will need to:
- Design the components of a risk management system called for in the laws and regulations.
- Determine which components are already fully or partially in place and define the discrete tasks needed to fully implement them.
- Identify any in-flight efforts that contribute to the implementation of these requirements.
- Finally, sequence the efforts on the roadmap, taking into account in-flight efforts, resource and funding constraints, and organizational capacity for change.
Once you’ve slotted each individual big bucket against the schedule in light of the larger organizational context, you need to balance them all against each other to determine the final order in which to address them. In doing so, you’ll have to make timing decisions based not only on what’s right for the AI governance program, but on what’s feasible given the organizational context. This typically leads to a significant prioritization effort to align the needs of the AI governance program with the capacity of the larger organization to support the changes driven by the roadmap.
From there, you need to do the hard work of estimating the costs and benefits of executing the roadmap. These estimates can be, and typically are, directional: unlike with a finance or sales transformation project, the return on investment is difficult to measure in hard-dollar savings or revenue.
Despite that, you can still estimate the “T-shirt size” costs and benefits of your AI governance efforts, with costs being the easiest. The number of internal resources needed, at what level of hours, and for what duration is the simplest calculation for the all-in cost of internal staff working on the AI governance efforts. Combine that with estimates for professional services dollars to augment internal staff, and you have your people costs. Add rough software and hardware costs for any technology needed, and you are 80% of the way there. Finally, estimate the change management and communication efforts needed (and often overlooked), whether delivered by internal or external resources.
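Put as arithmetic, a directional people-plus-technology estimate might look like the sketch below; every figure is a made-up placeholder, not a benchmark.

```python
# All figures are hypothetical placeholders for a directional estimate.
blended_hourly_rate = 120        # internal blended rate, dollars/hour
internal_people = 4              # staff contributing to the program
hours_per_week = 10              # average weekly commitment per person
duration_weeks = 78              # roughly an 18-month roadmap

people_cost = internal_people * hours_per_week * duration_weeks * blended_hourly_rate
professional_services = 250_000  # external advisers augmenting internal staff
software_and_hardware = 150_000  # tooling for inventories, monitoring, etc.

# Change management and communications, sized here as a fraction of the rest.
change_management = 0.15 * (people_cost + professional_services + software_and_hardware)

total = people_cost + professional_services + software_and_hardware + change_management
print(f"Directional program cost: ${total:,.0f}")
```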
The benefits of AI governance efforts are more difficult to quantify, even at the “T-shirt size,” directional level. Risk avoidance justifications are really a “color of the bus that didn’t hit me” exercise: you can cite historical fines or legal and regulatory fine schedules, but unless the worst happens, your organization will never pay them, leaving the investment looking ROI-negative.
Define the Risks of AI Governance Noncompliance
A better approach is to foster a discussion about the risks independent of (or larger than) the direct costs of a regulatory action, lawsuit or breach. Reputational damage to the organization, opportunity costs for resources needed to address the regulatory action, breach or lawsuit, and the personal liability of key executives are all significant drivers of investment in AI governance (and information governance generally).
The first step in elevating the risk management discussion past direct costs is to get all stakeholders to agree that compliance is a business decision, not a “must do.” That is, whether and to what degree to comply with laws and regulations isn’t a given; it’s a decision made with eyes wide open to the costs and benefits involved.
With this alignment accomplished, you can “T-shirt size” the legal and regulatory risks of AI governance noncompliance, from outside counsel spend and breach costs, to retraining or disgorging core AI applications, to the likely costs of regulatory actions and lawsuits, to the personal liability of key executives.
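One way to present that sizing is as low/high ranges per risk category, summed into a directional exposure band; the categories echo the ones above, and every dollar figure is hypothetical.

```python
# Hypothetical low/high dollar ranges per noncompliance risk.
risk_ranges = {
    "Outside counsel spend and breach costs": (200_000, 1_000_000),
    "Retraining or disgorging core AI applications": (500_000, 5_000_000),
    "Regulatory actions and lawsuits": (1_000_000, 20_000_000),
    "Personal liability of key executives": (250_000, 5_000_000),
}

low = sum(lo for lo, _ in risk_ranges.values())
high = sum(hi for _, hi in risk_ranges.values())
print(f"Directional exposure if noncompliant: ${low:,} to ${high:,}")
```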
While these aren’t hard costs by any means — there’s no guarantee that any of them will come to pass — they are good indicators of the potential costs of underwriting the risks of AI governance noncompliance … and, if articulated and communicated effectively, will drive decision-making in favor of AI governance at the executive level.
Work the Plan
With all this done, you can now work the plan and execute your AI governance roadmap. Even with all these pieces in place, the doing will be a challenge, but with a roadmap tied to concrete legal and regulatory requirements based on your actual use of AI, pegged to “T-shirt size” risk numbers, you will be well placed to gain the support and resources you need to succeed.
At this point you’ve seen all the steps needed to establish the four pillars of an AI governance program:
- AI System Inventory
- AI Use Case Inventory
- AI Regulatory Crosswalk
- AI Governance Roadmap
Done right, these will get your AI governance efforts established on a strong foundation and on the path to success.