
2023: The Year Generative AI Governance Came Into Its Own

By David Barry
While governance has always been an important part of the digital workplace, the emergence of generative AI made it essential. Here's why.

As the reality of generative AI use set in during 2023, questions arose about how to incorporate governance into the process. The urgency became clear as several high-profile cases brought by content owners came before the courts.

Hollywood Reporter editor Julian Sancton filed a case in late November over claims that Microsoft and OpenAI used his and other authors' nonfiction books to train their large language models (LLMs). Then on Dec. 27, The New York Times filed its own lawsuit against the two technology firms for copyright infringement.  

Actions like these understandably raise concerns for organizations, especially those with LLMs already in use in the workplace, about whether they can trust the models and whether there is any transparency into how they are trained.

If You Haven't Created an AI Governance Plan, Start Now

Although some of the larger tech firms have promised to cover any legal costs incurred should use of their LLMs result in a breach of copyright law, the potential reputational damage from being associated with such a lawsuit has given many enterprise leaders reason to question the wisdom of LLM deployments. This dilemma has become one of the stumbling blocks as we finish up 2023 and head into a new year.

Brian Jackson, research director at Info-Tech Research Group, shared data from its Future of IT survey that underlined how significant a problem this is.

“We saw that about 30% of organizations have not taken any steps towards deploying AI governance so far,” he said. “That is understandable if you are just at the stage of considering where AI will help your business, but as we start to see new AI features roll out through established enterprise software vendors, we can no longer afford to wait. The time is now to create a responsible AI framework with principles that reflect organizational values.”

This framework should serve as a guiding structure for implementing human-centered AI as pilots are pursued and operational deployments are made. Jackson pointed to one by-now-infamous example from 2023 that underlined the need for such a framework: lawyer Steven A. Schwartz submitted a legal brief full of nonexistent citations to a Manhattan court judge. Schwartz used ChatGPT to help draft the brief, and while his punishment was a $5,000 fine, the global embarrassment of being the first person caught using AI to cheat at his job will be hard to shake, Jackson said.

One positive result of the story was the public spotlight it placed on the risk of AI hallucinations and the reminder that generative AI can make mistakes, fabricate information and present it as fact, he continued. “This helped us all understand the implications of over-reliance on generative AI's outputs for critical work,” Jackson said.

The OpenAI leadership drama brought further attention to the need for governance — in this case corporate as well as information — and showed the world that the same people who are creating AI are also worried about the existential risk it poses, said Jackson.

Finally, the European Parliament progressed its Artificial Intelligence Act this year, reaching a provisional agreement in the second week of December. The bill in its current form signals how regulators are likely to handle AI not just in Europe but around the world, as other jurisdictions model their own policies on it.

Related Article: What We Can Learn From Zoom's AI Training Data Snafu

Benefits and Complications Will Only Grow

The challenges will continue in the new year as well, said Amazon engineer Mayank Jindal, who predicted they will grow as generative AI merges with other technologies, including augmented reality, virtual reality and the internet of things.

While such integrations are likely to result in benefits such as AI-assisted medical diagnoses and personalized content creation, they will make governance more complicated, he warned. The advancements underline the necessity for increased accuracy, particularly in high-stakes sectors like healthcare and finance.

Achieving such accuracy will require investing significant resources and efforts into governance to enhance the reliability and trustworthiness of AI models. The focus on accuracy is a crucial step in ensuring the safe application of AI in sensitive domains.

Moreover, the potential misuse of generative AI, such as the creation of deepfakes, underscores the need for stringent AI regulations and robust data privacy measures. Ensuring the ethical use of AI technology will be a major focus, necessitating the development of advanced AI detection tools and security measures.

As the corporate world increasingly turns towards generative AI, it will hopefully be matched with a corresponding increase in AI training and literacy. “This will not be limited to tech professionals. A basic understanding of generative AI will become an asset in various job roles," he added.

Related Article: AWS's Diya Wynn: Embed Responsible AI Into How We Work

Ethical AI Usage Comes to the Fore

Still, 2023 marked a significant turning point in the integration of AI governance into the workplace, said Spectrum Search CTO Peter Wood. He pointed to the stringent regulatory frameworks rolling out in the coming months and a heightened awareness of ethical AI usage as highlights.

The frameworks aim to ensure that AI technologies in the workplace are used in a manner that is not only efficient but also ethical and transparent. “This was a game-changer for businesses, as it necessitated a re-evaluation of how AI tools are deployed and managed,” he said.

Furthermore, the emphasis on ethical AI usage brought about a cultural shift in the workplace. Companies started to prioritize responsible AI practices, which included addressing biases in AI algorithms and ensuring data privacy.

This was particularly relevant in the field of AI and ML-led recruitment, he said, where maintaining the integrity and fairness of the recruitment process became paramount. Businesses began to recognize that ethical AI practices are not just a regulatory requirement but a cornerstone of their reputation and brand value.

These developments are likely to result in a more conscientious approach to AI deployment, emphasizing the need for transparency and accountability. Businesses will have to adapt by integrating comprehensive AI governance strategies, which include regular audits, employee training on ethical AI usage and the establishment of dedicated AI governance teams.

But rather than hindering growth, Wood is quick to note, the evolution in AI governance not only mitigated risks but also enhanced the efficiency and effectiveness of AI applications in the business environment.

Related Article: Are You Giving Employees Guidelines on Generative AI Use? You Should Be

Governance Concerns Slow Generative AI Adoption

Adoption at scale has been slower than expected as a result of concerns around governance, said Manvir Sandhu, chief innovation officer at Zennify. Rather, he said, it has been a prolonged “wait and see” period, with the technology advancing so rapidly that leaders are waiting to catch the optimal wave.

Steering committees are already concerned about governance, where the lack of standards and best practices is a barrier. In this respect, he pointed to organizations that have blocked tools like ChatGPT from employee use due to concerns about data security and privacy.


Marketing and analytics teams are concerned that their LLM solutions have not been tested and proven against bias and hallucinations, and that they lack the prompt engineering and data science resources needed to build confidence in the models.

In regulated industries such as financial services, chief security officers are often the gatekeepers to adoption and scale. In the case of AI, the lack of regulatory oversight and of familiarity with the technology is impeding progress, with some organizations hesitant even to dip their toes in the water.

“It’s a work in progress, and some organizations are increasingly adopting ethical frameworks and principles to guide the development and deployment of AI technologies in the workplace,” he said. “In response, Zennify advises organizations to implement smaller iterative pilots and proofs of concept with use cases that can prove demonstrable value, enabling them to familiarize themselves with the tech, effectively aggregate data, and experience the LLM brain learn and articulate. Organizations with digital agility and AI-focused programs and associated governance steering committees will win early and will have the best chance of safely and securely catching that first epic wave.”

Related Article: Can We Trust Tech Companies to Regulate Generative AI?

Get Ready for a Wild Ride

The insatiable demand for generative AI will make 2024 a wild ride, said Contentsquare's Dave Anderson, but rules of engagement, oversight and regulation might play a larger role than we think. 

“I believe it will come down to when someone decides to take the technology too far, into a use case that polarizes the population. [Here] it will reach a tipping point, where advancing AI at 500% will come into question,” he said. “It might be security, might be personal identity abuse, military or health related.

“But I sense a tipping point might be reached, and at that point we must decide what regulations and what parameters we feel should be in place to protect the future of business, privacy, security and more generally humanity.”

Tech leaders, he said, will have to start taking regulation more seriously, as many already have. He finds it encouraging to see leading governments proactively developing their national perspectives, with President Biden’s executive order on AI one of the most recent examples.

Ethics will also play a very big role in AI, as it did in the data privacy discussions a decade ago. “If it seems risky, if it seems wrong, then it must be paused. People need to be empowered to speak up, to down tools. We must adhere to this thinking, because a ‘win at all costs’ strategy will not just jeopardize the future of tech, it might jeopardize the future of humanity.”

About the Author
David Barry

David is a European-based journalist of 35 years who has spent the last 15 following the development of workplace technologies, from the early days of document management through enterprise content management and content services. Now, with the development of new remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communication and work, and has recently spent a great deal of time exploring the far reaches of AI, generative AI and artificial general intelligence.

Main image: Hogarth de la Plante | Unsplash