Running parallel to the development of generative AI and its use in the workplace is another discussion that is becoming increasingly important for enterprises: What will be the impact of new AI technologies on business and on society as a whole?
As concerns surrounding the technology and how it may be used in the future grow, a group of tech giants — Microsoft, Anthropic, Google and OpenAI — have gotten together to create the Frontier Model Forum (FMF), an industry body focused on ensuring the safe and responsible development of frontier AI models.
A look into what that means and where we go from here.
Frontier Models
The Frontier Model Forum defines frontier models as large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models and can perform a wide variety of tasks.
Following its creation, the Forum's next tasks are to establish an advisory board to help guide its strategy and priorities, and to put institutional arrangements in place, including a charter, governance and funding, with a working group and executive board to lead these initiatives.
In a statement announcing the creation of the Forum, Anna Makanju, vice president of global affairs with OpenAI, said that it is essential for the safe development of these models that those who are developing them are working from a common base.
“It is vital that AI companies — especially those working on the most powerful models — align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible,” she said, stressing the urgency of the work and underscoring the Forum's unique positioning to act quickly to advance the state of AI safety.
Can Generative AI Self-regulate?
While the Forum is a needed step, the question is: Is it realistic to expect the very companies that are developing these technologies to effectively regulate their development? As the market develops and competition to produce the most effective models intensifies, is it not likely that these companies will push beyond whatever boundaries the Forum creates?
The Frontier Model Forum is laudable in its aims but isn’t by any means the whole answer to concerns about AI safety, Andrew Rogoyski of the Institute for People-Centred AI at the University of Surrey told Reworked. “The AI industry isn’t mature enough to be allowed to self-regulate,” he said, likening the effort to “putting the foxes in charge of the chicken coop.”
He believes there is a fundamental problem with the idea that profit-driven AI developers will be responsible for setting the bar on AI safety research. Safety, in any industry, he said, needs to be assessed independently of the suppliers. “Standards need to be set, audits undertaken, accountabilities enforced by an independent body.”
The problem, he argued, is that this matter needs international leadership and consensus, and governments are a long way behind the current achievements of companies developing AI capabilities. As an example of governments' shortcomings, he cited gaps in current regulation, from data protection to copyright law, and from control of personal information such as photographs, names and voices to corporate accountability.
Rogoyski said that to gain greater credibility in carrying out its mission, the FMF should commission independent studies — from academia, safety agencies and independent consultancies — and openly publish the results. “There is too much at stake,” he said, pointing to the potential for AI to be used for great good, from economic growth to medicine and health care to climate change.
There are also existing threats, including the use of AI to create disinformation in elections, disrupt industries and jobs, or even educate terrorists in lethal technologies. “We can’t let a handful of unelected organizations control our collective futures through the implementation of AI,” he said.
“If governments do form independent AI agencies to ensure safe AI, we need to ensure that the FMF doesn’t become [a] vehicle for regulatory capture.”
Related Article: Who’s Responsible for Responsible AI?
A Laudable First Step
Still, many in the industry welcome the FMF as a laudable first step.
While it's easy to look at a governing body made up of representatives from the industry as biased, Patrick Kopins, COO of OvalEdge, a data governance consultancy and end-to-end data catalogue solutions provider, argues the opposite.
It is to be expected, he said, that the Forum will have the industry's ongoing success as its primary driver, but there is a lot at stake if it fails. And self-governing bodies are not just common, they can also be quite valuable.
In addition to bringing a collective expertise on the most comprehensive and up-to-date information on the subject, self-governing bodies can mitigate intervention from governmental institutions, Kopins said. If the Forum fails to address the core concerns of AI development and put guardrails in place, the government will no doubt intervene — and this intervention could come with severe restrictions and penalties for companies.
“AI, and particularly the generative AI industry, is under the spotlight. There is huge public interest in the technology,” he said. “So, unlike some other self-governing bodies, the Forum's work [is unlikely] to slip beneath the radar. It will be scrutinized by the media and, ultimately, consumers."
Nevertheless, he agrees there should also be a government-led regulatory framework to oversee the development of AI technologies, but this doesn’t necessarily imply conflict with the FMF. “A far better outcome than one regulatory body managing the responsible development of AI technology is having multiple public and private bodies working in unison to achieve manageable targets,” he said. “However, for this to work, the Forum must be transparent.”
In the future, if the FMF delivers actionable recommendations to the central government based on trustworthy data and continues to reveal how the technology has developed and will develop, the government may use this information to inform its policies, he said.
If it doesn’t, then the AI industry will have policies thrust upon it, which could ultimately hinder innovation and decelerate the speed at which consumers realize the benefits of the technology.
The Importance of Guidelines
Brandon Jung, VP of ecosystem and business development at software company Tabnine, said it is possible to make this all work. The requisite: very clear guidelines.
“It requires a very clear definition of terms, technologies and also data and an equal voice at the table from all,” he said. “[The] key will be actively seeking out contrarians.”
To illustrate, he listed three concrete things that the Forum must do:
- Define terms and trade-offs
- Ask those in the industry to be transparent
- Publish the names of the organizations that do and do not meet the terms
“From here, only if required, I think the industry has to move to [regulation],” he said. “It’s so very important to move quickly and with transparency. No doubt about it, this will be met with much resistance from many in the industry.”
Related Article: Take a Breath on the AI Thing
Gaining Trust
Because generative AI has the capacity to produce both beneficial and harmful content on a large scale, safety and ethical issues are critical, said Simon Ryan, CTO at Australia-based Firstwave Technology.
It is difficult, however, to determine which businesses are trustworthy when it comes to keeping an eye on their generative AI developments. While some are actively tackling issues by putting safeguards, moral standards and cutting-edge detection algorithms in place, their efficacy is still being closely examined, Ryan said.
He believes collaboration between stakeholders, including tech businesses, researchers and regulators, is necessary to establish a balance between innovation and responsible usage. “As generative AI develops, ensuring transparent and responsible monitoring procedures becomes essential for establishing and preserving public confidence in AI systems.”
There are, of course, other initiatives underway to try and regulate the industry. At the end of July, a group of tech companies — including the founder members of the FMF — agreed to new AI safeguards after a White House meeting with President Joe Biden. Commitments from that meeting included watermarking AI content to make it easier to spot misleading material such as deepfakes and allowing independent experts to test AI models.
It is unlikely, though, that this will be enough to offer the kind of protection that many people inside and outside the industry are looking for.
For now, the FMF has only just been created, so it is too early to predict what it will do. One thing is for sure: It will take a lot to convince critics that self-regulation by the very industry developing the technology is enough.