News Analysis

Microsoft's AI Access Principles Are All About Its Market Position

By David Barry
Microsoft's AI Access Principles are driven by market position rather than self-regulation. Here's why.

Microsoft unveiled its AI Access Principles earlier this week at the Mobile World Congress in Barcelona. The stated purpose of the 11 principles is to foster competition in an innovative market that is open to all.

It's just the latest in a series of moves to regulate AI, many issuing from within the very companies building and profiting from the technology.

Self-regulation does not come easy to any tech company. The bigger the company, and the more products it has spanning numerous tech spaces, the more difficult self-regulation becomes. Regulating the AI space is going to be even more difficult, but not impossible. And successfully or not, some of America's top tech companies are trying.

AI Regulation, From Within and Without

One example came last August when some of the major players in the AI space, including Microsoft, Anthropic, Google and OpenAI among others, created the Frontier Model Forum (FMF), focused on ensuring the safe and responsible development of AI models. 

The federal government has also put pressure on the industry to regulate AI, which resulted in President Biden signing an executive order last October. The order created new guidelines to ensure the “safe, secure and trustworthy” development of artificial intelligence.

To some degree, all of this is a bit theoretical, even idealistic, as it is going to be very difficult to decide what constitutes safe AI and what doesn’t.

For the tech firms, there is also a practical aspect to all of this: commercial self-regulation of the market to avoid the wrath of government regulators everywhere. Microsoft has recent experience in this area, having unbundled Teams from its Microsoft 365 and Office 365 suites in Europe in an effort to appease EU regulators looking into its dominance of the enterprise collaboration space. Microsoft remains under EU investigation for potential antitrust practices, and so can likely imagine the future possibility, if not probability, of major investigations into its commercial practices vis-à-vis generative AI.

Related Article: Why Regulating AI Is Going to Be a Challenge

Microsoft's AI Access Principles

So what's going on with Microsoft's AI Access Principles? Microsoft president Brad Smith outlined the 11 principles, divided into three broad subject areas:

  • AI Models and Development: Microsoft states it will provide developers with access to AI models and tools, and expand its infrastructure to enable better training and use of AI models.
  • Choice and Fairness: Offers AI developers access to public APIs for models hosted on Azure; creates a public API for network operators; gives developers the ability to sell their AI creations directly to customers or through the Azure marketplace; supports the export and transfer of data when customers switch from Azure to another cloud provider.
  • Social Responsibilities: Develops data centers that keep AI models and applications safe from both physical and cybercriminal threats; applies Microsoft's own Responsible AI Standard to any development; invests in AI skills building; works on the environmental impact of AI growth.

Two things stand out here. The first is Microsoft's portrayal of its role in building an AI economy as benevolent rather than monopolistic. “They build in part on the lessons we have learned from our experiences with previous technology developments,” Smith wrote in a post about the principles.

The second point, also mentioned in the post, is its new partnership with Mistral AI. In the last week of February, the Paris-based AI startup announced a new large language model that could rival OpenAI’s GPT-4. At the same time, it announced a distribution deal with Microsoft, which included a $16.5 million investment in Mistral by the latter.

While the sum is only a drop in the ocean compared to Microsoft's investment in OpenAI, the announcement alone has antitrust regulators in the EU bristling, with many demanding an antitrust investigation into the deal. This comes as the EU continues to examine the OpenAI investment.

Related Article: What ChatGPT in Microsoft 365 Could Spell for the Workplace

Gatekeepers at the AI Doors

While self-regulation — and indeed any attempt at regulation — is admirable, it doesn't address the realities of the AI market and competition in the market. Microsoft, Google and to a lesser extent AWS do not want to be out-positioned by market entrants at a time of significant change in the technology landscape, noted Matt Mullen, lead analyst for AI applications at Deep Analysis.

One of the purposes of regulation is to avoid a similar situation to the one that occurred when social media first appeared.

“If we think back to the previous one around social, they all got out-manoeuvred by new market entrants, in part because the space wasn't defined in such a way as to regulate it from the get-go,” he said. “This is why they are all so keen on establishing regulation right away, not to restrict their own activities (which to an extent, it will), but to make it far more difficult and expensive for market entrants to be compliant with those new regulations. Nobody wants another Facebook muscling their way to the top table, thank you very much.”

The tech giants are effectively “gatekeeping” the space, he said. This explains Microsoft's attitude toward newcomer Mistral, for example: it knows that these strategic alignments with other companies are quickly becoming necessary.

While there's nothing within the Access Principles that sounds onerous, he continued, they take a position whose end result pretty much says, “[this is] how we will enable you to operate, for users and developers alike, in a benevolent dictatorship that we control.”

Open source models will not be immune to such pressures, Mullen continued. They too will have to play nice within the regulatory frameworks or risk being shut out of big chunks of the operating landscape.

“We're not 100% there yet, but what we see here from Microsoft is the laying out of the ways of working that they are proposing to those big trade blocs,” he said.

Related Article: AI Insight Forum Raises Problems From the Start. One Is Closed Doors

It's a Business Blueprint

Info-Tech Research Group's Brian Jackson also sees in the Access Principles the beginning of an outline for how Microsoft could proceed in the market over the coming years.

“I think Microsoft's 'access principles for an AI economy' could have also been rightly titled ‘here's how we plan to make money on AI,’” he said. “It reads like a laundry list of Microsoft services ranging from its APIs to its cloud hosting infrastructure and professional services. It details who it wants to partner with to sell its technology to the market and who it wants to partner with as customers, too.”

He added that framing the Principles as being about "access" is a clever strategic communications approach given the company is currently under the microscope of regulators for monopolistic practices, even as it continues to invest in startups that will play a major part in the AI economy.

Those kinds of investments, Jackson added, might give regulators reason to wonder whether they should review these deals, especially if they worry the deals will limit market access to AI.


“So, Microsoft is putting a good PR spin on its business strategy for the next several years,” he said. “But I don't want to lambast them too much — they are clearly optimistic about the economic benefits AI will drive and thinking about how to play a positive contributing role in providing it and facilitating its adoption.”

The impact of these principles is mostly for Microsoft's own business units, to give them a strategic vision on how to govern AI and related technologies as they build them out and deploy them with customers.

The secondary impact is on all the different stakeholders in Microsoft's platform ecosystem, ranging from customers to partners to governments. The principles demonstrate how Microsoft intends to approach AI strategically, so those stakeholders can anticipate how it will impact them.

It is for this reason, Jackson said, that to describe Microsoft — or any for-profit company, for that matter — as "benevolent" would be inaccurate in just about any scenario.

"Microsoft is a firm that is acting in the best interests of its shareholders, which is to expand its business and make profits in a sustainable and resilient approach," he said.

The Need for Regulation

Government oversight, along with the guidance and standards for AI safety, security and fairness, needs to evolve quickly to ensure that consumers and citizens are protected, said Gaurav Pal, CEO and founder of stackArmor.

The US has taken some initial steps toward framing a standard, such as the NIST AI Risk Management Framework, but a lot more work needs to be done to turn it into an enforceable requirement.

Pal argues that to improve on the foundation set by these principles, there should be more acknowledgement of the responsibility of platform providers not only to provide open access to services, but also to ensure the safety and security of the services and products delivered on AI platforms and marketplaces.

“We’ve seen an increasing need for platforms to have greater responsibility for ensuring safe and secure applications are delivered,” he said. “It is for those reasons every AI application or service provider should have an explicit Authority To Operate (ATO) safety & security governance model that ensures an auditable series of steps have been taken to deliver safe solutions."

About the Author
David Barry

David is a European-based journalist with 35 years of experience who has spent the last 15 following the development of workplace technologies, from the early days of document management through enterprise content management and content services. Now, with the development of new remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communication and work, and has recently spent a great deal of time exploring the far reaches of AI, generative AI and general AI.
