Editorial

Generative AI History: Jockeying for Dominance

11 minute read
By Frank Palermo
Dive into Part 3 of our comprehensive series on generative AI history. Who will dominate this high-stakes race for technological supremacy?

The Gist

  • Global AI race intensifies. Countries worldwide, led by the US and closely followed by China, are heavily investing in AI, with nations like Singapore, Canada, South Korea and Israel making significant advancements. 
  • Tech giants' role. Major US companies like Microsoft, Alphabet/Google and OpenAI have been at the forefront of generative AI innovations. Microsoft's transformation under Satya Nadella and its partnership with OpenAI exemplify its AI leadership. Meanwhile, Google's advancements with LaMDA and PaLM models showcase its commitment to staying competitive in the AI landscape.
  • Emergence & evolution of AI startups. The AI startup ecosystem is booming, with companies like Frame AI, Jasper, Tome and Galileo AI introducing innovative AI-driven solutions. These startups are reshaping industries, from customer feedback to content creation, indicating a vibrant future for generative AI applications.

In this third part of our four-part generative AI history series, we'll look at how the major players are emerging in the global AI race. While the US currently dominates, we have seen new leaders quickly emerge. China and other nations are investing heavily as they realize the stakes and rewards are very high for AI leadership. We are beginning a new type of cold war.

Inside CMSWire's Generative AI History Series

Generative AI History: Countries Across the Globe Have Been Jockeying for AI Dominance for Years

Despite the recent surge in generative AI platforms, the global AI race has been raging for many years now. Countries across the globe are pouring huge investments into AI and vying for global leadership positions.

US Dominance

The US continues to retain a dominant position in AI, with China close behind. However, a look at generative AI history shows that the rest of the field has been quite volatile over the past several years. For instance, the UK slipped from third place in 2020 to fourth place in 2021, displaced by Singapore, which saw significant growth over the past few years, moving from 10th place up to sixth in 2021. Other countries like Canada, South Korea, Israel, Germany, Switzerland and Finland are also making significant strides in AI technology and currently round out the top 10.

Not surprisingly, the top companies throughout generative AI history — including Microsoft, Alphabet/Google, OpenAI, Nvidia, Adobe, Amazon, Facebook/Meta, Tesla and IBM — are concentrated in the United States. These large tech giants will continue to lead innovation with significant R&D investments in AI. We are likely to see cycles where each of these players leapfrog each other’s dominance in generative AI.

China Close Behind

China has its own set of tech giants in the AI space. Baidu, China’s largest search engine, has entered the AI race with its ChatGPT equivalent Ernie Bot. Alibaba is in the internal testing stage of developing its own ChatGPT-style chatbot.

JD, an ecommerce company, is launching ChatJD. ByteDance, the parent company of TikTok, has initiated research and development into generative AI tools through its AI Lab. Telecommunications provider Huawei has recently patented technology similar to ChatGPT that would analyze existing libraries of data to generate responses to user questions.

The speed at which China’s generative AI boom continues will largely depend on the government’s regulation of AI technology. Initial reports say the interim regulations due to be put into effect in August were far less onerous than measures outlined in the April draft. This could be a sign that China believes winning the global AI race is critical for its economy and global positioning.

Related Article: It's Not Bert, It's ERNIE: Baidu Unleashes Rival to ChatGPT

An Unlikely New Era in Generative AI History for Microsoft 

If you take a look at generative AI history, Microsoft’s brand has not always been synonymous with AI.

For many years Microsoft was caught up in the operating system and desktop application race, which it dominated so thoroughly that it drew heavy government scrutiny over monopolistic practices. However, as the cloud, open source and platform era began to transform the industry, Microsoft looked out of place. If it wanted to survive, it needed a massive paradigm shift.

Microsoft's products have a long history with generative AI. HJBC on Adobe Stock Photos

A Transformation Under Satya Nadella 

In 2014, Satya Nadella took the helm as CEO (only the third CEO in the company’s 40-year history) to begin one of the tech industry’s greatest transformations. He quickly reoriented the company to go on the offense in new emerging areas of technology such as cloud computing and AI and empowered every person to explore new opportunities. This set the course for making generative AI history.

He created a startup atmosphere and sponsored some of the world’s largest private hackathons. This resulted in a rebirth of Microsoft and the launch of new, innovative platforms such as Azure that would quickly become the foundation of the company’s product strategy moving forward.

Bill Gates & His AI Ambition

It’s also easy to forget another chapter of generative AI history: Bill Gates created Microsoft Research in 1991 with the ambition of creating general AI-based computing, i.e., computers that could interact with humans and see, hear and speak. The current team of more than 1,000 highly decorated experts in computer science, physics, engineering and mathematics continues to provide significant innovation capacity, especially around AI.

Smart acquisitions and partnerships throughout generative AI history have been instrumental to Microsoft’s transformation strategy. Instead of the previous legacy acquisitions like Nokia’s smartphone division, Microsoft focused on technical pioneers such as the workplace social media platform LinkedIn, the developer platform GitHub and the monster video game deal with Activision Blizzard.

Microsoft quickly capitalized on these acquisitions. GitHub's Copilot tool, which uses AI to suggest code fragments to developers, launched in an open marketplace in October 2021 and quickly attracted over 10,000 companies.

Microsoft & OpenAI

Microsoft also established a long-term multiyear partnership with OpenAI, which included multibillion dollar investments to accelerate AI breakthroughs. This creates a powerful force, with Microsoft and OpenAI both working in the artificial intelligence space to quickly innovate and take product to market.

Microsoft has also done an incredible job in infusing AI into its entire product line to provide intelligent experiences every day in Bing Search, Windows, Xbox, Microsoft 365, Teams, Dynamics 365 and many other products.

This has culminated in Microsoft emerging as one of the early leaders in the AI race.

Related Article: Say Goodbye to the Waitlist: Microsoft Bing Is Enhanced and Fully Open

Google Generative AI History: Getting Back Into the Game

Google has a long generative AI history dating back to 2014 when it acquired DeepMind for $500 million to complement the AI research expertise in Google Brain. DeepMind was famously responsible for creating the computer systems that beat top-ranked players of the Chinese board game Go.

Google has been playing the long game when it comes to generative AI history. Picturellarious on Adobe Stock Photos

Google & DeepMind

By combining the expertise of Google Brain and DeepMind, Google aimed to streamline its AI research efforts and avoid duplication of work. By 2015, RankBrain was being used in Google Search to deliver more relevant results and curb the gaming of search rankings. In 2017, Google formed its Google AI division, which pioneered programs such as Google Brain, TensorFlow, AlphaGo, Transformers and, more recently, LaMDA (Language Model for Dialogue Applications).

However, DeepMind had been running independently, typically working on AI concepts that didn’t make it into core Alphabet products. That all changed in April 2023, when Alphabet CEO Sundar Pichai announced the groups would formally be merged. According to Pichai, “Combining all this talent into one focused team, backed by the computational resources of Google, will significantly accelerate our progress in AI.” The move was most likely a response to Google appearing to sit in the background while GPT first launched, despite quietly working on similar technology.

Google's LaMDA

LaMDA, announced in 2021, was a major breakthrough: a conversational LLM able to power dialogue applications that generate human-like conversations. LaMDA was trained on conversational dialogue, which allowed it to pick up on the many nuances that distinguish open-ended conversation from other forms of language, such as whether a response makes sense in a given conversational context. LaMDA was one of the first AI chat services to be unveiled.

Google's PaLM

An even more sophisticated model called PaLM (Pathways Language Model) was first announced in April 2022 and remained private until March 2023. PaLM is a family of language models developed by Google that are designed for large-scale language generation tasks.

The name comes from the concept that new “pathways” can be developed to allow a single model to accomplish potentially millions of tasks. The model uses a technique known as “few-shot learning,” which lets it learn from a limited number of labeled examples (or shots) and quickly generalize to new tasks with minimal data labeling. Future AI models will rely on multiple senses (i.e., input mechanisms) to digest and interpret information, mimicking human senses and behavior. Previously, many machine learning systems overspecialized in individual tasks, when the real opportunity is to excel at many tasks simultaneously.
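To make the idea of few-shot learning concrete, here is a minimal Python sketch of few-shot prompting: a handful of labeled examples are placed directly in the prompt, followed by a new, unlabeled input. The example reviews and the send_to_model() call are hypothetical placeholders for illustration and are not part of PaLM's actual API.

```python
# Minimal few-shot prompting sketch: the model sees a few labeled
# examples ("shots") in the prompt itself, then completes the label
# for a new input without any additional training.

EXAMPLES = [
    ("The package arrived two days late.", "negative"),
    ("Setup took five minutes and just worked.", "positive"),
    ("The manual is thorough but a bit dry.", "neutral"),
]

def build_few_shot_prompt(new_text: str) -> str:
    """Assemble the labeled shots followed by the unlabeled query."""
    parts = ["Classify the sentiment of each review."]
    for text, label in EXAMPLES:
        parts.append(f"Review: {text}\nSentiment: {label}")
    parts.append(f"Review: {new_text}\nSentiment:")  # the model fills this in
    return "\n\n".join(parts)

prompt = build_few_shot_prompt("Battery life is shorter than advertised.")
print(prompt)
# response = send_to_model(prompt)  # hypothetical call to an LLM endpoint
```

The same pattern generalizes: swap in translation pairs or question-answer pairs, and the model is nudged toward a new task with only a handful of examples rather than a new training run.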


Similar to GPT, PaLM is designed for large-scale language generation tasks, such as machine translation or content generation. PaLM was trained on a large body of text (over 780 billion tokens) that included sources such as articles, books, webpages, Wikipedia content, social conversations and open source repositories. PaLM was reported to be over 1,000 times more powerful than its predecessor BERT.

Google's PaLM 2

Then at Google I/O on May 10, 2023, Google unveiled its next-generation language model, PaLM 2, with improved multilingual, reasoning and coding capabilities. PaLM 2 was trained on text spanning more than 100 languages and on a large body of open source code. The training material also included a large body of scientific papers and mathematical expressions to enhance logic and reasoning capabilities. PaLM 2 can understand idioms, poems, nuanced texts and even riddles in other languages.

PaLM 2 allowed Google to catch up in the AI race. PaLM 2 is a newer model than GPT-4; however, GPT-4 is said to have roughly 1 trillion parameters, making it at least 10 times larger than PaLM 2. The smaller size of PaLM 2 is an advantage for certain applications that do not have as much onboard processing power.

The AI race is now in full swing, and we can fully expect the velocity at which these LLMs evolve to keep increasing. Google isn’t showing any signs of slowing down and already has its next big model, Gemini, in development. It will draw on AlphaGo’s reinforcement learning techniques and marry them with the language capabilities of the large models.

Another Piece of Generative AI History: Whatever Happened to IBM Watson?

For over a decade of generative AI history, IBM’s Watson was the only AI game in town. After high-profile public displays such as Deep Blue beating chess champion Garry Kasparov and Watson beating Jeopardy! champions, it looked as if the future of AI was in the hands of IBM. For many people, Watson became synonymous with AI. It was aggressively marketed and routinely featured on high-profile news programs such as “60 Minutes.”

After a rocky generative AI history, IBM is redefining Watson's mission. MichaelVi on Adobe Stock Photo

A Jeopardy! Success — a Healthcare Fail

After the public success of beating Jeopardy! champion Ken Jennings in 2011, IBM turned Watson’s focus to applying AI to healthcare. Healthcare is the nation’s largest industry, with spending rising worldwide, so the bet seemed plausible at the time. Watson was supposed to revolutionize how healthcare was delivered while offering the potential of longer, healthier lives. However, over the next decade a series of missteps would derail that mission.

The problem was that Watson set out to tackle some of the most ambitious topics, like finding cures and recommending care for cancer. This proved to be a monumental task, as the technology was not yet developed enough to handle that level of general-purpose complexity. In hindsight, it would have been much more practical to focus on narrower topics like adverse drug reactions.

Watson had been custom built for dedicated purposes like answering questions on a quiz show, which made it powerful but limited technology. Many people thought Watson was a ready-made answer machine. It turned out the complexity of, and gaps in, the medical and genetic data provided made it very difficult for IBM technologists to program Watson. Physicians grew frustrated, wrestling with the technology and validating results rather than caring for patients. After four years and $62 million in spending, the Oncology Expert Advisor was abandoned in 2016 as a costly failure.

Despite this, IBM continued to invest, creating a separate business unit called Watson Health in 2015. It spent over $4 billion to acquire companies with medical data, billing records and diagnostic images covering hundreds of millions of patients. What started out as a revolutionary mission ended with the parts being sold off to the private equity firm Francisco Partners in 2022.

Redefining Watson's Mission

IBM has now redefined Watson’s mission: to provide an enabling set of tools and platforms that help organizations infuse AI into their business. In May 2023, IBM announced watsonx, which includes capabilities for AI-generated code, an AI governance toolkit and a library of thousands of large-scale AI models trained on language, geospatial data, IT events and code.

Watson also has a ChatGPT-like assistant called Watson Assistant, which is built on machine learning and natural language processing and also leverages large language models. However, Watson Assistant is much more than an AI chatbot interface powered by an LLM, as it was created to ascertain intent through conversational AI. It incorporates datasets of many input-output pairings, which it uses to understand what a good response looks like and what the user’s intent is. According to IBM, Watson Assistant now boasts a 79% accurate intent detection algorithm.
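To illustrate the general idea of intent detection from input-output pairings, here is a minimal sketch that trains a simple text classifier on utterance-to-intent pairs. The utterances, intent labels and choice of scikit-learn as the classifier are assumptions for illustration only and do not reflect IBM's actual implementation.

```python
# Minimal intent-detection sketch (not IBM's implementation):
# train a simple text classifier on utterance -> intent pairs,
# then route a new message by its predicted intent.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training pairs: user utterance -> intent label
utterances = [
    "I want to reset my password",
    "How do I change my password?",
    "Where is my order?",
    "Track my package please",
    "Cancel my subscription",
    "I'd like to stop my membership",
]
intents = [
    "reset_password", "reset_password",
    "track_order", "track_order",
    "cancel_subscription", "cancel_subscription",
]

# TF-IDF features + logistic regression stand in for the intent model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(utterances, intents)

print(model.predict(["please help me track where my package is"]))
# -> ['track_order']; an assistant would then route the conversation accordingly
```

In a production assistant, the predicted intent typically selects a dialogue flow or response template rather than generating free text, which is why intent accuracy is the metric vendors tend to report.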

IBM is also realizing the value of open partnerships: it is partnering with Hugging Face, a high-profile AI startup and open source platform that reached a $2 billion valuation last year. IBM decided that open source should be at the core of watsonx.ai, which is built on Red Hat OpenShift and is available in the cloud and on-premises. Many of the Hugging Face open source libraries will now be available through watsonx.ai.

So while IBM Watson has had a precarious journey through the AI minefield, it is alive again and poised to participate in this next wave of AI innovation.

Related Article: Google Ushers in New Age of AI Driven Advertising: What Marketers Need to Know

Making Generative AI History Today: A New Startup Frenzy

The AI startup frontier is possibly the most active since the dawn of the Internet. Every day there are new companies emerging and existing companies repositioning their products into the AI space.

Just as OpenAI went from relative obscurity to front-page news, there are literally hundreds of AI startups gaining momentum. Here are a few examples of leading-edge AI startups redefining how work gets done.

Startups Gaining Momentum

  • Frame AI is building a customer success platform to provide better feedback loops between products, customers and service. It focuses on identifying speaking behaviors and meanings in service interactions to create real-time, AI-driven sentiment scores that enable better product and service headcount decisions.
  • Jasper is an AI-based writing platform designed to aid content creation for bloggers, marketers and businesses. Jasper generates original, top-notch content suitable for blogs, marketing copy and product descriptions by inputting basic information.
  • Tome is a new platform for creating and sharing ideas. Users can create and share interactive, multimedia documents such as ebooks, presentations and reports. Users can easily add text, images, videos, audio and other interactive elements to their documents.
  • Galileo AI is a co-pilot for user interface design. It creates delightful, editable UI designs from a simple text description, empowering designers to create designs faster than ever.

These are just a few examples of companies that are using AI to change current paradigms in how work gets done, how content is created and how ideas are shared, which is itself changing generative AI history. While they may not reach the success of an OpenAI, they are pioneering change and setting the foundation for AI-powered applications and platforms of the future.

Up next in this series: Generative AI is still very much in its infancy. In the final part of our series, we will explore some of the issues surrounding the use of generative AI technologies and how they can be addressed in the future.

Learn how you can join our contributor community.

About the Author
Frank Palermo

Frank Palermo is currently Chief Operating Officer (COO) for NewRocket, a prominent ServiceNow partner and a leader in providing enterprise Agentic AI solutions. NewRocket is backed by Gryphon Investors, a leading middle-market private investment firm.

Main image: Tarikh Jumeer on Adobe Stock Photo