
AI's Long History Sparks a New Generation of Applications

By Frank Palermo
AI's history deserves a closer look: it has set the stage for a future of innovation driven by the transformational arrival of generative AI.

The Gist

  • Historical perspective. AI's history spans more than seven decades, culminating in generative applications that are reshaping industries.
  • Economic impact. Generative AI could add up to $4.4 trillion to the global economy.
  • Business transformation. Emerging use cases include customer operations, marketing, sales, and software engineering, offering opportunities to enhance processes and productivity.

In this four-part series, we will explore how AI's more than seven-decade history has systematically paved the way for the current wave of generative AI applications, and how this is creating a tipping point that may prove as transformational as the launch of the internet.

The potential economic impact of generative AI is huge. Recent research by McKinsey estimates that generative AI could add up to $4.4 trillion to the global economy.

Generative AI has the potential to change the way we all work: not by replacing workers, but by augmenting their capabilities, assisting in research, automating mundane tasks, creating foundational content, validating hypotheses and streamlining processes, among many other areas.

While generative AI can drive impact across a wide variety of business functions, the most impactful emerging use cases include customer operations, marketing and sales, and software engineering. These areas represent huge opportunities to improve processes and productivity.

However, the era of generative AI is still in its early innings. While the technology will continue to evolve rapidly, there is still much to sort out around AI governance, impact on the global workforce, AI ethics, protection of intellectual property and many other important topics.

Let’s dive into the series and look at how the history of AI has paved the way for the generative AI space, and what that means.

Part 1: The Past Creates the Future

To understand the potential and future direction of AI, it’s critical to understand the past. The first part of the series focuses on the origins of AI and the numerous eras of evolution, both positive and negative, that have culminated in this new era of generative AI.

AI Was Destined to Link Man and Machine

Artificial Intelligence (AI) has a long, rich history of over 70 years. During this time it has gone through many ups and downs, vacillating between bold predictions that it would surpass the human race and large, visible failures and setbacks.

Generative AI platforms such as ChatGPT and other large language model (LLM) platforms have once again put AI front and center. Ironically, many people believe these capabilities appeared overnight and don’t appreciate the complex lineage, depth of research and innovations that enabled these platforms to come of age.

AI as a concept has been around since ancient times, when inventors created “automatons,” mechanical machines such as artificial servants and surveillance devices that could move independently of human intervention.

Artificial intelligence (AI) has a long history. (Image: strudel_art on Adobe Stock Photos)

The term “artificial intelligence” was coined by John McCarthy, a computer and cognitive scientist trained at Caltech and Princeton who is credited as one of the founders of the discipline. AI was established as an academic discipline at the Dartmouth workshop in 1956.

Modern AI didn’t really emerge until the mid-20th century, when pioneers like Alan Turing laid its foundations. At the National Physical Laboratory (NPL), Turing designed the Automatic Computing Engine (ACE), the first complete specification of an electronic stored-program computer. His work in machine intelligence also led to the Turing Test, which became the standard for assessing a machine’s ability to exhibit intelligent behavior equivalent to that of a human.

The 1970s were a time of rapid AI research at institutions such as MIT, Stanford, UC Berkeley and Carnegie Mellon, funded mainly by the Defense Advanced Research Projects Agency (DARPA). Many industry labs were born during this time at organizations like RAND, Bolt Beranek and Newman (BBN) and SRI International.

The “AI Winter” hit later in the decade as researchers started to run into limitations with compute and storage capabilities. This resulted in reduced funding and consumer interest that lasted until the early 1990s. Major programs such as Japan’s Fifth Generation Computer Systems (FGCS) project and expert systems such as the Expert Configurer (XCON) were eventually cancelled.

The late 1990s saw a resurgence of AI interest, culminating in the highly publicized defeat of reigning world chess champion and grandmaster Garry Kasparov by IBM’s chess-playing supercomputer, Deep Blue, in 1997.

Over a decade later, in 2011, a subsequent historic event occurred when IBM Watson competed on the quiz show Jeopardy! against its two biggest all-time champions, Ken Jennings and Brad Rutter, and won.

More recently, in 2017, Google’s AlphaGo defeated Ke Jie at Go, which has been called humankind’s most complicated board game. This furthered the application of AI by demonstrating yet another way expert computer systems can outperform humans in highly complex tasks.

Over these past seven decades, the power and sophistication of AI platforms have increased steadily. The constraint has been the computational power needed to train these systems. For the first six decades, training compute increased in line with Moore’s Law, doubling approximately every 20 months. Since 2010, however, it has been doubling every six months, which is a big driver of the acceleration of AI systems.
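To put those doubling rates in perspective, here is a quick back-of-the-envelope calculation in Python; the 20-month and six-month doubling periods are the only inputs, taken from the estimates above, and the function name is ours.

```python
# Back-of-the-envelope comparison of the two compute-growth regimes
# described above. The doubling periods are the article's estimates.

def annual_growth_factor(doubling_months: float) -> float:
    """Factor by which training compute multiplies in one year."""
    return 2 ** (12 / doubling_months)

print(f"20-month doubling (pre-2010): ~{annual_growth_factor(20):.1f}x per year")
print(f"6-month doubling (post-2010): ~{annual_growth_factor(6):.1f}x per year")

# Compounded over a decade, six-month doubling yields 2**(120/6),
# roughly a millionfold increase in training compute.
```

At that pace, a decade of the post-2010 regime multiplies training compute by roughly a million, versus roughly 64x under the earlier 20-month trend.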

Today AI is prevalent in many of the applications we use and rely on in our everyday lives. Social platforms used by billions of people are powered by rich AI systems for recommendations, targeting and engagement. AI helps industries forecast travel demand, predict weather patterns and navigate driving routes. Facial recognition is built into our phones and the photos and videos we consume. Chatbots and AI assistants power many of our service interactions with favorite brands, and AI is increasingly used to simplify payment and settlement processes. In fact, there is probably little in your daily routine that AI doesn’t touch.

Related Article: Journey Through Time: How Chatbots Have Evolved Over the Decades

Generative AI’s Foundation as a Communication Platform

Generative AI and LLMs can be traced back to the 1950s, when the field of AI was first established. ELIZA, created in 1966 at MIT by Joseph Weizenbaum, was an early conversational program and a distant ancestor of today’s LLM-powered chatbots. It explored the relationship and communication patterns between humans and machines, using basic pattern matching and substitution to give an illusion of human understanding.

The initial use case was psychotherapy, specifically the person-centered approach known as Rogerian therapy, which is founded on the idea that people are motivated toward positive psychological functioning.

The reason this use case worked so well is that the illusion of intelligence is most convincing when the conversation is limited to talking about yourself and your life. ELIZA looks for keywords in a user’s statement and reflects them back as a simple rephrasing, a question or a generic prompt like “tell me more.”
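To show how little machinery that illusion requires, here is a minimal, hypothetical sketch of ELIZA-style keyword matching and reflection in Python. The patterns and canned responses are invented for illustration and are not taken from Weizenbaum’s original script.

```python
import random
import re

# A tiny, hypothetical ELIZA-style script: keyword patterns paired with
# response templates that reflect the user's own words back at them.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bmy (.+)", re.I),
     ["Tell me more about your {0}.", "Why does your {0} matter to you?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["Why do you say you are {0}?"]),
]
FALLBACKS = ["Tell me more.", "Please go on.", "How does that make you feel?"]

# Simple first-person/second-person swaps so reflections read naturally.
REFLECTIONS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement: str) -> str:
    # Scan for the first matching keyword pattern; otherwise use a generic prompt.
    for pattern, templates in RULES:
        match = pattern.search(statement)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

print(respond("I feel anxious about my job"))  # e.g. "Why do you feel anxious about your job?"
print(respond("The weather is nice"))          # e.g. "Tell me more."
```

A real ELIZA script ranks keywords and handles many more patterns, but the reflect-and-prompt loop is the same basic trick.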

Generative AI had a major breakthrough in 2014, when Generative Adversarial Networks (GANs) were introduced in a paper by Ian Goodfellow and fellow researchers. A GAN is a machine learning architecture in which two neural networks contest each other (hence the term “adversarial”) in a zero-sum game.

GANs consist of two major parts: a generator and a discriminator. The generator’s role is to create new data instances that resemble the training data, while the discriminator tries to distinguish real data instances from the synthetic data the generator creates. The two are trained simultaneously in an adversarial manner, playing a game of cat and mouse. This iterative process continues until the generator produces data that is indistinguishable from the original and the discriminator can no longer tell the difference. This moment is called convergence and represents a state of equilibrium in which both networks have learned.
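For readers who want to see that cat-and-mouse dynamic in code, here is a minimal, hypothetical GAN training loop in PyTorch. The tiny network sizes and the one-dimensional Gaussian toy data are our own illustrative choices, not anything from the original paper.

```python
import torch
import torch.nn as nn

# Toy setup: teach the generator to produce 1-D samples resembling N(4, 1.25).
NOISE_DIM, DATA_DIM, BATCH = 8, 1, 64

generator = nn.Sequential(nn.Linear(NOISE_DIM, 16), nn.ReLU(), nn.Linear(16, DATA_DIM))
discriminator = nn.Sequential(nn.Linear(DATA_DIM, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = 4 + 1.25 * torch.randn(BATCH, DATA_DIM)   # samples of the "true" data
    fake = generator(torch.randn(BATCH, NOISE_DIM))  # synthetic samples

    # Discriminator step: label real data 1, generated data 0.
    d_loss = bce(discriminator(real), torch.ones(BATCH, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(BATCH, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call fakes "real".
    g_loss = bce(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Near convergence the discriminator outputs ~0.5 for both real and fake,
# meaning it can no longer tell the two apart.
print(generator(torch.randn(5, NOISE_DIM)).detach().squeeze())
```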


GANs play a critical role in generative learning, and while they do not directly power LLMs like ChatGPT, they contribute to training data generation, data augmentation, dialogue refinement and improved conversational flow.

This new era of generative AI delivers the first truly successful language-based interface layer, exposing a powerful set of new tools and APIs. That power is stimulating fresh thinking about business models, interaction models, the productivity of work and the overall economics of business, creating a step-change in the performance of AI and its potential to drive enterprise value.

Related Article: Generative AI Timeline: 9 Decades of Notable Milestones

The Evolution of Chatbots Culminates in Modern AI Apps

There is a long history of trying to make AI accessible to consumers. Chatbots have been around since the '60s, though they were then more aptly called computer programs and weren’t really accessible to the general population. Nearly four decades before the current wave of chatbots arrived, a program called Racter could generate English prose. Created by William Chamberlain and Thomas Etter and later published by Mindscape, it famously generated the text for the published book “The Policeman's Beard Is Half Constructed.”

Then there was Jabberwacky, which aimed to "simulate natural human chat in an interesting, entertaining and humorous manner." Created by Rollo Carpenter in 1988, it used an AI technique called contextual pattern matching, relying on feedback and learning from live dialogue. Because it was not finite or rules-based like traditional chatbots, it could be taught slang, games, jokes and many other non-traditional language traits.

In 1995, A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) was created by Richard Wallace. It was the first real interactive interface for engaging in conversation by reacting to human input. It won numerous awards for responding reasonably naturally to human interaction.

From there came the first wave of voice chatbots, such as Siri, Alexa, Cortana and Google Now, beginning around 2010. These intelligent personal assistants used natural language interfaces and paved the way for the AI chatbots that followed.

Despite this evolution, many voice assistants continued to frustrate users because their accuracy and usefulness were not always there. They could understand basic commands and respond to simple questions but lacked broad language comprehension. They were also typically locked to a particular hardware platform rather than being readily accessible across devices.

The history of public chatbots has not all been positive. Back in 2016, Microsoft launched its first AI chatbot, Tay (an acronym for Thinking About You). Tay started off innocently enough, replying to various users' tweets, but quickly went off the rails when it began posting offensive and inflammatory tweets, causing Microsoft to shut it down only 16 hours after launch. While subsequent launches were attempted, Tay never really saw the light of day again, though it went on to influence Microsoft’s future AI practices.

ChatGPT Changed the AI Chatbot Game

The evolution of LLMs has certainly broadened the vocabulary, dialogue and overall sophistication of chatbots. But while sophisticated LLMs had existed for several years, there was still no convenient app for users to engage with an AI platform.

This all changed on Nov. 30, 2022, when OpenAI, an American artificial intelligence research laboratory, released ChatGPT. ChatGPT ushered in a new breed of AI applications that provide conversational interfaces to assist in a wide variety of tasks such as essay writing, code creation and content summarization.

It rapidly became the world’s fastest-growing application and a catalyst for a global race on AI.

The past decades of advances in research have helped elevate AI from a promising idea to an essential capability in people’s daily lives. This momentum will continue to propel the technology to even greater impact in the future.

Up next in this series: we'll explore the foundation of generative AI.


About the Author
Frank Palermo

Frank Palermo is currently Chief Operating Officer (COO) for NewRocket, a prominent ServiceNow partner and a leader in providing enterprise Agentic AI solutions. NewRocket is backed by Gryphon Investors, a leading middle-market private investment firm.

Main image: Нелли Овчинникова