Feature

What Is Artificial Intelligence (AI)? A Guide for Business Users

By Sharon Fisher
A complete guide to AI technology.

Everywhere you look, companies are touting their use of artificial intelligence (AI). People are wondering if AI will take over the world, remove all the drudgery from our lives, take away jobs, create jobs, destroy human creativity or make us more creative. The answer lies somewhere in between. But what even is AI?


What is Artificial Intelligence?

“AI is a machine’s ability to perform the cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with the environment, problem solving and even exercising creativity,” McKinsey & Company says.

That doesn’t mean machines actually think or are creative. Instead, AI simulates human intelligence through the use of algorithms, data and computational power, according to the University of Illinois Chicago. Before an AI system can do that, it has to learn from data, and there are several ways this learning can happen.

Supervised vs. Unsupervised Learning

Traditionally, AI systems learned by having a human explicitly give an AI system information, a process known as supervised learning, according to IBM. It involves using labeled data sets to train algorithms to classify data or predict outcomes accurately by having humans pair each training example with an output label. “The goal is for the model to learn the mapping between inputs and outputs in the training data, so it can predict the labels of new, unseen data,” IBM says.

While this method was reliable, it was also slow. Moreover, in a number of ways, supervised learning gives the AI system the same limits as a human, because the human needs to provide the labels for input and output variables, according to IBM.
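
To make the idea concrete, here is a minimal sketch of supervised learning: a one-nearest-neighbor classifier that learns the mapping from human-labeled examples and predicts labels for new, unseen inputs. The data and labels below are illustrative toys, not drawn from IBM's materials.

```python
# Minimal sketch of supervised learning: a 1-nearest-neighbor classifier.
# Each training example is paired with a human-provided label, and the
# model predicts labels for new, unseen inputs.

def predict(train, query):
    """Return the label of the training example closest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda ex: dist(ex[0], query))
    return label

# Labeled training set: (features, label) pairs supplied by a human.
train = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 5.0), "dog"),
    ((4.8, 5.2), "dog"),
]

print(predict(train, (1.1, 1.0)))  # → cat
print(predict(train, (5.1, 4.9)))  # → dog
```

The model never generalizes beyond what the labels describe, which is the limitation IBM notes: the human's labeling effort bounds what the system can learn.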

As AI systems have become more complex, they have also become capable of unsupervised learning. Rather than being told directly how to categorize data, the systems were simply fed large amounts of data to make their own conclusions. “They can automate the extraction of features from large, unlabeled and unstructured data sets and make their own predictions about what the data represents,” IBM says.
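
The contrast can be sketched with k-means clustering, a classic unsupervised algorithm: it receives only unlabeled points and groups them on its own. The toy data and the simple initialization below are illustrative choices, not a production implementation.

```python
# Minimal sketch of unsupervised learning: k-means clustering.
# The algorithm receives only unlabeled points and groups them itself.

def kmeans(points, k, iters=20):
    # Simple deterministic initialization: evenly spaced points.
    centers = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: sum(
                (a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(d) / len(cl) for d in zip(*cl))
    return centers, clusters

points = [(0.9, 1.1), (1.0, 0.9), (1.1, 1.0),
          (8.0, 8.1), (7.9, 8.0), (8.1, 7.9)]
centers, clusters = kmeans(points, k=2)  # two groups emerge, no labels needed
```

No one tells the algorithm what the two groups mean; it discovers the structure from the data alone, which is the essence of the cat-detecting experiment described next.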

One of the classic examples of AI learning happened in 2012 at Google, which the company describes in a blog post. “If we think of our neural network as simulating a very small-scale newborn brain and show it YouTube video for a week, what will it learn?” says Jeff Dean, then a Google senior fellow and SVP of Google AI, and Andrew Ng, then a visiting faculty member. “Our hypothesis was that it would learn to recognize common objects in those videos.”

And it did. “Indeed, to our amusement, one of our artificial neurons learned to respond strongly to pictures of … cats,” Dean and Ng say. “Remember that this network had never been told what a cat was, nor was it given even a single image labeled as a cat. Instead, it discovered what a cat looked like by itself from only unlabeled YouTube stills. That’s what we mean by self-taught learning.”

Removing the human from the equation enabled AI systems to learn much more quickly, because they were no longer limited by the speed, insights and work hours of a human. “Because deep learning doesn’t require human intervention, it enables machine learning at a tremendous scale,” IBM says. In particular, unsupervised learning is well suited to natural language processing (NLP), computer vision and other tasks that involve the fast, accurate identification of complex patterns and relationships in large amounts of data, according to IBM. The downside, though, is that “unsupervised learning methods can have wildly inaccurate results, unless you have human intervention to validate the output variables,” IBM says.

Other types of learning include semi-supervised learning and reinforcement learning, according to the software developer WEKA. Semi-supervised learning is a combination of supervised and unsupervised learning: it provides the learning algorithm with mostly unlabeled (unsupervised) data, along with a smaller portion of labeled (supervised) training data. “This often supports more rapid and effective learning on the part of the algorithm,” WEKA says.
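
A common semi-supervised technique is self-training: a model seeded with a few labeled examples pseudo-labels the larger unlabeled pool, expanding its own training set. The sketch below uses toy one-dimensional data and a nearest-neighbor rule; the names and numbers are illustrative.

```python
# Minimal sketch of semi-supervised learning via self-training:
# a small labeled set seeds the model, which then pseudo-labels
# the larger unlabeled set and learns from its own predictions.

def nearest_label(labeled, x):
    """Label `x` with the label of the closest labeled example."""
    return min(labeled, key=lambda ex: abs(ex[0] - x))[1]

labeled = [(1.0, "low"), (9.0, "high")]      # small supervised portion
unlabeled = [1.2, 0.8, 8.7, 9.3, 1.1, 9.0]   # larger unsupervised portion

# Pseudo-label the most confident (closest) points first and add them
# to the training set, so later points have more neighbors to lean on.
for x in sorted(unlabeled, key=lambda x: min(abs(x - ex[0]) for ex in labeled)):
    labeled.append((x, nearest_label(labeled, x)))

print(nearest_label(labeled, 8.5))  # → high
```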

Reinforcement learning is often used in simulated and game environments, where the AI system gradually learns the best action to take in a specific setting, according to WEKA.
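
That trial-and-error process can be sketched with tabular Q-learning, a standard reinforcement learning algorithm, in a tiny simulated environment. Every number here (corridor size, learning rate, episode count) is an illustrative choice.

```python
import random

# Minimal sketch of reinforcement learning: tabular Q-learning in a
# five-cell corridor where reward is earned only at the right end.
# The agent gradually learns the best action for each state by trial.

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                     # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                   # training episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action,
        # occasionally explore a random one.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: move the estimate toward the reward plus
        # the discounted value of the best next action.
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)  # the learned best action per state: always move right
```

No one tells the agent to head right; the reward signal alone shapes the policy, which is why this approach works well in games and simulations.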

In addition, there is self-supervised learning, which generates implicit labels from unstructured data rather than relying on labeled data sets, and transfer learning, in which knowledge gained through one task or data set is used to improve model performance on another related task or different data set, according to IBM.
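
The self-supervised idea can be sketched with a toy next-word predictor: the "label" for each word is simply the word that follows it in the raw text, so the data labels itself. The corpus below is a made-up illustration.

```python
from collections import Counter, defaultdict

# Minimal sketch of self-supervised learning: a bigram next-word model.
# The labels (each word's successor) are generated implicitly from the
# unstructured text itself; no human labeling is involved.

corpus = "the cat sat on the mat the cat ate the fish".split()

# Every (word, next word) pair is a training example the data
# provides for free.
counts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    counts[word][nxt] += 1

def predict_next(word):
    """Predict the most frequent follower of `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # → cat
```

Large language models are trained on essentially this objective at vastly greater scale, which is why self-supervision matters so much in modern AI.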

Artificial General Intelligence vs. Artificial Narrow Intelligence

Some people also break the definition of AI down further, between artificial general intelligence and artificial narrow intelligence. “Artificial general intelligence (AGI) was coined to describe AI systems that possess capabilities comparable to those of a human,” McKinsey says. “In theory, AGI could someday replicate human-like cognitive abilities, including reasoning, problem solving, perception, learning and language comprehension.” But that’s decades away, at least, McKinsey believes.

In comparison, the type of AI we’re typically accustomed to today in systems such as Siri and chatbots is artificial narrow intelligence (ANI), also known as “weak AI.” “Weak AI or narrow AI are AI systems limited to computing specifications, algorithms, and specific tasks they are designed for,” according to AWS. “For example, previous AI models have limited memories and only rely on real-time data to make decisions. Even emerging generative AI applications with better memory retention are considered weak AI, because they cannot be repurposed for other domains.”

This TED video offers an additional explanation of AI technology by Microsoft AI CEO Mustafa Suleyman, who co-founded DeepMind and Inflection.

See more: 10 Top AI Certifications for Pros Without Technical Backgrounds

What are the Features of Artificial Intelligence?

Depending on what they’re intended to do, AI systems have different features, but they generally have several in common:

  • Machine learning: Focuses on using data and algorithms to imitate the way that humans learn, gradually improving its accuracy, according to IBM
  • Deep learning: A type of machine learning; processes data in a way inspired by the human brain, including recognizing complex patterns in pictures, text, sounds and other data, to produce accurate insights and predictions, according to AWS; neural networks perform a similar function
  • Natural language processing: Understands human language rather than requiring programming, according to Cloudflare
  • Computer vision: Identifies and understands objects and people in images and videos, according to Microsoft
  • Content generation: Generates, optimizes and repurposes content, such as text, images, audio, and videos, according to Copy.ai
  • Data analysis: Analyzes large data sets at scale to surface trends and uncover insights, according to Google
  • Predictive analytics: Uses data analysis to make predictions about future outcomes using historical data combined with statistical modeling, data mining techniques and deep learning, according to IBM
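
The predictive analytics item above can be sketched with the simplest possible statistical model: a least-squares line fit to historical data, extrapolated one step into the future. The monthly sales figures are invented for illustration.

```python
# Minimal sketch of predictive analytics: fit a least-squares line to
# historical data and extrapolate a future value.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

months = [1, 2, 3, 4, 5]
sales = [100.0, 110.0, 120.0, 130.0, 140.0]  # historical data

a, b = fit_line(months, sales)
forecast = a * 6 + b        # predict month 6 from the trend
print(forecast)             # → 150.0
```

Real predictive analytics layers statistical modeling, data mining and deep learning on top of this basic idea, but the core move (learn from history, project forward) is the same.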

These functions form what’s called the data layer, which collects and prepares data for AI applications. AI systems are built as a multi-layered model, according to AWS. The other two layers that sit on top of the data layer are:

  • Model layer: The foundation models and large language models (LLMs) that perform complex digital tasks. Foundation models are deep learning models, trained on broad data, that perform tasks based on user inputs; organizations can also customize existing foundation models with internal data to add AI capabilities to existing applications or create new ones, according to AWS.
  • Application layer: The customer-facing part of AI architecture, which lets end users interact with AI systems, such as by completing specific tasks, generating content, providing information or making data-driven decisions.

How Does Artificial Intelligence Work?

The development of AI systems includes several steps, according to HubSpot:

  • Input: Providing data and the desired outcomes to the AI system; the data can be text, images, video and audio; the data can be structured and unstructured
  • Processing: The AI system takes the data and interprets it, using the algorithms it has learned to recognize patterns in the data
  • Data outcomes: Predicting the outcome
  • Adjustments: If the AI is failing, adjusting the algorithms and running the AI again
  • Assessments: Analyzing the data and the result to see what the AI learned
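
The steps above can be sketched as a simple run-assess-adjust loop. The "model" here is a single threshold parameter and every number is made up for illustration; real systems adjust millions of parameters, but the cycle is the same.

```python
# Illustrative sketch of the input → processing → outcomes → adjust →
# assess loop: feed in data with desired outcomes, predict, measure
# errors, and nudge the model until the assessment passes.

# Input: examples paired with the desired outcome (0 or 1).
data = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]

threshold = 0.0                       # the "model": one parameter
errors = len(data)
while errors > 0:
    # Processing and data outcomes: predict with the current model.
    predictions = [1 if x > threshold else 0 for x, _ in data]
    # Assessment: compare predictions against the desired outcomes.
    errors = sum(p != y for p, (_, y) in zip(predictions, data))
    # Adjustment: if the model is failing, nudge it and run again.
    if errors > 0:
        threshold += 0.25

print(threshold, errors)  # → 0.5 0
```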

Why is Artificial Intelligence Important?

In the same way that computers improved on tasks human beings once did by hand, AI improves on the way computers used to perform tasks. Here are some examples, according to Colorado State University:

  • Automation: Automates a repetitive task previously done manually
  • Enhancement: Makes products and services smarter and more effective with capabilities, such as improving customer service and delivering better product recommendations
  • Accuracy: Uses data to become more accurate than humans, making better decisions
  • Analysis: Analyzes data more quickly than humans, allowing it to find patterns more quickly; can also analyze more data than humans, allowing it to find patterns humans might miss
  • ROI: Makes data more valuable, because AI does a better job analyzing complex relationships

The big reason why AI is important? Money. IDC predicts in a report that between adopting AI, using AI in existing business operations and using AI to deliver better products and services to business and consumer customers, AI will have a cumulative global economic impact of $19.9 trillion through 2030 and drive 3.5% of global GDP in 2030.

Moreover, between increased spending on AI solutions and services driven by accelerated AI adoption, economic stimulus among AI adopters — such as increased production and new revenue streams — and increasing revenue along the AI supply chain, the result will be that by 2030, every new dollar spent on business-related AI solutions and services will generate $4.60 into the global economy, in terms of indirect and induced effects, according to IDC.

“As a result, AI will affect jobs across every region of the world, impacting industries, like contact center operations, translation, accounting, and machinery inspection,” IDC says. It’s not surprising that according to IDC figures, 98% of business leaders view AI as a priority for their organizations.

AI is offering companies the opportunity to increase productivity – which has been decreasing around the world in the past decade – on a global scale, says Michael Spence, a Nobel economics laureate, in an article for the International Monetary Fund.

“AI is our best chance at relaxing the supply-side constraints that have contributed to slowing growth, new inflationary pressures, rising costs of capital, fiscal distress and declining fiscal space and challenges in meeting sustainability goals,” Spence says. “And the reason is that AI has the potential not only to reverse the downward productivity trend, but over time, to produce a major sustained surge in productivity.”

What are Artificial Intelligence Use Cases?

Here are some of the many AI use cases in different industries, according to IBM:

  • Automotive: AI predicts and adjusts production, makes workflows more efficient and powers robots to build vehicles and gauge their quality. Similarly, self-driving cars use AI-powered computer vision to interact with the world around them.
  • Education: AI provides more personalized instruction, including language translation and transcription, and it gauges student performance and looks for plagiarism.
  • Energy: AI forecasts demand, looks for ways to conserve energy, manages the grid and interacts with customers.
  • Financial services: AI forecasts trends, trades stocks, detects fraud and helps decide on granting loans. Similarly, in the insurance industry, AI helps process routine claims and appraisals and calculates payments.
  • Health care: AI diagnoses cancer from radiology imaging, diagnoses diseases and creates treatment plans, automates customer interaction and analyzes genetic information. Similarly, in the pharmaceutical industry, AI analyzes data more quickly and accurately.
  • Manufacturing: AI predicts trends, creates multiple design options, improves production and predicts equipment failures.
  • Retail: AI predicts customer demands, recommends and advertises products, communicates with existing and potential customers and manages inventory.
  • Transportation: AI powers traffic applications, such as Google Maps, Uber and Lyft.

Across all industries, AI is expected to begin playing a major role in a number of business roles, according to a report by High Peak Software. These include:

  • Accounting: Citing KPMG, 54% of companies are integrating AI tools for financial task automation.
  • Customer service: Citing HubSpot, 79% of professionals recognize AI’s capabilities in automating responses and enhancing service quality, because AI chatbots and other applications have the potential for more personalized and responsive interactions.
  • Market research analysts: Citing Qualtrics, 97% of market research analysts believe AI may replace their roles within the next decade.
  • Salespeople: Citing Bain, 64% of sales executives reportedly foresee an uptick in using automated processes during the sales cycle, including data analysis, lead scoring and personalized marketing.
  • Virtual assistants: Citing Technavio, an 11.79% compound annual growth rate is forecast for the virtual assistant market between 2020 and 2025
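
A compound annual growth rate (CAGR) like the Technavio figure compounds yearly rather than adding up linearly. This sketch shows the growth an 11.79% CAGR implies over the five years from 2020 to 2025; the starting market size of 100 is purely illustrative.

```python
# What an 11.79% CAGR implies: apply the rate as yearly compounding.
cagr = 0.1179
start = 100.0                      # illustrative index, not real data
value = start * (1 + cagr) ** 5    # five years of compounding
print(round(value, 1))             # roughly 1.75x the starting size
```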

See more: 5 AI Case Studies

Which Industries are Adopting Artificial Intelligence?

These sectors are leading in AI adoption rates, according to a North Carolina Department of Commerce report on the U.S. Census Bureau's Business Trends and Outlook Survey:

  • Information: 18%
  • Professional, scientific and technical services: 12%
  • Educational services: 9%
  • Real estate, rental and leasing: 8%
  • Management: 8%

Artificial Intelligence Companies

A number of major tech companies supply the underlying technology AI runs on, whether hardware or software:

  • Amazon: Uses AI in its Alexa line and product recommendation tool; released Amazon Q GenAI assistant; offers lines of cloud-based AI products
  • Apple: Rolling out Apple Intelligence, a suite of text and graphics AI tools for its hardware devices
  • Google: Acquired DeepMind; developed Gemini chatbot and line; offers lines of cloud-based AI products
  • IBM: Released Watson line, which has been used for purposes ranging from health care to playing games
  • Meta: Shared open-source foundation model Llama; created a generative AI product, Meta AI; uses AI to generate and place Facebook ads; Facebook itself serves as a giant data source for AI applications
  • Microsoft: Released Copilot line; partnered with OpenAI early on and holds a large equity stake in OpenAI; also collaborates with NVIDIA; offers lines of cloud-based AI products
  • NVIDIA: Makes graphics processing unit (GPU) chips, which are heavily used in AI because of their complex computation processing capability; also offers AI software
  • OpenAI: The release of its generative AI chatbot ChatGPT and foundation model GPT led to the most recent groundswell of interest in AI

See more: 10 Top Publicly Traded AI Companies

About the Author
Sharon Fisher

Sharon Fisher has written for magazines, newspapers and websites throughout the computer and business industry for more than 40 years and is also the author of "Riding the Internet Highway" as well as chapters in several other books. She holds a bachelor’s degree in computer science from Rensselaer Polytechnic Institute and a master’s degree in public administration from Boise State University. She has been a digital nomad since 2020 and has lived in 18 countries so far.

Main image: By Sigmund.