Editorial

Debunking 5 Common GenAI Myths

4-minute read
By Randall Hunt
Separate fact from fiction in GenAI.
The buzz around GenAI has reached a fever pitch in recent months, and the line between hype and reality grows blurrier as companies race to claim that they, too, have implemented the technology. There are countless theories on how GenAI will change the future of business, and of life, as we know it. But before we begin prepping for Skynet, it’s time to debunk some of the common myths surrounding GenAI and build a more realistic understanding of what this technology means for us today, tomorrow and beyond.

Myth 1: GenAI Was Built to be an Assistant

With so much speculation about if, when and how GenAI will take over aspects of our jobs, it’s important to understand how this technology actually works and how it came to exist. GPT stands for generative pre-trained transformer, the architecture used by most large language models (LLMs). You tokenize huge swaths of text, meaning you chunk up and encode words or concepts into numbers, and you train a model by showing it sequences of tokens and asking it to predict the next token or fill in missing tokens. Do this millions of times, and the model begins to generalize common rules of language, information and other concepts.
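The tokenize-then-predict setup can be sketched in a few lines. This is a toy illustration, not a real training pipeline: the word-level vocabulary and tiny corpus here stand in for the subword tokenizers and trillion-token datasets used in practice.

```python
# Toy illustration of the LLM training setup: tokenize text into integers,
# then turn the token stream into (context, next-token) prediction examples.
corpus = "the model predicts the next token in the sequence"
words = corpus.split()

# Build a word-level vocabulary (real LLMs use subword tokenizers like BPE).
vocab = {w: i for i, w in enumerate(sorted(set(words)))}
tokens = [vocab[w] for w in words]

# Each training example: a window of context tokens and the token that follows.
context_size = 3
examples = [
    (tokens[i : i + context_size], tokens[i + context_size])
    for i in range(len(tokens) - context_size)
]

for context, target in examples[:2]:
    print(context, "->", target)
```

A real model then adjusts its weights so that, given the context window, it assigns high probability to the target token; shown enough such pairs, it generalizes.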

LLMs are trained on trillions of tokens drawn from Wikipedia, Common Crawl, Stack Overflow and even teenage angst on LiveJournal. We’ve had to trick the technology into behaving as a helpful assistant, when in fact the LLM’s only job is to predict the next token. At this stage, it’s really just an extremely advanced form of autocomplete. Not that autocomplete can’t be useful.

Myth 2: GenAI Can Make Me an Expert in Anything

Interestingly, if a user asks a GenAI model to answer a query as an expert on a certain topic, the results can actually be worse than if the user omits that instruction. Why? The set of genuinely expert data the model was trained on is close to zero, because the internet simply doesn’t contain that information. When instructed to behave as an expert on a topic it is unfamiliar with, the model has less foundational information to draw upon and often begins to hallucinate details that aren’t there.

Furthermore, current models lack the ability to recognize an error in their reasoning, back up and trace another path. There are a number of ways to mitigate this behavior, but sometimes even those techniques fail, and a model remains convinced that two plus two equals five. Humans can go back and say, “I screwed up. Let me fix that.” GenAI will often double down on being wrong and keep following branches of its predictive model that are simply incorrect.

See more: 5 Generative AI Trends in the Digital Workplace

Myth 3: There Aren’t Going to be Any Coders in Five Years

This is absolutely incorrect. Coding is actually going to become more accessible in the short term. Thanks to the proliferation of AI code generators that help novice programmers put together different building blocks of code, it’s going to be easier to learn to code, and there will actually be more coders in the short term. GenAI will, however, change the way we code. If it increases the efficiency of a developer’s work, the dollar cost per feature goes down, improving the bottom line, and organizations can potentially hire more developers, not fewer. Overall, developers will become more efficient, but they aren’t going away.

Myth 4: GenAI is One Step Away From Skynet

For those unfamiliar with “Terminator,” Skynet is the fictional, evil artificial general superintelligence in the movie franchise. In real life, it’s important to make the distinction that GenAI is not sentient and is not artificial general intelligence (AGI). In fact, AI and machine learning (ML) have been operating in the background of our daily lives for quite some time. Every time Amex flags a fraudulent transaction or Netflix makes a great recommendation, that is AI at work. We’ve even built simpler versions of modern generative AI in the past with techniques like recurrent neural networks (RNNs) and Markov chains. Transformers provided a way to track attention and relationships across longer sequences, which allowed for much more powerful content generation. Only now that this research is more public facing and works in a more conversational way do people think we’re just a step away from an all-powerful sentient AI that will take all of our jobs. To these folks I simply say: “Ok, Doomer.”
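The Markov chain approach mentioned above is easy to demonstrate: learn which word tends to follow which, then generate text by repeatedly sampling a successor. A toy word-level sketch (the corpus is made up for illustration):

```python
import random
from collections import defaultdict

# Learn bigram transitions: which words follow each word in the corpus.
corpus = ("the model predicts the next token and "
          "the next token follows the model")
transitions = defaultdict(list)
words = corpus.split()
for current, following in zip(words, words[1:]):
    transitions[current].append(following)

def generate(start, length, rng):
    # Walk the chain: each next word depends only on the previous one.
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(transitions[out[-1]]))
    return " ".join(out)

print(generate("the", 8, random.Random(0)))
```

The one-word memory is exactly the limitation transformers removed: attention lets the model condition each token on relationships across the whole preceding sequence, not just its immediate neighbor.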

However, there is a huge difference between ChatGPT’s natural language capabilities and true artificial general intelligence. ChatGPT can’t even spell reliably at this point; we are a long way off from bringing sentience to GenAI.

Myth 5: GenAI is Too New of a Technology to be Useful to Enterprises

Understanding the best ways to derive business value from GenAI may be in its nascent stages, but that doesn’t mean the technology can’t be useful to enterprises today. For example, we’ve already seen use cases deployed with success in financial services, education, contact centers, logistics and human resources settings.

What is true is that the technology is rapidly evolving, and today’s best practices will evolve as additional tools develop. It is important that companies view the technology as a loop: experiment, build, deploy, optimize and repeat. The best part of working with hyperscalers is that they take care of large portions of that loop for you, continuously improving the underlying services, which results in better price and performance for your GenAI workloads. These toolchains will continue to evolve and improve over the next several years.

The potential of GenAI in our personal and professional lives is certainly exciting, but it’s helpful to have a realistic understanding of both its capabilities and shortcomings at the present moment and separate hype from reality. While ChatGPT certainly thrust GenAI into the spotlight over the past year, its applications in business are best understood from a long-term and strategic perspective, on an organization-by-organization basis.

See more: 5 Generative AI Issues in the Digital Workplace


About the Author
Randall Hunt

Randall Hunt is the VP of cloud strategy and innovation at Caylent, an AWS cloud services company based in Irvine, California. Hunt is a technology leader, investor and hands-on coder. Previously, he led software and developer relations teams at Facebook, SpaceX, AWS, MongoDB and NASA. He spends most of his time listening to customers, building demos, writing blog posts and mentoring junior engineers. Python and C++ are his favorite programming languages, but he begrudgingly admits that JavaScript rules the world.

Main image: By Gabriel Vasiliu.