Feature

Technology’s Limits Could Be a Barrier to AI’s Advancement

By Mark Feffer
GenAI has run out of data and is now making its own, prompting experts to warn of the fantasy world we’re entering.

The development of large language models may be approaching a natural barrier to continued advancement. According to a Business Insider report, OpenAI’s upcoming model, dubbed Orion, shows more modest improvements than earlier versions did, failing to deliver the same leap users experienced moving from GPT-3 to GPT-4.

OpenAI isn’t alone in facing this challenge. The report cites growing evidence of a “performance wall” affecting a number of AI models, suggesting that their growth may be plateauing despite the continued hype around the technology.

In response, AI developers are looking for new ways to train and operate their models. 

New Thinking on New Thinking

Some AI experts believe the time’s come to move away from the traditional scaling approach — building bigger models with more data and computing power. 

Instead, some developers are exploring ways to have their models think in more human-like ways: rather than immediately picking an answer to a query, a model generates a number of candidate responses, evaluates them and then chooses the best one. This approach also lets models allocate processing power where it’s needed most, such as math, coding or other operations that require more complex reasoning.
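The generate-then-evaluate loop described above is essentially a best-of-N strategy. Here is a minimal sketch of that idea in Python; the candidate generator and scoring function are toy stand-ins for a real model and verifier, and all names are illustrative rather than taken from any vendor’s API.

```python
import random

def generate_candidates(query, n=5, seed=0):
    """Stand-in for a model sampling n candidate answers.
    Toy task: noisy guesses at 12 * 17 (true answer 204)."""
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    return [204 + rng.randint(-10, 10) for _ in range(n)]

def score(candidate):
    """Stand-in verifier: higher is better (closeness to the truth)."""
    return -abs(candidate - 204)

def best_of_n(query, n=5):
    """Sample several answers, evaluate each, and return the best one
    instead of committing to the first sample."""
    candidates = generate_candidates(query, n)
    return max(candidates, key=score)

answer = best_of_n("What is 12 * 17?", n=8)
print(answer)
```

Spending more compute here means raising `n` (more candidates to evaluate), which mirrors the trade-off the article describes: extra inference-time “thinking” in place of a bigger model.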

“It turned out that having a bot think for just 20 seconds in a hand of poker got the same boosting performance as scaling up the model by 100,000x and training it for 100,000 times longer,” Noam Brown, a researcher at OpenAI, said recently at a TED AI conference in San Francisco.

One issue pressing this shift is data, or the lack of it. Developers have largely run through the online data used to train their models, and industry researcher Epoch AI estimates the supply of usable textual data could be exhausted by 2028.

As a limited workaround, developers are using “synthetic” data that’s been generated by AI itself. That doesn’t work as well as using “factual” data, experts say. For one thing, synthetic data can adversely impact a model’s quality and reliability, said the data-science website Dataconomy, prompting companies like OpenAI to look at implementing ways to validate data.

Related Article: Why Clean Data Is Foundational for Effective AI

A Fantasy World

AI analyst Gary Marcus believes the idea that LLMs can keep scaling all the way to artificial general intelligence is a fantasy. A big part of his reasoning is economic: training is expensive to begin with, he said, and becomes more so as the technology scales. Marcus believes LLMs will eventually become a commodity, tanking valuations in a space where many business plans rest on a “false premise.”

In a separate post, Marcus argued that we have “reached a point of diminishing returns for pure scaling. Just adding data and compute and training longer worked miracles for a while, but those days may well be over.”

From a business point of view, investors and industry leaders have been working on the assumption that bigger LLMs will lead to more advanced AI. But Marcus believes the expenses involved — AI chips, training and electricity-hungry data centers — will continue to rise until the cost of training more complex models becomes higher than businesses and investors will be able to stomach.

As Inc. pointed out: “Chatbots and image generators have captured the zeitgeist but haven’t yielded anything close to a revenue path commensurate with the industry’s titanic investment.”

Related Article: Maximize AI on a Budget: A Practical Guide for Decision Makers

Is the GenAI Bubble Bursting?

There are signs that AI hasn’t been keeping up with its own hype. 

Some 79% of technology executives surveyed by CNBC said their companies have tried Microsoft 365 Copilot, and half said they weren’t sure it was worth its $30-per-user monthly fee. While 50% said they’ll deploy the product to all of their employees, a third are still in the testing phase and 17% have decided not to fully adopt it. 


Meanwhile, 70% of the senior executives surveyed by Deloitte have moved less than a third of their generative AI experiments into full use. And although the company’s third-quarter State of Generative AI in the Enterprise report found their interest in AI is “high” or “very high,” that figure is down 11% since the first quarter.

While all of this suggests AI development may become a little less frenetic in 2025, make no mistake: it will still move forward. Scaling hasn’t hit a full stop; it’s simply slowing down. While training remains expensive, large language models are already available through a number of channels, an open source community is growing and enterprises are developing their own models to fit their businesses. 

When the dot-com bubble burst in 2000, investments were lost, companies failed and wide swaths of workers were laid off. Today, the internet and world wide web permeate every aspect of our lives. Companies like Amazon, eBay and Tripadvisor navigated the bubble’s crippling losses and prospered. So while investment may cool off and development may slow, AI itself isn’t going anywhere.  

About the Author
Mark Feffer

Mark Feffer is the editor of WorkforceAI and an award-winning HR journalist. He has been writing about Human Resources and technology since 2011 for outlets including TechTarget, HR Magazine, SHRM, Dice Insights, TLNT.com and TalentCulture, as well as Dow Jones, Bloomberg and Staffing Industry Analysts. He likes schnauzers, sailing and Kentucky-distilled beverages.

Main image: Ben Wicks | unsplash