"High Score" message on an arcade game
Feature

Tokenmaxxing Is Taking Over Workplaces — And It's Measuring All the Wrong Things

By Scott Clark
Some companies now track employee AI usage as a productivity metric. The practice, called tokenmaxxing, is raising new concerns.

As AI adoption accelerates, a new and unexpected behavior is coming to light: employees being evaluated not just on their output, but on how much they use AI systems to produce it.

Inside some organizations, internal dashboards now track token usage, leaderboards reward heavy users and management encourages employees to maximize their interactions with AI tools.

This trend, now referred to as “tokenmaxxing,” raises important questions about how businesses measure productivity in an AI-driven workplace.


What Is Tokenmaxxing? The New Workplace Metric No One Asked For

Tokenmaxxing is a practice where employees are evaluated based on how much they use AI systems. Instead of focusing only on outcomes like completed tasks or business impact, some companies track AI interaction as a productivity signal.

At the center of this trend are "tokens," units of text processed by large language models (LLMs). Each prompt, response or workflow consumes tokens that can be counted per user. This data creates insight into employees' frequency and intensity of AI tool use.

Now, some workplaces track prompt counts, token consumption and usage frequency. Some also rank employees by AI interaction — identifying power users — making token usage a proxy for productivity.  

Example of a token tracking dashboard (Simpler Media Group)

Related Article: Anthropic’s Claude Opus 4.6 Hits 1M Tokens — But Bigger Context Comes at a Cost

Under Pressure to Show ROI, Companies Turn to Numbers They Can See

Tracking token usage is less about deliberate strategy and more about a gap in how enterprises measure AI-driven work.

As businesses invest heavily in generative AI tools, they face pressure to demonstrate strong adoption and return on investment. But unlike traditional software, AI performance is harder to quantify. While confidence in GenAI remains high, businesses are increasingly uncertain about their ability to translate that investment into measurable outcomes, according to recent research from Mindbreeze.

Usage data — token consumption, prompt frequency, session activity — fills that gap, giving visible, quantifiable signals of AI tool use. Without established performance benchmarks, usage acts as a convenient proxy for adoption. 

"Corporate leaders are beginning to feel pressure to show ROI of the procurement of AI tools, and they are sending different types of signals to the market that they have an AI strategy or that they are leveraging AI to improve productivity (including using AI as a reason for layoffs)," said Paul Gomez, AI expert at FTI Consulting

One of these signals, he added, is token usage for individual employees or even the entire company. "McKinsey publicly announced that it reached 100B tokens in 2025. That's not a productivity number. That's a marketing number."

OpenAI award to McKinsey & Co for passing 100 billion tokens of usage

Companies Have Fallen for Vanity Metrics Before

This dynamic is not new. 

Early SaaS platforms were often evaluated based on logins, active users and time spent in the application, long before businesses developed more meaningful ways to measure impact. Token-based metrics follow a similar pattern, offering an immediate but incomplete view of how systems are used.

There is also the financial consideration. Many AI platforms are priced based on token usage, which ties consumption directly to cost. Tracking tokens becomes a way to both monitor spend and justify it. High usage could become evidence that the investment is delivering value — though, in reality, it could reflect inefficient prompting, redundant workflows or poorly defined tasks. 
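Because usage-based pricing is typically quoted per million tokens, the spend math is simple to sketch. The rates below are hypothetical placeholders (real prices vary widely by provider and model), but the shape of the calculation is the point:

```python
# Hypothetical per-million-token rates; real prices vary by provider and model.
PRICE_PER_M_INPUT = 3.00    # USD per 1M input (prompt) tokens
PRICE_PER_M_OUTPUT = 15.00  # USD per 1M output (completion) tokens


def monthly_spend(input_tokens: int, output_tokens: int) -> float:
    """Translate raw token counts into dollars -- the number tokenmaxxing inflates."""
    return (input_tokens / 1_000_000 * PRICE_PER_M_INPUT
            + output_tokens / 1_000_000 * PRICE_PER_M_OUTPUT)


# A team that doubles its prompting doubles its bill, whether or not
# the extra drafts were ever used.
baseline = monthly_spend(40_000_000, 10_000_000)  # 40M in, 10M out
maxxed = monthly_spend(80_000_000, 20_000_000)    # same work, twice the churn
```

The bill scales linearly with consumption, which is why "high usage" reads equally well as proof of value and as evidence of waste.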

Ultimately, token usage is tracked because it's measurable, visible and easy to report. But when usage becomes the metric, usage becomes the goal — regardless of outcomes. 

The Incentive Problem: Looking Busy vs. Being Productive 

Tokenmaxxing shifts the focus from productivity to activity. But more AI use does not necessarily mean better work or more value.

Activity vs. Outcome-Based Measurement in AI Workflows

The difference between measuring AI usage and measuring the actual value generated from that usage:

| Measurement Type | What Is Tracked | What It Signals | Potential Risk |
| --- | --- | --- | --- |
| Token Usage (Activity) | Prompt count, tokens consumed, session frequency | Level of AI interaction | Inflated usage without improved outcomes |
| Output Quality | Accuracy, clarity, relevance | Effectiveness of AI-assisted work | Requires subjective or domain-specific evaluation |
| Efficiency | Time to complete tasks | Productivity improvement | May overlook quality tradeoffs |
| Business Impact | Revenue, engagement, cost savings | Real enterprise value | Harder to attribute directly to AI |

Visible token consumption influences behavior, even in subtle ways.

"Where tokenmaxxing gets genuinely dangerous is when organizations start using it as a proxy for employee engagement with AI," said Brian Fending, managing director at Ordovera Advisory.

For instance, an employee might generate multiple drafts of the same content when one would suffice. Another might run additional prompts that add marginal value but increase measurable usage. It's the same way developers learned to game lines-of-code metrics decades ago, said Fending.

Tokenmaxxing, in this sense, is the misalignment between what is easy to measure and what actually matters.

The Case for Tokenmaxxing (Within Reason) 

Tokenmaxxing is not always misguided. In early AI adoption, increased use often signals experimentation and learning. Users test prompts and refine workflows, with higher token usage correlating to skill development and future productivity.

High token usage can also reflect iterative processes needed for reliable outputs — something that may look inefficient in isolation, but reduces future effort and cost.

"Scaling compute along with data can increase the size, complexity and therefore value of the result," said Brian Verkley, director of AI data strategy at VAST Data. "A token represents generated business value. Therefore, I expect business value per token to continue to rise, along with AI usage."

In some workflows, extensive AI interaction is necessary. Think complex tasks like research, content creation and multi-step problem-solving, all of which often require iterative prompting and refinement. In these cases, additional usage correlates with higher-quality outputs.

But these benefits all depend on context. Without clear connection to outcomes, high token usage can signal productive engagement, or just as easily reflect inefficiency or misaligned incentives. 

The Bill Keeps Growing, But the Results Don't 

The bigger risk with tokenmaxxing is not the behavior itself, but the resulting impact on how work is measured. When tokenmaxxing becomes the metric for productivity, businesses begin to optimize for what's easiest to track rather than what actually delivers value. 


Token Usage Signals vs. Actual Outcomes

This chart illustrates how high AI usage can create misleading signals about productivity and adoption.

| Observed Metric | What It Suggests | What May Actually Be Happening |
| --- | --- | --- |
| High token consumption | Strong AI adoption | Excessive or inefficient prompting |
| Frequent prompt activity | High productivity | Redundant or unnecessary iterations |
| Leaderboard rankings | Top performers | Employees optimizing for visibility, not outcomes |
| Rising AI spend | Increased value from AI | Cost inflation without measurable impact |

One immediate consequence is cost inflation, since many AI platforms price by usage. Incentivizing more prompting may cause rising expenses without improved outputs. 

Token usage costs

Increased AI usage may result in outputs that are structurally sound but interchangeable with lower-usage outputs, offering little original insight despite high activity. Ultimately, this distorts performance measurement — low AI users with strong results seem less engaged, while high users appear more productive.

The financial side of tokenmaxxing becomes clearer when businesses look beyond raw activity and ask whether the added compute is actually replacing meaningful human effort. 

Ganesh Kompella, founder and CTO at Kompella Technologies, recalled a situation at his company where an agentic workflow was burning through tokens at a rate that made him go look at the bill. "Turned out the agent was running in a loop, basically regenerating slightly different versions of the same analysis because the prompt wasn't constraining the output well enough. Nobody noticed because the outputs looked good. That's the core problem with treating token volume as a signal of anything."

At the individual level, tokenmaxxing can cause performative usage, with employees feeling pressured to show AI engagement through unnecessary interactions. The result is wasted time, resources and potential cognitive fatigue. 

"Tokenmaxxing is a flawed strategy, because the use of AI tokens in no way correlates to productivity," said Matthew Crook, general manager at PeopleHR Evo. "It is creating a troubling dependency on AI for workplace responsibilities that shouldn’t need the help of AI models."

The Metrics That Actually Tell You If Your AI Is Worth the Spend

If token usage is an incomplete measure of AI adoption and ROI, the question becomes: what should organizations measure instead?

A more mature approach is to treat token use as a cost input rather than a success metric. What matters more is if AI completed a task, influenced a decision or improved a measurable business result. 

Instead of tracking token usage, organizations should focus on outcomes-based AI metrics:

  • Productivity:
    • Are tasks getting done faster?
    • Are people (employees and customers) happy using the AI tool?
  • Quality:
    • Does the output match or exceed human results for the same task? 
    • Does the coding output do what you want it to?
    • Do customers consider their questions well handled? 
    • How often do customers leave an AI chatbot looking for human assistance?
    • Has content engagement gone up?
  • Desired Outcomes:
    • What new challenges arise due to AI use?
    • How often do humans have to step in to fix problems?
    • How often do bias or hallucinations appear in outputs?
  • Business Impact:
    • Have sales grown?
    • Is customer retention higher?
    • Have upselling rates increased?
    • Are net promoter scores (NPS) higher?
    • Has inventory turnover changed?
    • Have operating costs decreased?

Together, these metrics provide a fuller picture of AI-driven work, focusing on effectiveness rather than AI activity alone. 
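In code terms, the shift is from counting tokens to computing ratios where token spend appears only as a cost input. A hypothetical sketch of what such an outcomes dashboard might compute (field names and figures are illustrative, not from any cited source):

```python
from dataclasses import dataclass


@dataclass
class AIWorkloadStats:
    """Illustrative inputs an outcomes-focused dashboard would ingest."""
    tasks_completed: int
    human_escalations: int         # times a human had to step in and fix things
    minutes_saved_per_task: float  # vs. the pre-AI baseline for the same task
    token_cost_usd: float          # token spend enters only as a cost


def outcome_metrics(s: AIWorkloadStats) -> dict:
    """Token spend is a denominator here, never the headline number."""
    completed = max(1, s.tasks_completed)  # guard against division by zero
    return {
        "escalation_rate": s.human_escalations / completed,
        "hours_saved": s.tasks_completed * s.minutes_saved_per_task / 60,
        "cost_per_completed_task": s.token_cost_usd / completed,
    }
```

Under this framing, a low-usage employee with a low escalation rate and low cost per completed task looks exactly as productive as they are, regardless of where they land on a token leaderboard.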

Related Article: Reimagining Traditional Workflows With AI Agents

A Growing Pain, Not a Permanent Model

Tokenmaxxing is most likely a transitional behavior, not a permanent model. As enterprises determine how to integrate AI into daily workflows, they're still experimenting with how to measure productivity in a context where traditional metrics no longer apply.

Over time, these measurement approaches will likely evolve, moving away from how often systems are used toward how effectively they improve performance.

Tokenmaxxing shows technology advancement outpacing evaluation frameworks. Enterprises aren't just adopting new technologies; they're being forced to rethink how work itself is measured, valued and rewarded — forced to redefine productivity in a world where AI increasingly shares workloads. 

About the Author
Scott Clark

Scott Clark is a seasoned journalist based in Columbus, Ohio, who has made a name for himself covering the ever-evolving landscape of customer experience, marketing and technology. He has over 20 years of experience covering Information Technology and 27 years as a web developer. His coverage ranges across customer experience, AI, social media marketing, voice of customer, diversity & inclusion and more. Scott is a strong advocate for customer experience and corporate responsibility, bringing together statistics, facts, and insights from leading thought leaders to provide informative and thought-provoking articles.

Main image: "High Score" message on an arcade game (Simpler Media Group)