
Do AI Coding Tools Really Increase Developer Productivity? Studies Say No

By Nathan Eddy
Despite the hype, AI coding tools may be slowing down your dev team. Explore the research findings and real-world challenges.

Despite promises that AI will boost productivity, AI tools are actually slowing down experienced software developers, according to a study published last month on arXiv, Cornell University's preprint server.

In the study, conducted between February and June 2025, researchers examined 246 coding tasks performed by 16 experienced open-source contributors, each of whom had spent an average of five years working on the projects in question. Tasks were randomly assigned to either allow or disallow the use of AI tools such as Cursor Pro and Claude 3.5/3.7 Sonnet.

Before the study, the participating developers forecast a 24% time reduction; even after experiencing the slowdown, they still estimated that AI had sped them up by 20%. Economics and machine learning experts anticipated time savings of 39% and 38%, respectively.

Instead, the study found that developers completed tasks 19% more slowly when using AI than when working unaided.

Why Does AI Slow Developers Down? 

Researchers explored 20 potential factors that might explain the slowdown, including project size, code quality requirements and developers’ prior AI experience. While they noted that experimental artifacts could not be completely ruled out, the consistency of the effect across their analyses makes it unlikely that the finding is simply a result of the trial design.

Another recent study, by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), identified similar challenges. Among the issues they uncovered: 

  • AI coding systems do not invoke tools. Researchers claim AI needs to be able to select which tool to use, decide how to use it and interpret the outputs. 
  • Human-AI collaboration is not seamless. Human specifications are often too vague, there is little controllability over coding with LLMs and human-AI collaboration interfaces are still limited. 
  • Large projects require long-term planning. LLMs struggle with designing good, lasting abstractions and respecting modularity and code quality principles. 
  • Code writing requires semantic understanding of the codebase. LLMs struggle with seeing how various parts of code go together, knowing what is implemented where, understanding how algorithms work and keeping track of program invariants at certain program points (see the sketch below). 
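
To make that last point concrete, here is a minimal, hypothetical sketch (in Python, not taken from either study) of the kind of invariant the researchers mean: a constraint the whole module depends on but that is invisible to a model editing one function in isolation.

```python
# Hypothetical illustration: a module-level invariant that an AI-generated
# edit can silently break.
import bisect

# Invariant: `prices` must stay sorted ascending so lookups can use binary search.
prices = [10, 25, 40, 90]

def add_price(p: int) -> None:
    # Preserves the invariant by inserting in sorted order.
    bisect.insort(prices, p)

def cheapest_at_least(budget: int) -> int | None:
    # Correct only while the sorted-order invariant holds.
    i = bisect.bisect_left(prices, budget)
    return prices[i] if i < len(prices) else None

# An assistant asked to "add a bulk import" might propose:
#     def bulk_add(new_prices): prices.extend(new_prices)
# which runs and looks plausible, but breaks the invariant and silently
# corrupts every later call to cheapest_at_least().
```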

Security is another major factor. AI-generated code may overlook important security rules, leading to time-consuming fixes. Compatibility issues can also arise when AI-produced code does not work well with a company’s existing libraries and tools, causing further delays.
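
As a hypothetical example (not drawn from the studies) of the kind of security rule that gets overlooked, consider query construction: building SQL by string formatting instead of using parameters, exactly the sort of lapse reviewers then spend time catching and fixing.

```python
# Hypothetical illustration of a common security oversight in generated code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # The kind of code an assistant may suggest: string formatting builds the
    # query, so input like "' OR '1'='1" becomes SQL injection.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # The fix a reviewer ends up making: a parameterized query that treats
    # the input strictly as data.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # returns every row
print(find_user_safe("' OR '1'='1"))    # returns nothing
```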


Where AI Tools Struggle in Real-World Coding  

Vivek Mishra, a senior member of the Institute of Electrical and Electronics Engineers (IEEE), said AI coding tools are helpful and can quickly suggest solutions. But in real-world industry coding, they can’t be relied on completely.

“While they assist in writing code, they might not understand a company’s specific classes, bugs, coding standards or business needs,” he said. “Over-relying on AI tools can sometimes slow things down due to issues with library compatibility or security concerns.”

Protecting personal data is also a big worry, he added, especially in fintech companies, where sending sensitive information to AI tools is risky. “To get code that meets industry standards, security and compatibility, you often need to ask multiple questions, which can make coding slower.”

Stevan Le Meur, principal product manager for developer tools at Red Hat, pointed out that many experienced developers maintain and evolve legacy production systems, which are far more intricate than fresh codebases. Integrating AI into these environments often creates more churn than benefit at first, and because these systems are business-critical, developers approach AI-generated output with heightened caution, reviewing and testing extensively before committing changes.

“Furthermore,” said Le Meur, “mature codebases have style and culture embedded in them. This can lead to excessive tweaking of AI-generated code.”


Where AI Coding Tools Actually Shine 

AI coding tools still excel in certain areas, according to Le Meur, particularly when tackling simple or repetitive tasks. “They shine in scripting, code completion, writing straightforward functions, generating test cases, handling regexes and breaking down long and complex error messages.” In these scenarios, AI delivers immediate and tangible time savings, creating a strong sense of increased productivity.

“Combined with the hype, marketing and compelling demos circulating in the industry, it’s easy to walk away with the impression that these tools can supercharge all aspects of development,” said Le Meur.
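
To picture the kind of task Le Meur is describing, here is a short hypothetical example: a small log-parsing regex plus a table-driven test, boilerplate an assistant can draft in seconds and a developer can verify at a glance.

```python
# Hypothetical example of the "simple, repetitive" work assistants handle well:
# a regex and a table-driven test for it.
import re

LOG_LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<level>[A-Z]+) (?P<msg>.*)$"
)

def parse_log_line(line: str) -> dict | None:
    # Returns the named groups as a dict, or None if the line doesn't match.
    m = LOG_LINE.match(line)
    return m.groupdict() if m else None

def test_parse_log_line():
    cases = [
        ("2025-06-01 12:00:00 ERROR disk full", "ERROR"),
        ("2025-06-01 12:00:01 INFO started", "INFO"),
        ("not a log line", None),
    ]
    for line, expected_level in cases:
        parsed = parse_log_line(line)
        level = parsed["level"] if parsed else None
        assert level == expected_level

test_parse_log_line()  # passes silently
```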


He recommended developers start small and focus on using AI tools where they deliver the most value, such as in test-driven development, generating simple functions, assisting with PR reviews or automating repetitive tasks.

"Avoid the temptation to switch tools too frequently, even in a fast-moving landscape where new models and features appear almost daily." Constantly changing platforms or models forces developers to relearn interaction patterns, which can slow progress. Instead, treat tool evaluation like software testing: create repeatable test scenarios, run them consistently and gather measurable data on performance. “This structured approach bases decisions on evidence, not hype, and [helps] your AI integration remain stable and productive over time,” said Le Meur. “Don’t treat AI as just a tool; it’s a new kind of teammate.”

To avoid common pitfalls when integrating AI into development workflows, teams must start with mastering prompt engineering, he noted. “Crafting the right prompt is a skill: Too little detail, and the AI flounders; too much, and it may hallucinate or overwhelm. Balancing clarity with restraint is key.”

About the Author
Nathan Eddy

Nathan is a journalist and documentary filmmaker with over 20 years of experience covering business technology topics such as digital marketing, IT employment trends, and data management innovations. His articles have been featured in CIO magazine, InformationWeek, HealthTech, and numerous other renowned publications. Outside of journalism, Nathan is known for his architectural documentaries and advocacy for urban policy issues. Currently residing in Berlin, he continues to work on upcoming films while contemplating a move to Rome to escape the harsh northern winters and immerse himself in the world's finest art.
