Something shifted in the last few months. Over the holidays, I built a fully functional web app with enterprise-grade architecture, third-party payment processing and subscription management. I'm a program manager, not an engineer. But with agentic coding tools and clear requirements, I had a working prototype in days.
When I spoke with other people in my network, I realized I wasn't alone. Friends at other companies had similar stories. Directors who'd shipped demos. Product managers who'd prototyped features. Architects who'd scaffolded systems in an afternoon. Everyone was excited to share what they'd learned.
But a pattern kept coming up in these conversations: in many organizations, the individual contributor engineers weren't part of this wave. Some were leading the charge, but a surprising number weren't experimenting, weren't curious, weren't engaging. Engineering managers described a growing rift on their teams. The people with the deepest technical expertise were, in many cases, the most resistant to trying these tools. Meanwhile, everyone else was using them to get work done.
Table of Contents
- The Divide Is Real, and It's Accelerating
- Why Engineers Are Resistant, and Why It Makes Sense
- The Cost of Waiting Is Career Risk
- Two Things Every Engineer Needs to Experience
- The Leadership Challenge: This Is Emotional Work
- The Window Is Closing
The Divide Is Real, and It's Accelerating
Late 2025 wasn't a single product launch. It was a convergence. Over about 25 days between November and December, every major AI lab released its flagship reasoning model.
The tooling shifted from autocomplete assistants to agentic IDEs and CLI tools. Agents that drive the terminal and browser to test their own work. Agents that live in your terminal and execute multi-step tasks without needing a user interface. Enterprise dashboards that give managers visibility into coding agents working across teams.
This happened alongside a rapid-fire release of reasoning models from Google, Anthropic and OpenAI, each optimized for agentic coding and multi-step problem solving. The models stopped just predicting the next token and started reasoning through problems. And now in early 2026, we're seeing even more entrants: desktop tools for non-developers, browser agents, specialized coding assistants for different workflows.
The barrier to agentic development dropped to near zero almost overnight. Anyone with basic technical literacy and clear requirements could ship working software at a pace that seemed unrealistic a year ago. These tools handle complex workflows: multi-file refactors, database migrations, API integrations, deployment pipelines. Research and diagramming that used to take a week can be roughed out in an afternoon.
For managers and directors who've experienced this, the implications are clear. For engineers who haven't tried it, the claims sound exaggerated. Or threatening.
That's where the friction starts.
Why Engineers Are Resistant, and Why It Makes Sense
Before talking about solutions, we should acknowledge that resistance isn't irrational.
Engineers have spent years, sometimes decades, building expertise. They've developed intuition about systems, earned trust through experience, learned to spot problems before they become incidents. Now they're hearing that a tool can do in minutes what used to take them days.
The fear isn't really "AI will replace me." It's more specific:
- "If AI can do this, what's my value?" When implementation work gets faster, it's natural to wonder where you fit.
- "This feels like cheating." There's a craftsmanship identity tied to writing code by hand. Using AI can feel like a shortcut.
- "I don't trust it." Engineers have seen enough hype cycles to be skeptical. They've also seen enough AI-generated code to know it can be confidently wrong.
- "If I learn this and it makes me faster, will expectations just ratchet up?" The productivity treadmill concern is real.
These aren't excuses. They're concerns that deserve direct conversation.
The Cost of Waiting Is Career Risk
Here's the harder part to say: engineers who dig in on this are putting themselves at risk. Not because AI will replace them, but because they're choosing to stop learning while the field moves forward. In an industry that rewards continuous learning, that's a risky position.
The engineers who thrive in the next phase won't be the ones who wrote the most lines of code. They'll be the ones who learned to use AI as a force multiplier for their existing expertise. They'll ship more, architect better and spend less time on tedious work.
The goal here isn't forcing adoption. It's helping engineers see that their skills become more valuable when amplified by these tools, not less.
Two Things Every Engineer Needs to Experience
If you're leading engineering teams through this transition, here's what I've seen work. Engineers need to experience two specific realizations, and both require hands-on exposure. Presentations and mandates won't get you there.
1. They need to feel mundane work vanish
There's a moment that happens the first time you use an agentic coding tool on a real task. You describe what you need, and within seconds you're looking at working code. Not a suggestion or a snippet. A working implementation.
When this happens, something clicks. You start seeing all the tedious, repetitive work in your day differently. Boilerplate that used to take an hour is gone. Test scaffolding is generated. Documentation updates are handled.
Engineers who haven't experienced this can't really imagine it. They're operating from a mental model where "coding faster" means marginally better autocomplete. They need to see that this is a different category.
The action: Don't tell them about it. Sit with them. Pick a real task, something tedious but necessary, and work through it together with an agentic tool. Let them feel the difference.
2. They need to see how much they bring to the AI
Here's the counterintuitive part: using AI effectively makes you realize how much your experience matters.
When a junior developer prompts an AI, they get code that runs. When a senior engineer prompts the same AI with clear requirements, architectural constraints, edge case awareness and domain context, they get code that handles errors gracefully, follows the patterns already established in the codebase, and doesn't need to be rewritten in two weeks.
The difference isn't the tool. It's the operator.
Your experience shapes every part of the interaction. The requirements you set, the questions you ask, the structure you establish, the problems you anticipate. An AI coding agent is a powerful amplifier, but it amplifies your judgment. Rookie input, rookie output. Expert input, exceptional output.
Engineers need to discover this for themselves. They need to see that the AI isn't replacing their judgment. It's executing their judgment faster.
The action: Have experienced engineers rerun, with AI, the same task a less experienced team member attempted. Walk through why the results differ. Make explicit how expertise translates into better prompts, better constraints, better outcomes.
The Leadership Challenge: This Is Emotional Work
I don't envy engineering leaders right now. These rifts are appearing fast, and they're personal. You can't mandate your way through fear. You can't memo your way to adoption.
This requires empathetic, one-on-one work. And it requires the right people doing it.
Appoint the right advocates. The engineers who've embraced AI at work are your best resource, but only if they can meet resistant colleagues with patience rather than judgment. Enthusiasm that comes across as condescension will backfire. You need people who remember their own skepticism and can speak to it honestly.
Create psychological safety for experimentation. Engineers won't try something new if they think failure will be visible or punished. Make it clear that learning curves are expected, early attempts will be messy and the goal is exploration. This is where AI governance frameworks can help by establishing clear guardrails that encourage experimentation within defined boundaries.
Address the workload concern directly. If engineers fear that AI adoption just means more will be expected of them for the same pay, say so out loud. Either commit that productivity gains will improve work-life balance, or be honest about what expectations will look like. Uncertainty breeds resistance.
Make adoption about career growth, not compliance. Frame this as skill-building, not mandate-following. The engineers who develop fluency with these tools are positioning themselves for the next decade. The ones who don't are making a choice, and they should make it with clear eyes. Consider offering learning and development certifications that help engineers build structured expertise in AI-assisted development.
The Window Is Closing
A year ago, AI coding tools were interesting experiments. Six months ago, they were useful for certain tasks. Today, they're production-ready and change what's possible for a single engineer or a small team.
The organizations that navigate this transition well will move faster, ship more and attract engineers who want to work at the frontier. The ones that leave rifts unaddressed will find themselves with divided teams, uneven adoption and a growing gap between what's possible and what they're delivering.
This is a people problem. And people problems require leadership. Not mandates, not memos, not mandatory training.
It requires sitting down with the engineers who are skeptical, hearing their concerns and helping them experience something that shifts their mental model.
The tools are here. The question is whether your teams will adopt them together or fracture in the process.
Editor's Note: What else are businesses doing to improve AI adoption?
- Generative AI Adoption: Top-Down or Bottom-Up? — Grassroots experimentation and executive ambition are converging as organizations work to turn scattered generative AI use into strategic business impact.
- EZCater's Mark Christianson on Building an AI Mindset to Drive Adoption — EZCater's senior manager, digital workplace and AI strategy discusses their efforts to encourage AI adoption, with the goal of making every team an R&D hub.
- Round Pegs and Square Holes: Why AI Adoption Requires a Focus on Culture — AI's impact isn't inherent in the technology itself but in how it is deployed. Will it be a means to cut corners, or a catalyst for growth and innovation?