The biggest barrier to scaling AI inside enterprises is no longer technical maturity, but workplace culture, according to a global report from Infosys and MIT Technology Review Insights.
The study, which drew on responses from business leaders across industries and regions, found that 83% of respondents say psychological safety has a measurable impact on the success of AI initiatives.
Fear — particularly fear of failure — continues to slow AI adoption, even as investments in AI tools and platforms accelerate.
Table of Contents
- Fear, Not Technology, Is What Stalls AI at Scale
- The Organizational Shift AI Requires
- The Case for Deliberate AI Experimentation
- How Psychological Safety Translates Into Business Results
- 3 Ways Leaders Can Build Psychological Safety for AI
- Embrace the Fast-Fail Mindset
Fear, Not Technology, Is What Stalls AI at Scale
According to the findings, employees who worry about criticism or negative consequences often hesitate to:
- Experiment
- Question AI-driven outputs
- Lead AI-related projects
That hesitation, in turn, limits innovation and keeps many initiatives stuck in pilot phases.
Despite rapid advances in AI capabilities, human factors are holding organizations back; adoption falters when employees lack confidence in how the technology should be used or fear making mistakes.
The Organizational Shift AI Requires
The report frames AI adoption as a change management challenge as much as a technology deployment, arguing that scaling AI requires building resilience and trust across the workforce.
Ashish Nadkarni, group vice president and general manager with IDC's worldwide infrastructure research organization, sees this fear as a practical response to risk in a high-hype environment. Because AI sits inside an unusually intense cycle of expectations, board pressure and peer comparison, it amplifies leaders’ fear of being blamed if initiatives fail to deliver.
In that context, psychological safety shows up as protection from downside risk. Leaders want room to experiment with AI without being penalized for pilots that don’t pan out or for choosing not to move as fast as competitors.
The Case for Deliberate AI Experimentation
Many executives feel pulled in opposite directions, said Nadkarni. They want credit for being seen as innovators, but they also want insulation if results fall short or the technology underperforms. When leaders fear criticism or reputational damage, they may overpromise, rush adoption or quietly avoid hard decisions.
By contrast, environments that acknowledge uncertainty and normalize experimentation allow AI initiatives to proceed more deliberately, with clearer accountability and more realistic expectations.
Nearly one-quarter of leaders surveyed admitted they have hesitated to propose or lead an AI initiative due to fear of failure or criticism, highlighting how caution at the top can cascade through organizations.
“People want to be able to have a level of insurance that they are not going to be chastised for the initiatives they take or do not undertake,” Nadkarni noted. “They want to benefit from the upside, but they also want protection from the downside.”
How Psychological Safety Translates Into Business Results
Survey results also highlighted a strong link between psychological safety and business outcomes, with 84% of respondents reporting a direct connection between psychologically safe environments and tangible business results.
At the same time, the data reveals persistent gaps: Just 39% of respondents describe their organization’s current level of psychological safety as “high,” while 48% rate it as “moderate,” suggesting many companies are pursuing AI transformation on unstable cultural foundations.
Leadership behavior and communication emerge as decisive factors. In fact, 60% of respondents say clearer communication about how AI will — and will not — affect jobs would most improve psychological safety.
3 Ways Leaders Can Build Psychological Safety for AI
Nadkarni frames psychological safety around AI as a leadership discipline grounded in clarity, pacing and accountability, rather than hype or reassurance alone. His guidance centers on three practical moves:
- Slow down and be deliberate. Leaders need to step back from reactive, fear-driven behavior and evaluate AI initiatives holistically — what’s at stake, what the upside is, what the downside of inaction looks like and where it makes sense to accelerate versus pause. Measured decision-making reduces panic and creates room for experimentation without overcommitment.
- Create space for experimentation through culture, not mandates. Psychological safety starts with a mindset shift at the executive and decision-maker level. Organizations must encourage entrepreneurial thinking and allow employees to explore ideas without immediate pressure to operationalize them, bringing concepts forward only once they are mature and well understood.
- Establish clear ownership and boundaries. Trust grows when accountability is explicit. Organizations need a defined leadership role — such as a chief AI officer or equivalent — responsible for setting direction, clarifying approved use cases and communicating limits so that teams know which paths are viable and which are not.
Embrace the Fast-Fail Mindset
Psychological safety around AI depends on organizations embracing a fast-fail mindset rather than defaulting to risk avoidance, Nadkarni argues.
Leaders must remove punitive consequences for initiatives that are experimental or carry uncertainty, especially in fast-moving areas like AI. While excessive caution may feel safer from a job-security standpoint, Nadkarni warns it ultimately limits differentiation and innovation at the company level.
Creating space to test ideas, learn quickly and move on when something does not work allows teams to experiment without fear and prevents organizations from stagnating.
“A fast-fail mentality is a good way to inculcate psychological safety, because then you can say you have a culture of trying out different things, and if it doesn’t work out, you move on — you’re not going to be stuck in the mud,” he said.