Picture this: an employee sits at their desk with Gemini or Copilot open on their screen, frantically pretending to use AI as their manager walks by.
They're larping, or live-action roleplaying, according to social scientist Nigel Dalton, who spotlighted the phenomenon at Future Skills Fest in my former hometown of Melbourne, Australia.
That concept stems from an online survey of more than 1,000 full-time US professionals. Three-quarters said they're expected to use AI at work, yet 16% admitted they sometimes pretend to use it.
This clever workplace theater tells us everything about where we’ve gone wrong with AI communication.
Nurture Trust Through Shared Vision
As I wrote in my recently published book, "Attract, Retain, and Develop," the most powerful workplace solutions don’t come from algorithms — they come from understanding human needs.
When employees feel compelled to perform rather than participate, we’ve created the opposite of empowerment.
From Zoo to Jungle, But With a Guide
Dalton’s metaphor resonates: we’ve moved from a workplace "zoo" with concrete paths and clear boundaries to an unpredictable "jungle" where AI tools emerge from everywhere.
But here’s what separates thriving organizations from struggling ones: the thrivers don't abandon their people in that jungle. They become skilled guides.
Start with genuine transparency, not corporate speak. If you collect AI-usage telemetry, disclose it, explain why and set retention limits.
Deutsche Telekom co-created AI guidelines with employee representatives, committing to transparent use and reserving people-related decisions for humans. The result? Trust, smooth rollouts and employees who felt part of shaping ethical AI.
Related Article: 7 Ways Leaders Can Address AI Anxiety at Work
Make AI Human-Centered, Not Human-Replacing
The World Economic Forum’s research confirms what I’ve seen across thousands of workplace transformations: digital trust begins with workforce inclusion.
Your communication strategy around your AI policy should emphasize augmentation, not automation. Frame AI as freeing employees from repetitive tasks to focus on creative, strategic and relational work — areas that remain irreplaceable. Share real examples where AI has streamlined routine tasks and opened space for innovation. That will nudge your in-house larpers to see AI as an ally, not an obstacle.
But let's acknowledge what many leaders are reluctant to say out loud: sometimes AI will replace humans, and pretending otherwise destroys credibility before you even begin.
Address the Hard Truth About Displacement
Be upfront about automation and your safety net.
- About 40% of employers anticipate reducing headcount where AI can automate tasks.
- Goldman Sachs research estimates AI could displace 6–7% of US jobs, with impacts likely temporary as new roles emerge.
- At Klarna, an AI assistant now handles the work of roughly 700 customer-service agents, and staffing has been adjusted alongside that shift.
Credibility comes from naming where automation is likely and funding AI training, redeployment and fair separation when needed. State your thresholds now — before pilots scale.
The key is transparent communication about where and why displacement might occur. Leaders who pretend otherwise lose trust and often end up making deeper cuts later.
Build Continuous Dialogue, Not One-Time Announcements
Qualtrics found 53% of engaged employees are comfortable with AI at work, versus 30% of disengaged employees. But transparency isn’t a box-ticking exercise in governance; it’s an ongoing conversation.
Salesforce uses internal AI councils to bring workers into key deployment decisions, ensuring technologies align with company values and employee expectations. The company also publicly documents its Responsible AI structures and shares adoption playbooks.
Use this as a model to create your own internal AI council with employee reps and clear feedback channels.
How to Make Your AI Policy Land With Employees
Check out these five steps to get going on your AI policy:
1. Ship a One-Page ‘How AI Works Here’ Explainer
Spell out the goal, approved and off-limits use cases, what data AI can and can't touch, when a human signs off and where to ask questions. Take inspiration from Deutsche Telekom's employee-co-created AI principles, then write your own in plain language. Salesforce's AI Acceptable Use Policy shows the kind of clarity people want to see upfront.
2. Set up an Employee AI Council
Include worker representatives to review proposed use cases, surface risks and co-design training so adoption isn’t top-down. Microsoft’s partnership with the AFL-CIO (American Federation of Labor and Congress of Industrial Organizations) illustrates structured worker input feeding development and rollout.
3. Tell Augmentation Stories, Not Slogans
Every quarter, publish three short before/after examples by role (e.g., ops, finance, CX) showing what drudge work AI removed and what higher-value work humans did instead.
4. Publish Your Automation & Support Pledge
Be explicit about where automation is likely, and commit to redeployment pathways, funded learning time/budgets and fair separation standards.
5. Close the Loop With Real Listening
Run a monthly pulse survey on AI trust and sense of control, then publish what you heard and what you'll change next.
Related Article: Trusting AI Agents at Work: What Employees Really Want
Empower Through Education and Agency
Implement human-in-the-loop systems with clear feedback mechanisms, where employees audit AI suggestions and offer input at critical decision points. This active participation reinforces ownership and agency while improving accuracy.
Pair AI policies with concrete reskilling opportunities. As I emphasize in my work, thriving workplaces put learning at their core. Celebrate career transitions and frame AI as a springboard for advancement, not a substitute for human insight.
Remember: we’re not just implementing technology — we’re building the future of work. Get this right, and AI becomes a tool for empowerment and inclusive growth. Get it wrong, and you’ll have more employees larping their way through another day, diminishing both human potential and technological promise.
The decision is yours. Make it count.