Editorial

The Quiet Politics of Admitting AI Use

By Owen Chamberlain
AI use in the workplace is on the rise, yet few admit to using it. What can organizations do?

A curious silence has emerged in the workplace. AI use is on the rise, but few admit to using it. When they do, the admission comes with a nervous smile or a caveat, couched in phrases such as “I used it just for a rough draft,” or “only to clean up the structure.”

Comments like these highlight our awkward relationship with AI: do we feel like frauds for using it, or are we arguing for the legitimacy of its use in professional settings?

Our Ambivalence About AI Use in the Workplace

The ambiguity around AI use in tasks traditionally undertaken entirely by humans is less about company policy and more about personal power: how we perform competence, how we signal effort, and how we negotiate trust with colleagues, managers and even customers when machines co-author our output.

Some people confidently share their use of AI, acknowledging it as a co-author or discussing the prompts they crafted to shape the output.

Others conceal it, even as the signs give them away. The AI voice has clear linguistic fingerprints: awkward fluency, flattened tone and a telltale overuse of the em dash. Whether this is good or bad depends on how individuals and organizations weigh effort against efficiency and reception. If a piece is quick to create and sticks with the reader, is using AI more acceptable than laboring over something that gets the job done but never catches on?

Judging Ourselves and Others

Behind this is a deeper question: What are we basing our judgement on when we trust someone’s work? Is it the effort we imagine went into it, the perceived authorship or the honesty of how it was made? In our tech-driven enterprises, where innovation and efficiency are king, we’re still clinging to outdated proxies for value. One is the myth of visible effort: that “good work” must have cost something in time or toil. Another is the idea of individual authorship, even in team or tool-mediated settings.

AI innovation has complicated this picture. It speeds up ideation, enhances grammar and refines tone; yet its presence is often masked. KPMG recently found that 42% of knowledge workers hide their use of AI. Does using AI make a worker ‘less’ in some capacity, with skills atrophying as adoption increases? Perhaps it is the feeling of cheating the system, of producing work with less effort than previous generations, that makes employees hide its use.

It is in these moments, French theorist Michel Foucault would say, that employees build a panopticon, monitoring their own and others' use of AI. Workers are now both the producers and the surveillants of AI legitimacy.

We watch each other’s language, parsing for signs of “realness.” We self-monitor, tweaking prompts until the voice sounds more “us.” We navigate a space where using AI is expected, but admitting it still feels risky. The result? A loss of potential innovation and efficiency, and a stalling of ideation. In its place, a patchwork of risky adoption takes hold, creating security issues and cultural fragmentation.

However, this isn't just about technology. It is also about visibility. Foucault reminds us that confession has always been a mechanism of power, not just in religion but also in institutions. To confess is to become connected within a system; to withhold is to remain suspect. So we construct elaborate rituals of partial disclosure, such as “this section was mine” or “the tool just helped with formatting.” It’s less a lie than a strategy of face-saving.

Beyond the security risks and cultural fragmentation, another issue emerges: as AI-generated content floods the workplace, a deadening effect sets in. Language becomes flattened and intent gets harder to read. In trying to sound professional, we risk sounding the same. Some are already calling it the “dead internet” effect or, more formally, model collapse: AI consuming AI in an infinite loop of polished mediocrity, with hallucinations driving content entropy. Again, we move from innovation to stagnation, driven by a fear of being found out.

Realign Focus Through Trust and Transparency in AI Use

So what can organizations do? AI is here to stay, and is entrenched in workflows across workplace cultures. What still needs to be built is trust in professional judgement.


The focus needs to move from whether AI was used toward how clearly the human role is articulated. Trust means transparency. This doesn’t mean a simple declaration of “this was AI-generated,” but instead reclaiming authorship in a way that foregrounds intent. AI tools are neither good nor bad in themselves; how we understand our relationship to them, and how we use them, can be.

Organizations can accelerate toward trust and transparency in AI use by:

  • Normalize thoughtful disclosure. Create cultures where admitting AI use isn’t an act of guilt, but a sign of fluency. Highlight ethical efficiency, where the use of AI is a win for business speed and quality output.
  • Reward clarity of process, not just polish of output. Create spaces to explore why content hit the mark and how AI and humans interacted to achieve it.
  • Rethink how professional expertise is defined: not as resistance to AI, but as mastery of its integration.

Trust isn’t lost when AI enters the room. It’s lost when no one will say it was there.



About the Author
Owen Chamberlain

Owen Chamberlain is a strategist, writer and speaker with 15+ years of experience in organizational transformation, remote work culture, and the future of leadership. He currently works at a Fortune 500 company, shaping strategy at the intersection of people, systems, and power.

Main image: crazy cake | unsplash