Editorial

How Editing AI Copy Is Affecting My Mental Health

4-minute read
By Meghan Laslocky
LLMs may save time, but for some of us, they come at a steep psychological cost.

Earlier this summer, the MIT study “Your Brain on ChatGPT” confirmed what we all know: LLM (large language model) usage erodes cognition. For many of us who work with LLMs on a daily basis, it’s not news that use of ChatGPT and its ilk is the equivalent of pouring battery acid on your brain, but it’s nice to have scientific validation. 

Now that we’ve got that cleared up, it’s time to take on the next elephant in the room when it comes to LLMs: how usage affects mental health.

I’ve noticed a distinct pattern when I use an LLM to draft copy or edit copy generated by an LLM: I dread the task, and when I’m done, I feel lonely, listless, isolated, forlorn, hopeless and even angry. Sometimes it’s so bad that the only logical way to recover is to climb into my bed and stare at the wall. 

Sounds an awful lot like depression, doesn’t it? Maybe that’s because it is depression. 

LLMs Drain Us on Every Level

I suspect there are two things that fuel LLM malaise. Let’s call them the micro and the macro.

On the micro level, I suspect that, as MIT’s study suggests, working extensively with copy generated by LLMs is bad for my brain from a neurobiological perspective. Maybe, just like any other depressant, LLMs send the human amygdala into overdrive and connections among neurons deteriorate. 

Maybe it’s also that the fight-or-flight impulse is triggered: I want to nobly fight back against the horrors of bad writing, but I also want to say, “Screw it, I don’t want this crap anywhere near me. I’m going to abandon the marketing ship and become an upholsterer.” 

Then there’s the macro level or even existential level: Every time I touch copy generated by an LLM, I feel a profound sense of loss. Even grief. 

Two years ago, before working with LLMs was standard in marketing (and everywhere else), I led a stellar team of writers, each of them whip smart and eager to learn. The dynamic among us was often nothing short of joyful. Humorous conversations would mushroom in the comments of our Google docs, often so funny that other team members would hop into the doc to take a look and have a laugh. We’d talk about writing in our weekly editorial meetings and share articles or turns of phrase we’d come across that inspired us. Even books on writing, like Steven Pinker’s "The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century" and Stephen King’s "On Writing: A Memoir of the Craft," were shared touchpoints. 


What We Lost When We Let LLMs Take Over

And as an editor and creative leader, I was always excited to look at new drafts. Because I worked with great writers, each draft on my list came with a sense of anticipation. Each one was like a gift for me to open. I had the privilege of seeing how the writers advanced on a daily basis, and I had the joy of doing what I love most in the world: playing with words. 

Every draft meant growth. Growth for them as writers and growth for me as an editor. I felt wise and useful, proud and energized. 

Now, as a freelance content editor, I spend most of my day editing work by writers who use LLMs to crank out copy quickly enough to make a living, copy produced by non-writers, or copy that’s entirely generated by AI. I never get to talk about writing, and I never get to see a writer grow or have an “ah-ha” moment that they’ll carry with them forever. 

On the one hand, I’m grateful to have work. So many wonderful writers and editors I know have simply given up trying to compete with the likes of ChatGPT. The thief of joy and income has won. 

But on the other, it’s deeply demoralizing. I have built my entire career on being an ace writer, editor and coach, and now I’m reduced to fixing unnecessary em dashes 40 times a day, trying to get ChatGPT or Microsoft Copilot to understand what a participial phrase with a gerund is so I can get it to stop cranking them out every other sentence, and trying to get it to stop using the dreaded “It’s not X, it’s Y” construction that is now so heinously ubiquitous. There’s no esprit de corps when you’re working with a robot. 

And then there’s the terror. Terror that simply by being exposed to large language models day in and day out, my own voice is forever compromised. Over a decade ago, I published a book of nonfiction that I remain proud of. I flip through its pages and feel a mixture of pride and fear. My voice — 100% me — is in every sentence, and there are turns of phrase that still make me proud. But will I ever be able to write that way again? Are LLMs so insidious that I’m forever ruined? Can one ever recover from metaphorical arsenic? 


We Need to Talk About AI and Mental Health

In the end, it doesn’t matter if the depression is micro or macro, limited or existential. What matters is that by relying on LLMs to the extent that we do, we’re risking not just a crisis in terms of creativity and cognition (and, I expect, mass unemployment before long), but one of mental health. 

So consider this a dual-pronged rallying cry: one for more studies like MIT’s, this time assessing the psycho-emotional toll of LLM exposure, and one for employers to recognize that in their never-ending race to “do more with less,” they’re putting their employees’ mental health at risk. 


About the Author
Meghan Laslocky

Meghan Laslocky is a brand and editorial leader within the B2B SaaS industry. She began her career in content marketing at AppLovin, a leading adtech platform, and subsequently led the content team at Vention, a custom software development company; from there, she led content at Siteimprove, a martech company. Connect with Meghan Laslocky:

Main image: Nina Lawrenson/peopleimages.com on Adobe Stock