Editorial

Why CMOs Shouldn't Trust the AI Confidence Boom

By Eric Hollebone
87% of marketers believe AI is accurate. Reality? Most AI content contains major flaws.

The Gist

  • AI confidence gap. Most marketers trust AI output, but research shows much of it is flawed or inaccurate.

  • Smart use matters. AI works best in low-risk areas. Avoid using it where your brand is on the line.

  • Keep humans in. Human oversight is essential when AI touches anything public-facing or strategic.

Marketers have always been early movers on the technology adoption curve. Over the years, we've had to jump feet first into automation, social media, SEO, QR codes, chatbots and adtech, among others. So it's no surprise that marketers are embracing AI faster than most.

According to a recent survey, 63% of marketers are using generative AI, and 79% plan to expand their adoption in 2025. Another survey found that 85% of marketers are using AI tools for content creation. These numbers are significantly higher than the national average of 37% of all workers in the U.S. who say they use AI in their jobs.

Our willingness to adopt is a good thing. It keeps us in step with the markets we need to reach and one step ahead of the competition. But AI is a technology unlike any other we have encountered. It comes with bigger risks, bigger rewards and more unknowns, which means we need to bring a different kind of thought and intentionality to the ways in which we apply it to the marketing mission.

Quantifying AI Risks in Marketing

Data suggests that many marketers lack an understanding of the limitations of AI, even as they forge ahead with new AI initiatives.

To start, 87% of marketers are confident in the accuracy of AI content. But that confidence is misplaced. More than half (51%) of AI-generated content has "significant issues of some form," and a whopping 91% has at least "some issues." At best, these errors erode brand trust and marketing effectiveness. At worst, they can lead to costly lawsuits, like the one involving an Air Canada AI chatbot.

Accuracy isn't the only risk associated with AI content. The recent fiasco with Meta's AI chatbots demonstrates how quickly AI can sink into off-putting weirdness, while a meta-analysis of 2,000+ marketing campaigns found that human-generated content outperformed generative AI, with higher engagement and conversion rates.


3 Ways to Mitigate AI Risks

The key to balancing AI's risks and rewards is to understand it, apply it intentionally rather than opportunistically, and monitor it closely.

Understand AI 

As computational scientist and entrepreneur Stephen Wolfram said, ChatGPT is "just adding one word at a time." It's predicting the statistically most likely next word based on the words that came before it. This results in middle-of-the-road output that draws on existing patterns. It can't give voice to uniquely human emotions and experiences, and it can't understand the intricacies of your company's values, mission and voice. By understanding those limitations, you can work within them to apply AI where it will produce the greatest value (and do the least harm).
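Wolfram's "one word at a time" point can be made concrete with a toy sketch. This hypothetical example counts which word follows which in a tiny corpus and then always picks the most frequent successor; real models like ChatGPT use learned probabilities over the full context rather than simple word-pair counts, but the generation loop is the same idea.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: tally which word follows which in a tiny
# corpus, then generate by repeatedly choosing the most common successor.
corpus = "the brand builds trust and the brand builds loyalty and the brand grows".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the word that most often follows `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

# Generate text one word at a time, exactly as Wolfram describes.
word, output = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```

Because the predictor can only echo patterns already present in its training data, the output is fluent but derivative, which is why the surrounding paragraph calls the result "middle-of-the-road."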

Apply AI intentionally

It's hard to find a marketing tool that isn't AI-enabled. Ninety-three percent of marketers report that new AI features were added to their tech stack last year. Access to AI is not the problem; knowing where to apply it is. The purpose alignment model developed by Niel Nickolaisen, an IT thought leader and author, is a helpful reference framework. 

Given AI's current limitations, you may want to avoid applying AI to any marketing activity that is "differentiating" (mission-critical and a market differentiator). This would include customer experience, revenue generation and market positioning, among others. Instead, start by limiting AI to low-risk, low-priority areas. For example, use it to draft internal policies, procedures and job descriptions to free up more time and resources for differentiating activities.

Monitor AI closely

Human oversight is critical to all martech activities, but with a black-box technology like generative AI, it's even more important to keep a human in the loop. This is especially true when AI supports customer-facing outputs and experiences. When it comes to brand fidelity, market relevance, messaging alignment and factual accuracy, human marketers bring context, nuance, empathy, experience and accountability that AI can't replicate.



Don't Move Fast and Break Things

AI is transforming the possibilities for marketers, but its use should be thoughtful and targeted, especially when it comes to customer-facing content. There is tremendous value in being able to scale and accelerate content creation, but if it comes at the risk of eroding your pipeline or the trust built up in your company's brand, the price is too steep.

Now is not the time to throw caution to the wind. Before you experiment with AI, make sure you understand the risks and have a system in place for quantifying and monitoring them, especially if those experiments will touch your market and your customers directly.


About the Author
Eric Hollebone

Eric is the President & Chief Operating Officer of DemandLab, where he oversees the optimization of day-to-day operations and the smooth delivery of all client work. As COO, he works closely with the CEO and executive leadership team to plan and manage the company's operational policies and to develop and implement a plan for attaining the agency's short- and long-term financial and operational goals.
