The Gist
- CMS impact. Effective CMS enhances productivity, while poor strategy can worsen content chaos.
- AI influence. Generative AI amplifies organizational strengths and weaknesses, impacting productivity and brand consistency.
- Management importance. Understanding AI content management strategy is crucial for navigating legal, intellectual property and reputational risks.
I have always said that content management systems (CMSs) have a multiplier effect on existing organizational maturity and practices. If your content strategy and operations are already running smoothly, a good CMS implementation will make them faster and more scalable.
Conversely, if your existing content strategy and governance are not well considered, adding a CMS might actually make things worse, as you attempt to catalog and migrate all your content (regardless of its value) and codify every edge case in content models and code.
The end result is all too often a mess of too many templates, too many content types and outdated content, not to mention the disruption of change management and having to learn a whole new set of tools.
Generative AI will take this observed trend and put it into hyperdrive. Organizations with clear goals, strategies, structured content, governance, dashboarding and the tools to handle end-to-end operations holistically will gain even greater productivity and customer experience benefits, because generative AI can enable scalable personalized content and a consistent brand voice across many digital channels.
Again, the opposite is also true: not only might the ill-considered use of generative AI fail to make things better, it now carries even greater legal and intellectual property risk than before.
As a result, understanding content management strategy as a practice will become more critical for organizations.
AI Content Management Strategy: Let’s Talk About Those Risks
Hallucinating Policies or Products That Don’t Exist
There have been a few examples of this, but the most recent (and most amusing) involved an Air Canada chatbot providing incorrect information about bereavement policies to a customer, with the airline then arguing in court that it wasn’t responsible. From the ruling: “Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website.”
An AI Judgment
The judge (sensibly) goes on to add: “It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”
This should be obvious to everyone, but you have to admire Air Canada’s legal team and their chutzpah in arguing that the chatbot really isn’t part of their organization or customer experience. Presumably it just wandered onto their site by accident, was assisting the customer out of the goodness of its digital heart, and, gosh darn, as an innocent third party had no way of knowing its information was inaccurate.
Content strategist Scott Kubie has an excellent detailed write-up of the entire fiasco, but I’ll simply repeat his conclusion because I agree with it entirely:
“Many other companies, maybe yours, are trying to run with AI before they’ve learned to walk with content operations. The risk of tripping and slamming face-first into concrete — or a Canadian small-claims court — is only going to grow for companies laying off content experts, UX researchers, technical writers, and others skilled in making sense of complex information spaces.”
Evangelizing AI Content Management Strategy
So back to my original point: there has always been a need for understanding and evangelizing content management strategy as a practice. (This was increasingly clear to Deane Barker, global director of content management at Optimizely, to Jeff Eaton, a content management expert at Autogram, and to many of us in the room at a recent gathering of CMS Experts via J. Boye.) But the need seems even greater now, even as more organizations assume the opposite: that AI will actually do it for them.
Related Article: What Is a Content Management System?
Leaking Intellectual Property
Nietzsche once wrote, “When you stare into the abyss, the abyss stares back at you,” and I can think of no better quote to illustrate that when you interact with any modern digital system, you are not only a consumer of information from a service but also a producer of information for that service.
In using ChatGPT to generate additional code, Samsung engineers had been uploading their code to the service, and this code apparently “leaked” into other users’ responses. This presents a real conundrum for corporate users of these systems: the more of your own data the model can train against, the more relevant and accurate the output will be for your use case.
But the higher the value of the application, the more sensitive the data it requires, including code, corporate strategy and analytics data. Right now, there are countless threads on Twitter and LinkedIn about ChatGPT doing high-value analysis that will replace MBAs. I have no doubt that is true, but you also have to know that your competitors, and anyone else, can now get the same sort of analysis, generated from the very data you uploaded.
AI Governance Issues
As noted in the article, this isn’t a problem with ChatGPT per se, but a governance issue around the use of any LLM — as a result, many organizations are starting to implement their own in-house large language models to ensure data accuracy and also avoid leaking sensitive data.
At a more controlled access level, I have even seen this myself as a vendor: as we roll out a beta program for our own generative AI offerings, savvy customers immediately ask whether we are using their content to help train our models. As noted in the risks above, they don’t want the learnings from their IP being packaged and resold to competitors.
Related Article: Beyond the Hype: Real-World Impact of Generative AI in Content Management
Reputational Risk From Lack of Transparency or Authenticity
One of the biggest risks to any brand is a lack of authenticity, and generative AI is a veritable minefield of potential hazards here. LLMs have been pointed at a wide variety of data on the assumption that a larger volume of data of varied quality is better than a smaller, vetted set.
The output sits on an obvious spectrum, from inherently biased (but often in ways that are hard to perceive) to racist internet troll. Leaving your brand voice and values in the hands of an autonomous machine has risks, and these are the “safe” use cases of merely creating content within existing processes.
Invented Articles & Author Personas
At the other end of the spectrum is a betrayal of authenticity so complete that it borders on fraud. Sports Illustrated fired its CEO when it was revealed that it had published articles in which not only the content was invented, but the authors’ personas were as well.
Similarly, a Formula E team invented an artificially created, female-presenting “AI Ambassador,” completely missing the obvious optics issue around the lack of female representation and fan appreciation within the sport.
The two main issues still revolve around disclosure and authenticity — if you break either of those rules, your overall organizational credibility gets called into question.
How to Think About the Impact of Generative AI on Your Organization
In my mind, there are two distinct types of generative AI projects in AI content management strategy.
"Is this doing something we can already do (but faster or better)" or "Is this enabling something we cannot do yet?" and there is a set of trade-offs for both. Currently if you are starting to use tools that your CMS vendors are shipping, they tend to be of the first category.
Faster and Better
These are usually lower risk because an actual person initiates a specific task and can verify (or discard) the output in real time. Examples include:
- Contentful – AI Content Generator and AI Content Type Generator
- Contentstack – AI assistant
- Kontent.ai – Tagging, Translating, Generating from prompts
- Yext – Computed Fields (automatically generating summaries, bios, FAQs, etc.)
Being blunt — at this point in time, most of these work in a very similar way and generally consume the same underlying APIs from third parties (usually OpenAI) — there isn’t a lot of differentiation here. Even if vendors didn’t build these plug-ins, it would be trivial to build your own.
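To illustrate just how thin this layer can be, here is a minimal sketch of a "generate a draft from a prompt" helper using OpenAI's official Node SDK. The model choice, system prompt and function name are illustrative assumptions, not any vendor's actual implementation; real plug-ins add UI, review workflows and guardrails on top of essentially this one call.

```typescript
import OpenAI from "openai";

// The SDK reads OPENAI_API_KEY from the environment by default.
const client = new OpenAI();

// A hypothetical "generate content from a prompt" helper, roughly the kind
// of call sitting behind many CMS AI assistants.
async function generateDraft(topic: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumed model; swap in whatever your account offers
    messages: [
      { role: "system", content: "You are a marketing copywriter. Keep drafts under 150 words." },
      { role: "user", content: `Write a product blurb about: ${topic}` },
    ],
  });
  return completion.choices[0]?.message?.content ?? "";
}

generateDraft("a headless CMS with built-in AI assistance").then(console.log);
```

Note that a human still triggers this call and reviews the result, which is precisely what keeps this category of tooling lower risk.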
Enabling New Abilities
In the second camp are the novel approaches that have been out of reach due to the sheer cost of producing personalized content at scale.
Unlike those earlier examples, for this revolution to take place, content creation tooling itself will have to evolve massively. To enable the kind of mass operational scale we are talking about, instead of working with individual items of content, we will need new paradigms for managing "sets" of content, which may mean managing hundreds of variations along different axes: audience, language, region and channel.
Consider an example that is not possible at scale with existing paradigms: personalizing a single piece of content across 10 audiences, 15 languages and sub-locales (such as Québécois French versus Parisian French) and four channels (web, mobile, text, email) generates 600 distinct but related variations. Adding governance to ensure accuracy and brand voice, not to mention A/B testing that immediately retires less successful iterations and retries newer ones, further complicates the process.
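To make the combinatorics concrete, here is a minimal TypeScript sketch that enumerates such a "content set." The `ContentVariant` shape and axis values are hypothetical, not a real CMS model; the point is only how quickly one canonical item fans out into hundreds of governable variants.

```typescript
// Hypothetical shape for one generated variant of a canonical content item.
interface ContentVariant {
  sourceId: string; // the single canonical content item
  audience: string;
  locale: string;
  channel: string;
}

// Illustrative axis values; real ones would come from your CMS configuration.
const audiences = Array.from({ length: 10 }, (_, i) => `audience-${i + 1}`);
const locales = Array.from({ length: 15 }, (_, i) => `locale-${i + 1}`); // e.g., fr-CA vs. fr-FR
const channels = ["web", "mobile", "text", "email"];

// One source item crossed with every combination of axis values.
const variants: ContentVariant[] = audiences.flatMap((audience) =>
  locales.flatMap((locale) =>
    channels.map((channel) => ({ sourceId: "article-42", audience, locale, channel }))
  )
);

console.log(variants.length); // 10 * 15 * 4 = 600 variants to generate, review and govern
```

Every one of those 600 variants still needs review, retirement and testing hooks, which is exactly why individual-item tooling breaks down at this scale.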
In other words, in the same way that the industry has moved beyond managing the page layout and copy toward managing structured content, the focus for the AI content management systems of the future may be less about humans managing individual content items, and more about managing the scaffolding, prompts and governance rules for various AI elements and transformations at scale.
Real World Uses of Generative AI
Brian Browning and Kristina Podnar had two excellent sessions on real world uses of generative AI at the J. Boye CMS Kickoff 2024 and the slides are available:
- Brian Browning, vice president of technology at Kin + Carta, talked about a number of generative AI use cases, ranging from content compliance and auditing to content generation, messaging and data-driven optimization.
- Kristina Podnar, a digital policy consultant and author of the book "The Power of Digital Policy," also covered some of the possibilities, but spent significant time on the risks and the regulatory frameworks we will likely encounter.
How to Get Started? (And What I Am Seeing in the Market)
Learn to Walk First
Cathy McKnight and Robert Rose of The Content Advisory have a great webinar “Get Ready To Get Ready For AI,” which talks about that exact problem that Scott Kubie mentioned — how to “walk with content operations” first.
The great thing is that AI can be an organizational forcing function — if you previously struggled to get a business case buy-in for those foundational efforts that are hard to quantify, talking about AI tends to excite the C-suite in the way that the word governance absolutely does not.
Organizations Rolling Their Own LLMs
The biggest trend I have seen is companies implementing their own LLMs. Ironically, this is more work than getting the basics right, but companies love to jump to the tech solution first.
But at the very least running your own LLM solves some of the leakage and hallucination issues. Until you realize that you need a lot of information to feed those models. And that information should be well structured. With metadata. And up-to-date.
And before you know it, you’re back to where you were — but this time learning to walk alongside learning to run.
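For the sake of illustration, here is a hypothetical sketch of what "well structured, with metadata, and up to date" might mean for content feeding an in-house model. None of these field names come from a specific product; they simply show the kind of ownership and freshness signals a retrieval pipeline needs before indexing anything.

```typescript
// A hypothetical record shape for content destined for an in-house LLM pipeline.
interface LlmReadyDocument {
  id: string;
  title: string;
  body: string; // clean text, not raw HTML
  contentType: string; // e.g., "policy" or "product-faq"
  locale: string;
  owner: string; // the person accountable for accuracy
  lastReviewed: string; // ISO date; stale content shouldn't be indexed
  tags: string[];
}

// Gate ingestion on freshness so the model isn't fed outdated policies
// (the Air Canada lesson, applied at indexing time).
function isIndexable(doc: LlmReadyDocument, maxAgeDays = 365): boolean {
  const ageMs = Date.now() - new Date(doc.lastReviewed).getTime();
  return ageMs >= 0 && ageMs <= maxAgeDays * 24 * 60 * 60 * 1000;
}
```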
That said, I fully expect LLMs to separate into "consumer" and "enterprise" use cases and approaches, with the latter focused far more on security and accuracy than on creativity, and private LLMs will probably become the norm.
Conclusion: Embracing AI in Content Management and Avoiding Pitfalls
At this point there have been more than enough resources developed to help organizations understand the opportunities and risks and how to move forward with AI content management strategy.
I would argue that any organization working with content at scale has no reason not to start implementing AI strategies, policies and plans to address skills gaps for the next few years. But instead of looking at AI as an opportunity to downsize or replace workers with domain expertise, organizations actually need to invest more in these efforts if they are to avoid some major missteps.