Editor’s Note: This is part of a series on Search and AI derived from the EIS webinar series AI and Search: Navigating the Future of Business Transformation. Patrick Hoeffel — managing partner at Patrick Hoeffel Partners — Sanjay Mehta — an advisor at Earley Information Science — and Eric Immermann — practice director of search and content at Perficient — contributed to this article.
In recent months, the conversation around generative AI has shifted from hype to reality, and it’s clear that while everyone is talking about it, very few organizations have successfully moved past the experimentation stage to true production environments. The gap between expectations and reality is stark, and many companies are still grappling with fundamental questions like, “What exactly is this technology?” “Is it relevant to my business right now?” and “If so, where is the greatest business value?”
At the heart of this challenge lies a fundamental truth: The success of generative AI projects hinges on the quality of the foundational elements — data, knowledge architecture and content processes. Without well-conceived information architecture, AI simply cannot function effectively. This is why a strong foundation of well-organized, consistent information is crucial for successful AI deployment.
Lessons From the Trenches: The Role of Information Hygiene
As my colleagues and I discussed in a recent webinar, the biggest barrier to successful generative AI deployment is not the AI itself, but rather the state of an organization's data and information ecosystem. When companies attempt to integrate AI without considering their data hygiene, metadata standards and information architecture, they often find themselves unable to achieve meaningful results.
Consider a recent client example: a trucking maintenance service provider engaged a third-party AI vendor to develop a “mechanic’s assistant” application. The project was unsuccessful because the vendor did not use a structured methodology for developing the underlying elements of the solution. They didn’t develop use cases to test functionality against, lacked a well-defined data and information architecture, and failed to properly tag and componentize their content.
As a result, the AI struggled to deliver meaningful answers for techs in the field working on customer equipment. Generative AI is not magic; it requires well-defined metadata and appropriate content lifecycle management to be effective. A new effort that focused on these foundational elements necessary for successful deployment provided the needed functionality for the mechanic’s virtual assistant.
Deploying these systems requires more than just implementing a large language model (LLM). Every step of the process matters: What data will be accessed? How do we integrate multiple data silos? How do we resolve inconsistencies in terminology across systems? Addressing these core questions is what determines success or failure.
In work with global enterprises, we’ve consistently found that taxonomy, ontology and data harmonization are key. For example, a large healthcare client struggled to see value from their AI initiatives until they developed a consistent taxonomy across their data sources. This allowed the AI to understand and effectively leverage the data, providing meaningful insights rather than incomplete or incorrect information.
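To make the idea of data harmonization concrete, the sketch below maps each source system’s local terminology onto a single canonical taxonomy. This is a minimal illustration, not any client’s actual taxonomy; the department names and variants are invented for the example.

```python
# Minimal sketch: harmonizing terminology across data sources by mapping
# each system's local terms onto one canonical taxonomy. All terms below
# are illustrative, not drawn from any real client taxonomy.

CANONICAL_TERMS = {
    # variant (lowercased) -> canonical taxonomy term
    "cardio": "cardiology",
    "cardiac care": "cardiology",
    "heart center": "cardiology",
    "ortho": "orthopedics",
    "bone & joint": "orthopedics",
}

def harmonize(term: str) -> str:
    """Map a source-system term to its canonical form, if one is known."""
    key = term.strip().lower()
    return CANONICAL_TERMS.get(key, key)

def harmonize_records(records: list[dict]) -> list[dict]:
    """Apply the canonical taxonomy to the 'department' field of each record."""
    return [{**r, "department": harmonize(r["department"])} for r in records]
```

Once every source speaks the same canonical vocabulary, the AI can join records across silos instead of treating “Cardio” and “Cardiac Care” as unrelated concepts.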
Related Article: LLMs Are Hungry for Data. Synthetic Data Can Help
AI Anxiety: Addressing Business Fears Head-On
A significant theme we have observed is the fear that businesses have about generative AI. This fear often stems from a lack of understanding about what the technology will mean for their operations and, perhaps more importantly, for their employees. Many fear losing control — over technology, over business processes and even over job security. There is also apprehension about how AI might lead to misinformation or unpredictability in operations.
To address these fears, it’s important to focus on AI's role in augmenting rather than replacing human capabilities. For instance, a financial services client of ours used AI to automate the categorization of customer inquiries, which allowed their customer service representatives to focus on more complex and rewarding tasks. This not only improved employee morale but also delivered better customer service. The key is to use AI to handle repetitive, lower-value activities, thereby freeing up people to do the work that truly requires human insight and creativity.
Another common fear involves the potential for AI to produce incorrect or misleading information — what is often referred to as “hallucinations.” This is why strategies like retrieval-augmented generation (RAG) are so vital. By grounding AI responses in verified, enterprise-specific content, RAG helps ensure accuracy and minimizes the risk of misleading outputs. A global consulting firm we worked with found success by using RAG to enhance their internal knowledge management system, reducing errors and building trust in AI-generated content.
Building Business Impact: AI Use Cases That Work
For AI to be successful, businesses need well-defined use cases — specific scenarios that AI can address effectively. This is not just about building a chatbot; it’s about understanding where AI can provide the most value within a process. Is it answering support questions faster? Improving call center efficiency by providing real-time access to information? Those high-level scenarios have to be broken down into detailed tasks that can be tested and verified; each use case must have a clear, unambiguous outcome that can be measured.
Organizations should start with narrowly defined, high-impact use cases. For example, a telecommunications client sought to improve customer support operations. They began with AI-assisted troubleshooting for common connectivity issues, demonstrating measurable improvements in response time and customer satisfaction. This initial success set the stage for further AI deployment across other customer support areas.
One of the key recommendations is to approach generative AI deployment as a series of targeted interventions rather than a complete overhaul. AI can play a crucial role in specific parts of processes, such as call deflection in customer service, which frees human agents to handle more complex interactions. In content management, AI can support tagging and metadata generation, making information more discoverable.
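The tagging intervention mentioned above can be sketched as follows. In production the tagging step would typically be an LLM or classifier call; here a simple keyword lookup stands in for that call so the surrounding workflow (suggest tags, attach them as metadata) stays visible. The tag vocabulary is purely illustrative.

```python
# Minimal sketch of AI-assisted content tagging. A keyword lookup stands in
# for the model call; the controlled vocabulary below is illustrative.

TAG_KEYWORDS = {
    "billing": {"invoice", "payment", "refund"},
    "connectivity": {"router", "modem", "outage"},
}

def suggest_tags(text: str) -> list[str]:
    """Return controlled-vocabulary tags whose keywords appear in the text."""
    words = set(text.lower().split())
    return sorted(tag for tag, kws in TAG_KEYWORDS.items() if words & kws)

def enrich(document: dict) -> dict:
    """Attach suggested tags as metadata, leaving the original body untouched."""
    return {**document, "tags": suggest_tags(document["body"])}
```

Keeping the tags in a controlled vocabulary, rather than free-form model output, is what makes the resulting metadata consistent enough to drive search and discovery.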
The reality is that AI deployment requires a knowledge architecture-first approach. Before generative AI can answer business-critical questions, a solid foundation of well-curated, harmonized data is essential. Otherwise, the system fails not from a lack of technology but from “filter failure”: the inability to deliver the right information at the right time because the underlying information is poorly organized.
Related Article: Data Mongering is the Silent AI Threat to Privacy and Personal Autonomy
Bridging the Gap Between Pilot and Production
Moving from pilot or proof-of-concept (PoC) to production is often a significant challenge. Many organizations demonstrate impressive results in a curated pilot environment, but struggle to replicate that success at scale. This is usually because PoCs are implemented in controlled environments where data is clean and expectations are managed. Transitioning to production means dealing with inconsistencies, integrating multiple systems, managing imperfect data and addressing a broader set of organizational needs, which brings expectations that are far harder to manage.
It is important to break efforts down into smaller, manageable steps — a “proof of value” (PoV) project. These are targeted deployments that demonstrate tangible business outcomes in specific areas, which can then be scaled.
For instance, a multinational manufacturing company successfully moved from PoV to broader production by initially focusing on improving internal knowledge retrieval for field service engineers. This targeted approach reduced the time needed to find information, resulting in quicker service times and improved customer satisfaction. Its success required planning for wide deployment and data readiness as part of the PoV.
The Importance of Data Quality and Governance
The quality of data directly impacts the quality of AI output. Without good data hygiene, AI systems will produce unreliable or irrelevant results. This is why organizations must invest in data governance frameworks, metadata management and content lifecycle processes before deploying AI solutions. Data must be curated, appropriately tagged and accessible for AI to function properly.
For example, one of us (Eric) worked with a healthcare client where inconsistent data across departments made it difficult for their AI systems to retrieve accurate information. By creating a unified metadata model and implementing strict data governance practices, the program ensured that the AI used consistent, up-to-date data, which significantly improved the accuracy of outputs.
Maintaining data quality is also crucial to mitigate the risks of bias in AI models. Unbalanced training data can lead to biased outcomes, which is why organizations should conduct regular audits of their data to identify and address potential biases. A strong data governance framework helps reduce these risks, ensuring ethical and fair AI systems.
Leveraging Retrieval-Augmented Generation (RAG)
RAG is an effective strategy for ensuring accuracy in generative AI. Unlike traditional generative AI, which produces responses solely from training data, RAG incorporates a retrieval mechanism to pull information from a verified knowledge base. This approach grounds AI-generated content in accurate, enterprise-specific information.
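The retrieve-then-generate flow described above can be sketched in a few lines. This is a deliberately minimal illustration: the knowledge base is invented, retrieval uses word overlap where a real system would use a vector index, and `generate()` is a stub standing in for an LLM call.

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve the most
# relevant passage from a verified knowledge base, then hand it to the model
# as grounding context. Knowledge base and generate() stub are illustrative.

KNOWLEDGE_BASE = [
    "Warranty claims must be filed within 30 days of service.",
    "Field engineers reset the controller before replacing the sensor.",
]

def retrieve(query: str, passages: list[str]) -> str:
    """Pick the passage with the most word overlap with the query."""
    q = set(query.lower().split())
    return max(passages, key=lambda p: len(q & set(p.lower().split())))

def generate(query: str, context: str) -> str:
    """Stub for an LLM call that answers only from the supplied context."""
    return f"Based on our documentation: {context}"

def answer(query: str) -> str:
    context = retrieve(query, KNOWLEDGE_BASE)
    return generate(query, context)
```

The key property is that the model’s response is constrained to verified enterprise content, which is what reduces hallucinations in the scenarios discussed in this article.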
In a recent project with a global consulting firm, RAG was implemented to enhance their internal knowledge management system. By integrating RAG, the AI could reference the most up-to-date company documents, providing accurate and contextually relevant responses. This reduced hallucinations and improved overall trust in the AI's output, particularly in environments where compliance and accuracy are critical.
Related Article: Is My AI Lying to Me? Ensure Accuracy With Retrieval-Augmented Generation
Why AI Still Needs Human Oversight
Despite the advances in AI capabilities, human oversight remains crucial. A “human-in-the-loop” approach ensures that AI systems are continuously improved through human feedback and intervention. This is particularly important in areas that require a high degree of nuance or contextual understanding.
In work with a publishing company, a human-in-the-loop system was implemented for content creation. The AI generated initial drafts of articles, while human editors refined the tone, ensured accuracy and added unique insights. This collaboration between human expertise and AI efficiency not only sped up the content creation process but also maintained the quality and reliability of the published material.
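A human-in-the-loop workflow like the publishing example can be sketched as a review queue: AI-generated drafts enter unapproved, and only text a human editor has refined and signed off on reaches publication. The class and field names here are illustrative, not a real system’s API.

```python
# Minimal sketch of a human-in-the-loop review step: AI drafts queue up for
# an editor, and only explicitly approved drafts are published.

from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    approved: bool = False

class ReviewQueue:
    def __init__(self) -> None:
        self.pending: list[Draft] = []
        self.published: list[Draft] = []

    def submit(self, text: str) -> Draft:
        """An AI-generated draft enters the queue unapproved."""
        draft = Draft(text)
        self.pending.append(draft)
        return draft

    def approve(self, draft: Draft, edited_text: str) -> None:
        """A human editor refines the text and signs off before publication."""
        draft.text = edited_text
        draft.approved = True
        self.pending.remove(draft)
        self.published.append(draft)
```

The design point is that approval is an explicit, auditable action: nothing moves from `pending` to `published` without a human touching it.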
Moreover, involving subject matter experts in the training and fine-tuning phases of AI deployment helps address the limitations of LLMs. Experts provide the context that may not be present in the training data, reducing the likelihood of biased or inaccurate outputs. Human feedback loops are also essential for adjusting AI models to better reflect organizational changes, such as new policies or shifting business priorities.
Building a Strong Foundation for AI
Generative AI offers transformative potential, but only when grounded in a strong foundation of information architecture. It’s not just about deploying the latest technology; it’s about ensuring that technology can work effectively with the data and content you have. This means investing in content quality, metadata and governance structures that ensure consistent, reliable information.
The future of AI is bright, but it’s crucial to proceed with the understanding that good outcomes start with good preparation. As we move forward, let’s remember that the real value of generative AI is not in its ability to generate text, but in its ability to connect people with the right information, in context, at the right time. That’s what turns “cool” into “profitable” and makes generative AI a strategic asset for true business transformation.
By focusing on information architecture, data quality and human oversight, organizations can harness the full power of generative AI to not only automate processes but to fundamentally enhance their ability to serve customers and employees, innovate and grow.