Last year, I interviewed Nate Nichols, head of AI and VP of product management for Salesforce Tableau, about the company’s work in agentic AI. One point from that conversation stands out even more today: many organizations still rely on undocumented, person-to-person knowledge to make sense of their data.
As Nichols explained, “Historically, best practices were not followed by many of our customers. Much of the critical knowledge — such as the meaning of requests, which published data sources were appropriate, what terms meant, which data sources were reliable and when they could be used — existed only in analysts' heads rather than being documented.”
In an AI-powered enterprise, this kind of tribal knowledge quickly becomes a bottleneck. Agentic AI systems are designed to do more than retrieve information — they are meant to reason, decide and take action. But they can only do that effectively when they operate from a trusted and shared understanding of the business. If the meaning of key metrics, the right source of truth or the proper business context lives only in scattered teams or individual experts, AI agents will struggle to deliver reliable outcomes at scale.
Table of Contents
- Why Agentic AI Needs a Semantic Layer
- Semantic Layer Adoption Still Has Ground to Cover
- Companies Chasing AI ROI Invest in Data Foundations
- Why Organizations Are Prioritizing Semantic Layers Now
- Consistent Definitions Top the AI Data Agenda
- AI Can’t Scale Without a Shared Understanding of the Business
Why Agentic AI Needs a Semantic Layer
This is why the semantic layer is becoming so important.
A semantic layer provides the governed business context that agentic AI needs to function with consistency and trust. It defines what terms mean, which data should be used and how business logic should be applied across the enterprise. In effect, it turns fragmented institutional knowledge into something operational and repeatable. As Nichols put it, “To succeed, organizations must get their data houses in order. Only with this foundation can AI be used to reliably answer business questions.”
Ben Schein, Senior VP of Product at Domo, made the same point: “We’re moving from AI that can talk about the business to AI that can help operate it. The semantic layer is what makes that possible, because it gives agents a governed understanding of how the business actually works. That’s what closes the gap between insight and action.”
This distinction matters. The future value of AI will not come from systems that merely generate fluent answers, but from systems that can safely and intelligently participate in how the business is run.
Recent research from Dresner Advisory Services reinforces this shift. Among organizations at an advanced level of AI maturity, 45% report having semantic layer and data virtualization capabilities in place today, with another 22% planning deployment within the next 12 months. Among organizations at the intermediate level of AI maturity, 46% already have these capabilities deployed, and another 31% expect to implement them within a year.
The pattern is hard to miss: the more mature an organization is with AI, the more likely it is to have invested in the foundational data capabilities that make agentic AI viable. In other words, semantic layers are not a “nice to have” architecture decision for the future. They are rapidly becoming a prerequisite for turning AI from a promising assistant into a dependable business operator.
Related Article: What Effective Data and AI Leaders Do Differently
Semantic Layer Adoption Still Has Ground to Cover
As a reminder, a semantic layer creates a shared language between data teams and business users by standardizing metrics, definitions and business logic across tools and systems. It reduces ambiguity, increases trust and makes data more discoverable and usable — not only for people, but also for machines. As we have been saying, autonomous systems cannot operate effectively if they are forced to interpret inconsistent definitions, unclear ownership or conflicting sources of truth.
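To make the idea concrete, here is a minimal sketch of what a semantic layer encodes. All names here (the metric, table and team names) are hypothetical illustrations, not any vendor's actual schema: each business term resolves to exactly one governed definition, so every tool — and every AI agent — retrieves the same calculation logic instead of improvising its own.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    name: str        # canonical business term
    source: str      # the approved source of truth
    expression: str  # the agreed calculation logic
    owner: str       # who governs changes to this definition

# A toy registry standing in for the semantic layer itself.
SEMANTIC_LAYER = {
    "net_revenue": MetricDefinition(
        name="net_revenue",
        source="finance.orders",  # illustrative table name
        expression="SUM(amount) - SUM(refunds)",
        owner="finance-data-team",
    ),
}

def resolve(term: str) -> MetricDefinition:
    """Return the single governed definition for a business term.

    Raises KeyError for undefined terms, rather than letting a caller
    (human or agent) invent its own interpretation.
    """
    return SEMANTIC_LAYER[term]
```

The design choice worth noting is the failure mode: an unknown term raises an error instead of returning a best guess, which is exactly the discipline an autonomous agent needs when the alternative is hallucinating a metric definition.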
Given how critical this capability is becoming to competitive advantage, one might expect organizations to be moving faster than the data shows. Yet as of 2026, only 36% of survey respondents say they have semantic layer and data virtualization capabilities in place today.
That is progress, but it still means most organizations have not yet established the kind of governed data foundation needed to scale AI. The more encouraging news is that another 26% plan to implement these capabilities within the next 12 months. If those plans hold, adoption would reach 62% — pushing semantic layers and data virtualization into majority territory for the first time.
Companies Chasing AI ROI Invest in Data Foundations
That shift is important, because organizations increasingly recognize that these capabilities are not simply infrastructure projects — they are strategic enablers of business value from AI. Survey respondents consistently indicate that semantic layers and data virtualization are either important or critical to their success.
Among organizations capturing extremely high ROI from their BI investments, these capabilities stand out even more sharply. In fact, they are viewed as substantially more important to ROI than nearly every other area of business intelligence investment.
The signal becomes even stronger when looking at the highest-performing organizations. Eighty percent of organizations that report their BI efforts are extremely successful rate semantic layer and data virtualization as critical or very important. Likewise, 88% of organizations investing in AI because they believe it will disrupt their industry see these capabilities as important, very important or critical.
This is not a coincidence. The organizations most serious about transformation are also the ones most likely to understand that AI value depends on trusted, well-structured and business-aligned data.
The message to organizations embarking on their agentic AI journey is straightforward: if you want intelligent systems to do more than generate interesting outputs, you need to give them a reliable understanding of how your business works. This starts with the semantic layer. The good news is that organizations increasingly seem to know where to place their bets. The question now is whether they will move quickly enough to turn that understanding into action.
Why Organizations Are Prioritizing Semantic Layers Now
Without question, these capabilities help organizations support modern use cases — ones that require on-demand, consistent and governed access to data across a growing number of sources.
Increasingly, business and technical buyers expect semantic layer functionality — often built on virtualized approaches to data integration — to be part of a comprehensive analytical data infrastructure (ADI) strategy. At the same time, many organizations are rethinking how critical data should flow across the enterprise.
Rather than relying by default on physical, point-in-time data movement through traditional ETL processes, they are shifting towards more real-time, flexible and virtualized approaches that better support the speed, scale and adaptability required for analytics and AI.
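The contrast between the two integration styles can be sketched in a few lines of plain Python, with lists standing in for live systems. Everything here is a hypothetical illustration, not a real integration API: ETL materializes a point-in-time copy, while virtualized access joins the live sources at query time.

```python
# Stand-in "live" systems; all names are illustrative.
crm_source = [{"customer": "acme", "region": "emea"}]
billing_source = [{"customer": "acme", "mrr": 1200}]

def etl_load(sources):
    """Traditional ETL: physically copy sources into a warehouse table
    at a point in time. Later queries may read stale data."""
    warehouse = []
    for source in sources:
        warehouse.extend(dict(row) for row in source)  # materialized copy
    return warehouse

def virtual_query(customer):
    """Virtualized access: join the live sources on demand at query
    time, so results reflect the current state of each system."""
    crm = next(r for r in crm_source if r["customer"] == customer)
    billing = next(r for r in billing_source if r["customer"] == customer)
    return {**crm, **billing}
```

If `billing_source` is updated after `etl_load` runs, `virtual_query` reflects the change immediately while the materialized copy does not — which is the speed-and-freshness trade-off the paragraph above describes.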
Consistent Definitions Top the AI Data Agenda
Organizations are pursuing a broad set of objectives when deploying semantic layer and data virtualization capabilities, but one priority stands above the rest: ensuring consistent business definitions across BI and AI tools and applications.
More than any other goal, this one reflects the growing need for organizations to create a common understanding for metrics, terms and logic that can be used reliably across both human decision-making and machine-driven systems. A majority of organizations — 58% — identify this as a critical objective.
The next tier of priorities is also revealing. Nearly one-half of organizations say that simplifying data access for business users, improving governance and trust and enabling self-service analytics are critical goals. These priorities show that organizations are not just looking for better technical plumbing. They want to make data easier to use, more trustworthy and more accessible across the enterprise.
This emphasis becomes even more pronounced for top-performing organizations. Those that describe their BI efforts as extremely successful rate enabling self-service analytics 16% higher, on average, than their moderately successful peers. This suggests that success is not only about controlling and governing data, but about empowering the business to use data more independently and effectively. At the same time, reducing the burden on data engineering emerges as a meaningful objective, reinforcing the need for architectures that can scale access without requiring constant manual intervention from technical teams.
At the same time, an organization's priorities shift depending on its level of AI maturity. Among organizations at intermediate levels of AI maturity, the goal of ensuring consistent business definitions across BI and AI tools is rated 17% higher than among their peers. This makes sense: once organizations move beyond experimentation and begin trying to operationalize AI, inconsistencies in definitions and data logic quickly become obstacles. Not surprisingly, organizations that have advanced beyond the experimental phase are far more likely to already have semantic layer and data virtualization capabilities in place than those still at emerging levels of AI maturity.
Related Article: AI Runs on Data: Why Analytical Infrastructure Determines Who Wins
AI Can’t Scale Without a Shared Understanding of the Business
Semantic layers and data virtualization are no longer back-end architecture choices that can be deferred for a future modernization cycle. They are quickly becoming the operating foundation for AI-ready enterprises.
As organizations move from experimenting with AI to embedding it into decision-making and business execution, the need for governed definitions, trusted data access and consistent business logic becomes impossible to ignore. The companies moving fastest to agentic AI deployment are also the ones investing in the data foundations that make it reliable.
The lesson is clear: before AI can truly help run the business, organizations must first teach it how the business works.