A year ago, “we want more AI” was often enough to get a pilot funded. Now the bar is higher.
“We want AI… but we need to see the ROI first.”
That’s not anti-innovation. It’s what happens when AI moves from curiosity to capital allocation. Boards and exec teams are asking the right questions: What is the payback period? Where does the value hit the P&L? How do we scale impact without scaling risk?
The problem is that many AI ROI conversations still begin with the most uncertain choices: model selection, vendor selection, or a grab bag of potential use cases. That’s tough when most enterprises don’t have a “portfolio” of AI projects. They might have one pilot, a few experiments, or nothing in production.
If you’re trying to prove ROI before you expand AI investment, start by clarifying the economic factors specific to your business. Understanding your business context is crucial for turning AI into a consistent source of value rather than just a series of costly, isolated projects.
Adoption is up. Consistent value is not.
In McKinsey’s March 2025 global survey, 78% of respondents said their organizations used AI in at least one business function. (McKinsey) In the November 2025 update, that number rose to 88% reporting regular AI use in at least one business function. (McKinsey)
That sounds like “AI is everywhere,” but the same McKinsey report notes that most organizations still haven’t scaled AI at the enterprise level. Roughly one-third report that they’ve begun to scale their AI programs, while the majority remain in experimenting or piloting stages. (McKinsey)
BCG’s 2025 research is even more blunt: only 5% of companies in its study are achieving “AI value at scale,” while 60% report minimal revenue and cost gains despite substantial investment. (Boston Consulting Group)
Leaders are reacting to a clear reality. AI adoption is becoming normal; AI ROI is still uneven and hard to count on. If you want funding, you need a thesis that increases the probability of being in the minority that consistently captures value.
When an AI initiative fails to deliver ROI, it’s rarely because the model couldn’t generate an answer. The failure is almost always operational. The organization can’t trust the output enough to act, the workflow can’t absorb it, or governance can’t endorse it.
That “last mile” is where ROI lives or dies. It’s the difference between output the business can confidently act on and output someone has to double-check.
In practice, this is what kills the economics. If humans must verify every answer, reconcile every definition, and manually resolve every exception, your AI becomes an expensive assistant rather than a scalable capability. The cost of “human-in-the-loop” balloons, timelines stretch, and the initiative never becomes a dependable ROI story.
So the ROI question becomes less “which AI?” and more:
“What makes AI trustworthy enough to embed in real decisions, and reusable enough to scale?”
The answer is context.
Most enterprises don’t have dozens of AI projects. But they absolutely have years of investment in systems and operations work that encodes business meaning: data models and integrations, master data and reporting definitions, workflow rules, and policy documentation.
Whether you call it that or not, all of that effort is trying to answer the same questions: What is a customer? What is a case? Which system is the source of truth? Which rules govern exceptions?
That is your enterprise context.
The issue isn’t that context doesn’t exist. The issue is that it exists in fragments, re-encoded across tools, teams, and documents. Over time, those fragments diverge, definitions drift, and policies get implemented differently in different systems. Data gets mapped one way in analytics and another way in operations.
AI doesn’t create this fragmentation. AI just makes it impossible to ignore, because inconsistent context produces inconsistent outputs, exactly the thing executives are least willing to tolerate when they’re demanding ROI.
Here’s why context is such a powerful ROI-first framing. Even if your AI footprint is small, the cost of fractured context is measurable today.
A commissioned study conducted by Forrester Consulting on behalf of Airtable reported that large organizations (20K+ employees) use an average of 367 software apps and systems, and respondents reported spending 30% of their week trying to find the right data and information. (Airtable)
That’s not an “AI cost.” That’s an operating cost created by fragmentation: time spent searching, reconciling, re-asking, and re-validating basic business facts. It shows up as slower decisions, slower customer response, more escalations, and more rework.
On the technical side, Anaconda’s State of Data Science 2020 report found that respondents spent 45% of their time getting data ready (loading and cleansing) before they could develop models and visualizations. (Anaconda) This is another context signal. When skilled teams spend nearly half their time just making data usable, your “delivery capacity” shrinks: fewer initiatives delivered per year, longer time-to-value, and more dependence on services and contractors.
Fragmented context is a productivity and delivery tax you’re already paying. AI doesn’t need to be widespread for that to be true.
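The benchmarks above translate into a simple back-of-envelope model. Here is a minimal sketch in Python using the cited 30% search-time and 45% data-prep figures; the headcounts and loaded-cost figures are hypothetical placeholders to swap for your own numbers:

```python
# Back-of-envelope estimate of the annual "context tax" from fragmentation.
# The 30% and 45% rates come from the Forrester/Airtable and Anaconda studies
# cited above; all headcounts and loaded costs below are illustrative
# assumptions, not benchmarks.

def annual_context_cost(headcount, loaded_cost_per_person, pct_time_lost):
    """Annual cost of time spent finding, reconciling, or preparing context."""
    return headcount * loaded_cost_per_person * pct_time_lost

# Hypothetical inputs for a large enterprise:
knowledge_workers = 5_000      # staff who search for and reconcile business facts
data_team = 60                 # analysts/engineers preparing data for delivery
avg_loaded_cost = 120_000      # fully loaded annual cost per knowledge worker
data_loaded_cost = 160_000     # fully loaded annual cost per data specialist

search_cost = annual_context_cost(knowledge_workers, avg_loaded_cost, 0.30)
prep_cost = annual_context_cost(data_team, data_loaded_cost, 0.45)

print(f"Search/reconciliation cost: ${search_cost:,.0f}/yr")
print(f"Data preparation cost:      ${prep_cost:,.0f}/yr")
print(f"Total context tax:          ${search_cost + prep_cost:,.0f}/yr")
```

Even with conservative inputs, the point survives: this cost exists today, before any AI spend.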
When executives say, “we need to see the ROI first,” they’re really saying, “reduce downside risk and increase predictability.”
Context is one of the few moves that does both.
It reduces downside risk because you can improve context without betting on a specific model, vendor, or use case. And it increases predictability because better context makes analytics, workflow, and operations more coherent even if AI plans change.
And importantly, this aligns with what market data is telling us about value concentration. If only 5% of companies are achieving AI value at scale while 60% see minimal gains, (Boston Consulting Group) then the most rational move is to invest in the foundations that separate repeatable value from stalled experimentation.
A cost-only story is incomplete. Executives will (rightly) ask, “How does this grow revenue or protect margin?”
Context drives revenue in a few ways that are easy to explain without turning the post into a technical architecture lesson.
First: time-to-market becomes a growth lever. When teams don’t have to renegotiate definitions and rebuild mappings for every initiative, new capabilities ship faster. Faster shipping doesn’t just “feel good”; it pulls revenue forward. You launch earlier, capture adoption earlier, learn faster, and reduce the opportunity cost of delays.
Second: trust becomes automation. Most of the revenue-adjacent value in AI is not “better text generation.” It’s enabling decisions and actions like approvals, routing, triage, exception handling, and compliance checks. Those are only automatable when the system can reliably interpret the business context and explain why it acted.
Third: reusable context enables productization. Once your core entities, rules, and policies are reusable, it becomes feasible to offer consistent AI-assisted experiences across channels like service, sales, operations, and partner portals without rebuilding the brain each time. That’s how AI becomes an enterprise capability rather than a set of point solutions.
IDC’s Microsoft-sponsored “Business Opportunity of AI” research reports an average 3.7x ROI for every $1 invested in generative AI initiatives, and it notes that top leaders realize 10.3x returns. (Microsoft) Even if you treat those as self-reported and directional, they’re useful framing to show that when AI is embedded effectively, it can produce substantial returns. Context is one of the most practical levers for getting from “we tried it” to “we monetized it.”
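Those multiples are easy to translate into program terms. A quick sketch, where the multiples are the cited IDC figures but the program budget is a hypothetical assumption:

```python
# Translate the IDC-reported ROI multiples into dollar terms for a program
# budget. The 3.7x average and 10.3x leader multiples are from the cited
# Microsoft-sponsored IDC study; the budget below is a placeholder assumption.

program_budget = 2_000_000  # hypothetical annual GenAI program spend

for label, multiple in [("average performer", 3.7), ("top leaders", 10.3)]:
    gross_return = program_budget * multiple
    net_value = gross_return - program_budget
    print(f"{label}: ${gross_return:,.0f} gross return, ${net_value:,.0f} net")
```

The spread between the average and the leaders is the argument for foundations: the model is the same, the context around it is not.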
If you want this to hold up in an ROI conversation, you need a baseline that connects to spend categories executives recognize. You need a credible range and a plan to improve it.
Here’s a practical approach that works even if your organization has little AI in production: quantify your current context spend, standardize one high-leverage domain, and prove reuse across two initiatives.
External benchmarks help you sanity-check your internal estimates. If large organizations report spending 30% of their week searching for the right data across a sprawl of systems, (Airtable) and data teams report spending 45% of their time just getting data ready, (Anaconda) it is rarely controversial to conclude “we spend real money reconciling meaning and finding truth.”
That baseline gives you a clean “ROI-first” business case: Reduce duplicated context effort, reclaim delivery capacity, and compress cycle times to be able to apply AI on top of a coherent foundation.
At Hyland, we see the “AI, but ROI first” shift as a signal that enterprises are ready to move past experimentation and start building repeatable economics. The fastest path to ROI isn’t chasing the next model; it’s reducing the friction that prevents AI from becoming operational: inconsistent definitions, disconnected systems, and policies that live in people’s heads instead of in machine-usable form.
That’s why we introduced the Enterprise Context Engine: a shared context layer designed to deliver a unified, dynamic perspective on organizational operations by linking content, processes, people, and applications, and serving as a continuously updated “living record” across systems like ERP, CRM, and EHR. On our platform, we pair it with the Enterprise Agent Mesh, so purpose-built agents can operate with consistent context and drive automation and decisioning in domain-specific workflows.
When you translate that into business terms, the intent is straightforward: turn context into a governed, reusable asset that every workflow and AI system can rely on.
If leadership is gating AI investment on ROI proof, the goal is to generate measurable evidence quickly, without launching a dozen pilots.
In the first 30 days, run the context audit and publish a one-page summary that highlights where definitions diverge, where rules are reimplemented, and where time was lost to reconciliation and rework. Treat it like a financial baseline that shows what fragmentation costs us today.
In days 31–60, pick one domain that matters commercially (customer, case, claim, policy, order) and standardize it enough to reuse. That doesn’t mean boiling the ocean. It means agreeing on sources of truth, establishing a canonical definition set, and capturing a short rulebook for the decision points that drive the most exceptions and escalations.
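What a canonical definition set with a short rulebook might look like in machine-readable form is easy to sketch. The example below is illustrative only; the entity, system names, and rules are hypothetical, not a Hyland API:

```python
# Illustrative sketch of a canonical definition for one domain entity.
# The goal: capture the definition, the source of truth, and the decision
# rules in one machine-readable place instead of in people's heads.
# Every name and rule below is a hypothetical example.

from dataclasses import dataclass, field

@dataclass
class CanonicalEntity:
    name: str
    definition: str                # the agreed business definition
    source_of_truth: str           # which system wins when records conflict
    synonyms: dict = field(default_factory=dict)  # how other systems name it
    rules: list = field(default_factory=list)     # rulebook for decision points

customer = CanonicalEntity(
    name="Customer",
    definition="A party with at least one executed contract in the last 24 months.",
    source_of_truth="CRM",
    synonyms={"ERP": "account", "Support": "client"},
    rules=[
        "If CRM and ERP disagree on status, CRM wins.",
        "Prospects with no executed contract are not Customers.",
    ],
)

print(customer.source_of_truth)  # every downstream workflow reads one answer
```

The format matters far less than the agreement: once the definition, source of truth, and exception rules live in one governed place, both workflows and AI systems can reuse them instead of re-deriving them.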
In days 61–90, prove reuse in two places. One can be operational (workflow, analytics, automation) and one can be AI-enabled (assistant, triage, decision support). The goal is to show that shared context reduces build time, reduces SME cycles, and improves a business KPI (cycle time, throughput, error rate, deflection, or time-to-resolution).
This is also the moment where the Hyland Enterprise Context Engine fits naturally as the productized form of what you’re proving. It’s a shared, governed layer for entities, relationships, policies, and provenance that multiple workflows and AI systems can rely on.
Leaders are right to demand ROI before expanding AI investment. The market is full of experimentation, and only a small minority are translating AI into sustained financial impact at scale.
The fastest way to make AI ROI predictable isn’t to buy more AI. It’s to reduce the friction that makes AI expensive to deploy and hard to trust: fragmented definitions, disconnected sources of truth, and business rules that get re-implemented differently in every system. That friction already has a measurable cost in delivery capacity and day-to-day productivity in large enterprises.
If you want an ROI-first path forward, start with the simple move of quantifying your context spend, standardizing one high-leverage domain, and proving reuse across two initiatives. That creates evidence, not promises. It also turns context from a recurring expense into a reusable asset that makes every future AI, automation, and analytics investment cheaper to deliver and faster to monetize.
At Hyland, that’s the operating model we’re building toward with the Enterprise Context Engine: helping enterprises turn context into a governed, reusable foundation so AI can deliver repeatable ROI, not isolated experiments.