Guy Bourgault recently received a call from a Fortune 500 executive seeking evidence that the company’s AI investments were delivering results. The executive shared several metrics: model accuracy above 95%, three pilots deployed, and 50 employees trained. When Bourgault, who heads agentic systems at technology and services company Concentrix, asked what had improved for the business, the line went quiet.

“They couldn’t tell me if cycle times had decreased, if customer satisfaction had moved, if they’d reduced cost to serve,” Bourgault recalls. “They had no idea if it was moving the business needle.”

Similar conversations are now playing out across industries, marking a fundamental change in how enterprises approach digital transformation and AI deployment. After years of technology-first implementation—where “success” meant simply deploying the system—executives are demanding proof that investments create actual economic value.

The Technology-First Era

For much of the past decade, enterprises treated digital transformation as a checkbox exercise. Board pressure to “do something with AI” led to what Mark Townsend, co-founder and CTO at SaaS marketplace AcceleTrex, calls “change for change’s sake.” Companies measured success through activity metrics—projects launched, employees trained—rather than improved throughput, operating margins, or customer conversion.

The incentive structure reinforced the challenge. IT leaders got promoted for standing up pilots. Consultants got paid for delivering technology. Few were asked whether the technology solved a business problem. “Most organizations wanted to show their board they’d built this thing,” Bourgault says. “But going from pilot to scale requires being intentional about setting it up for success. Most skipped that part.”

The result was what Francis Brero, VP of AI strategy at business-to-business software provider HG Insights, calls “workslop”—enterprises drowning in low-quality AI-generated content that creates more work than it saves. Companies were simply applying AI to what they were already doing poorly. According to a Harvard Business Review study, workslop costs organizations $9 million in lost productivity per 10,000 employees.

The Lack of a Business Baseline

What largely enabled technology-first deployments was a failure in business fundamentals: most enterprises hadn’t measured existing performance, making it impossible to assess outcomes effectively. Too often, organizations optimized for AI’s technical performance—did the model work?—rather than business value created. And they focused closely on cost reduction, missing paths to value creation.

“The biggest mistake we see is being too myopic or focused on cost reduction as the way AI will drive value,” Bourgault explains. “If you only think about how many humans were doing that before and how many you can eliminate, you’re not going to realize the value from AI.”

The cost-only mindset also missed how AI improvements compound across multiple business functions. A knowledge base query system doesn’t just reduce contact center costs—it improves website interactions, mobile chat, and employee engagement. But organizations that measure only headcount reduction never capture full business optimization.

The Shift to Business Outcomes

Several forces converged to drive change. Boards demanded tougher ROI proof as AI budgets ballooned. CFOs wanted evidence of shareholder value. And a new generation of AI-native companies emerged with dramatically different economics.

Jeremiah Owyang, general partner at Blitzscaling Ventures, points to a striking metric: AI-native startups generate $2.2 million in revenue per employee, versus $220,000 for traditional SaaS companies—a 10x difference. These companies design workflows in which AI agents complete tasks alongside humans; what matters is the outcome, not whether a person or an algorithm did the work.

“Traditional enterprises think about an org chart,” Owyang says. “AI startups think about workflows. What is the goal? Let’s design a workflow to achieve it.”

That workflow mindset is filtering into how forward-leaning enterprises approach AI value. Instead of asking “Can we deploy this technology?” they ask, “Which business capability should improve?”

Townsend describes this as a healthy shift from a technology to a business focus. “Leaders are no longer asking, ‘Did we deploy AI?’ They’re asking, ‘Did this reduce cost, increase revenue, or materially improve customer experience?’”

How Measuring Value Is Evolving

That shift in priorities, as Townsend describes it, is driving an evolution in the metrics used to measure business value. Bourgault explains that Concentrix uses an “agentic value map,” a framework for mapping business processes to the places where AI can be broadly leveraged. The approach forces organizations to understand all the ways a single AI capability can contribute to business value, rather than measuring it against a single function.

Basic metrics are also evolving. Consider how contact centers are moving beyond first-call resolution to focus on inclusion rate—the percentage of customer interactions in which AI agents participate. HR departments track time to complete specific tasks across entire processes, not just the AI-automated portions. Sales teams measure overall productivity and task completion frequency, as AI often supports human work.
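In code, an inclusion rate is a straightforward ratio over interaction logs. The sketch below is purely illustrative—the interaction records, channel names, and the `ai_` prefix convention are hypothetical, not drawn from any vendor's actual telemetry:

```python
# Illustrative sketch: computing an "inclusion rate" -- the share of
# customer interactions in which an AI agent participated at any point.
# The records and channel names below are hypothetical examples.

interactions = [
    {"id": 1, "channels": ["ai_chat", "human_agent"]},
    {"id": 2, "channels": ["human_agent"]},
    {"id": 3, "channels": ["ai_chat"]},
    {"id": 4, "channels": ["ivr", "ai_chat", "human_agent"]},
]

# An interaction counts as "included" if any step involved an AI channel.
included = sum(
    1 for i in interactions
    if any(c.startswith("ai_") for c in i["channels"])
)
inclusion_rate = included / len(interactions)
print(f"AI inclusion rate: {inclusion_rate:.0%}")  # 3 of 4 -> 75%
```

The point of the metric is coverage, not resolution: an interaction counts even when a human ultimately closes it out.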

Brero advocates for what he calls “task decomposition”—breaking jobs into component tasks, mapping them to automation potential, and calculating the percentage of wage value in automatable tasks. “The breakthrough comes from asking ‘which tasks within this job can AI perform?’ instead of ‘can AI replace this job?’” he explains. “Automating 80% of tasks is already a massive time saver. Most organizations get stuck trying to automate the final 20% that requires human judgment. That’s where projects die.”
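Brero's decomposition reduces to simple arithmetic once a job is broken into tasks. The sketch below illustrates the idea; the task list, hours, and automation flags are hypothetical, and it uses hours as a proxy for wage value (i.e., assuming a uniform hourly wage across tasks):

```python
# Illustrative sketch of task decomposition: break a job into tasks,
# flag which ones AI could plausibly automate, and compute the share
# of the role's time (a proxy for wage value, assuming a uniform wage)
# sitting in automatable work. All tasks and figures are hypothetical.

def automatable_share(tasks):
    """tasks: list of (name, hours_per_week, automatable) tuples."""
    total_hours = sum(hours for _, hours, _ in tasks)
    auto_hours = sum(hours for _, hours, auto in tasks if auto)
    return auto_hours / total_hours

support_rep_tasks = [
    ("answer routine account questions", 15, True),
    ("draft follow-up emails",            8, True),
    ("look up order status",              7, True),
    ("handle escalated complaints",       6, False),  # needs human judgment
    ("negotiate retention offers",        4, False),  # needs human judgment
]

share = automatable_share(support_rep_tasks)
print(f"{share:.0%} of this role's time is in automatable tasks")  # 75%
```

Framed this way, the hypothetical role above is already 75% automatable without touching the judgment-heavy remainder—exactly the trap Brero warns about chasing.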

Poor Tech Hygiene Holds AI Deployments Back

Perhaps the most challenging lesson emerging from value-focused deployments is that AI performance depends less on AI capabilities than on fundamental business hygiene. Data quality, content readiness, process maturity—all the things enterprises have struggled with for decades—turn out to matter more than model sophistication.

“The most important factors related to AI proving its value do not have to do with the AI itself,” Bourgault says. “They have to do with the underlying factors in a business that contribute to performance. Data quality, data readiness, workforce readiness—these things are hugely impactful on AI’s ability to move the business needle.”

Owyang sees a similar pattern in the companies he invests in. The most successful enterprise AI deployments start with what he calls “sorting through the moldy data lakes”—using AI agents to comb through years of unstructured data, finding opportunities that would be impossible to surface manually. For example, a large beverage company used AI to parse receipts and order forms, identifying follow-up sales opportunities that had been buried in unstructured data for years.

What Success Looks Like

Companies that successfully focus on business value outcomes share common patterns. They start with a single measurable business objective rather than a technology use case. They first baseline their current performance. They instrument processes so results are observable and attributable. And they communicate value in business terms.

“The message should be: here’s what we improved, here’s how we know, and here’s what it’s worth in dollars, hours, or customer impact,” Townsend explains. “This creates a portfolio view of what’s working, what’s stalled, and where to double down.”

The portfolio strategy matters because not every AI initiative delivers equal value. Organizations need frameworks to decide what to scale, refine, pause, or sunset. Measuring the right outcomes enables those decisions.

For the Fortune 500 executive who called Bourgault, the conversation eventually moved from defending pilots to understanding what the business needed the technology to accomplish. It’s a shift happening across the enterprise landscape—not because the technology changed, but because the questions finally did.