After more than a decade of aggressive cloud migration initiatives, enterprise technology leaders are recalibrating their infrastructure strategies around what industry experts call a “cloud smart” approach, one that emphasizes workload optimization over blanket migration mandates.
The shift comes as organizations confront mounting pressure to justify years of hefty cloud costs. Industry data from the enterprise software delivery and DevOps platform Harness suggests that enterprises spend about $45 billion annually on underutilized or poorly optimized cloud infrastructure, roughly 21% of total cloud budgets in 2025. If accurate, that’s a staggering amount of waste, considering that 60% of organizations run about half of their workloads in the cloud, according to Fortinet data.
From Speed to Scrutiny
The evolution marks a notable departure from the “Cloud First” directives that dominated IT strategy throughout much of the 2010s and early 2020s. During that period, CIOs faced pressure to demonstrate rapid data center exits and cloud adoption rates. Now, according to interviews with technology strategists and industry analysts, the conversation has shifted toward return on investment and architectural precision.
Joe Batista, a technology strategist who advises enterprise clients, distinguishes between migration velocity and business value creation. “Speed without mass is just motion,” Batista says, arguing that many organizations achieved rapid cloud deployment without generating proportional business impact.
The reassessment is driven by several factors converging this year: escalating AI infrastructure costs, growing data sovereignty requirements, and increased CFO scrutiny of technology budgets in an uncertain economic environment.
What is Cloud Smart?
While definitions vary, “cloud smart” generally refers to infrastructure placement decisions based on workload characteristics rather than predetermined migration targets. This treats public cloud, private cloud, edge computing, and on-premises infrastructure as complementary options rather than competing alternatives.
Forrester analyst commentary from 2025 characterized the shift as a move toward “purposeful” multicloud deployment, in which workload distribution reflects performance, cost, and compliance requirements rather than vendor consolidation or migration momentum.
In practice, this might mean running customer-facing applications on public cloud infrastructure for global scalability, processing sensitive AI model training on private infrastructure to manage costs and intellectual property concerns, and handling latency-sensitive operations at the edge.
The AI Factor
Artificial intelligence workloads have emerged as a particular catalyst for infrastructure diversification, according to multiple sources. The combination of data gravity considerations, latency requirements, and token consumption costs has prompted some organizations to reconsider centralized public cloud AI deployments.
Forrester projections suggest that at least 15% of enterprises will shift toward private AI deployments atop private cloud infrastructure in 2026, primarily to address cost and operational risk concerns.
This has given rise to what some vendors and analysts are calling “neoclouds”—specialized infrastructure that combines private data center deployment with cloud-like operational characteristics.
Implementation Challenges
Technology advisors point to several areas where organizations are focusing their cloud-smart implementation efforts:
Workload Assessment: Some enterprises have established formal processes for evaluating where workloads should run, weighing technical requirements against business constraints. These assessment frameworks represent an evolution of earlier cloud centers of excellence, which primarily focused on accelerating migration rather than optimizing placement.
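The placement logic behind such an assessment framework can be sketched as a simple weighted rubric. This is a minimal, hypothetical illustration: the criteria, weights, and environment names below are assumptions for the sketch, not any specific enterprise’s framework.

```python
from dataclasses import dataclass

# Hypothetical placement criteria; real frameworks weigh many more factors.
@dataclass
class Workload:
    name: str
    latency_sensitive: bool          # needs edge proximity?
    data_residency_bound: bool       # subject to sovereignty rules?
    bursty_demand: bool              # benefits from elastic scaling?
    steady_state_utilization: float  # 0.0-1.0; high values favor owned capacity

def recommend_placement(w: Workload) -> str:
    """Toy rubric: score each environment and pick the best fit."""
    scores = {"public_cloud": 0, "private_cloud": 0, "edge": 0}
    if w.bursty_demand:
        scores["public_cloud"] += 2   # elasticity pays off for spiky load
    if w.steady_state_utilization > 0.7:
        scores["private_cloud"] += 2  # predictable load favors owned capacity
    if w.data_residency_bound:
        scores["private_cloud"] += 1  # keep regulated data on controlled infra
    if w.latency_sensitive:
        scores["edge"] += 2
    return max(scores, key=scores.get)

# A bursty, unconstrained workload scores highest for public cloud here.
print(recommend_placement(Workload("checkout", False, False, True, 0.3)))
```

The point of the sketch is the structure, not the weights: placement becomes a repeatable, data-driven decision rather than a default migration target.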
Cost Management: The financial operations (“FinOps”) discipline has gained prominence as organizations seek to address cloud waste. Practitioners in this area advocate for giving engineering teams real-time cost visibility and making cost optimization a standard engineering requirement alongside traditional metrics like uptime and latency.
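A guardrail that treats cost as a standard engineering signal might look like the following sketch. The budget figures, service names, and data feed are hypothetical; real FinOps tooling would pull spend from a cloud billing export.

```python
# Minimal sketch of a FinOps-style budget guardrail, assuming a daily spend
# feed per service. Budgets and services below are illustrative only.
DAILY_BUDGETS = {"checkout": 1200.0, "analytics": 800.0}  # USD per day

def budget_alerts(daily_spend: dict[str, float], threshold: float = 0.9) -> list[str]:
    """Flag services whose spend exceeds `threshold` of their daily budget,
    so engineers see cost drift alongside uptime and latency alerts."""
    alerts = []
    for service, spend in daily_spend.items():
        budget = DAILY_BUDGETS.get(service)
        if budget and spend >= threshold * budget:
            alerts.append(f"{service}: ${spend:,.0f} of ${budget:,.0f} budget")
    return alerts

print(budget_alerts({"checkout": 1150.0, "analytics": 300.0}))
```

Wiring a check like this into dashboards or CI is what practitioners mean by giving engineering teams real-time cost visibility.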
AI Integration: Tim Crawford, a CIO advisor, observes a shift in how organizations approach AI deployment. Rather than pursuing broad AI implementation, he notes clients are focusing on specific, high-value use cases. “Instead of trying to apply it to everything, let’s figure out where we can really get the best value from it,” Crawford says.
Crawford also highlights emerging interest in what he calls “agentic” workflows—where AI systems autonomously interact with multiple business systems—rather than simpler chatbot implementations. This does require exposing legacy systems through APIs that AI agents can consume.
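Exposing a legacy system so an agent can consume it typically means wrapping it in a small, well-described function. The sketch below is a generic illustration: the order-status lookup, the tool schema, and the dispatcher are all hypothetical, not a specific agent framework’s API.

```python
# Hedged sketch: a legacy system wrapped as a typed "tool" an AI agent
# framework could call. The legacy lookup here is a stub.
def lookup_order_status(order_id: str) -> str:
    """Facade over a hypothetical legacy order system."""
    legacy_records = {"A-1001": "shipped", "A-1002": "backordered"}
    return legacy_records.get(order_id, "unknown")

# Many agent frameworks accept tools described by JSON-Schema-style metadata.
ORDER_TOOL = {
    "name": "lookup_order_status",
    "description": "Return the fulfillment status for an order ID.",
    "parameters": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}

def dispatch(tool_call: dict) -> str:
    """Route an agent's tool call to the legacy facade."""
    if tool_call["name"] == "lookup_order_status":
        return lookup_order_status(tool_call["arguments"]["order_id"])
    raise ValueError(f"unknown tool: {tool_call['name']}")

print(dispatch({"name": "lookup_order_status", "arguments": {"order_id": "A-1001"}}))
```

The facade pattern matters here: the agent never touches the legacy system directly, which keeps the integration surface small and auditable.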
Security and Governance: As workloads distribute across multiple environments, some organizations are moving toward identity-based security models rather than traditional perimeter defenses. Crawford points to potential third-party authorization platforms that could manage governance across multicloud environments.
Performance Metrics: Beyond traditional uptime measurements, some CIOs are exploring metrics around innovation velocity—how quickly organizations can move from market signal to deployed product. This approach uses public cloud for rapid experimentation, then optimizes successful workloads by moving them to more cost-effective infrastructure when appropriate.
Industry Perspectives Differ
The cloud-smart narrative has gained traction among technology advisors and some analysts, though perspectives vary on whether it represents a fundamental shift or simply the maturation of existing practices.
Proponents argue that “cloud smart” enables organizations to balance the elasticity and innovation pace of public cloud with the cost control and compliance benefits of private infrastructure. They cite waste statistics to argue that undisciplined migration has created inefficiencies that require correction.
Some practitioners note that workload repatriation—moving applications back from the public cloud to on-premises infrastructure—should be viewed as an optimization rather than a failure when supported by cost and performance data.
The strategy does require organizational capabilities that some enterprises may lack, including sophisticated workload assessment processes, real-time cost monitoring, and the ability to manage hybrid infrastructure complexity.
Looking Ahead
As organizations navigate their 2026 infrastructure strategies, the “cloud smart” framework appears to be influencing how technology leaders evaluate placement decisions. Whether this represents a lasting shift or a temporary recalibration remains to be seen.
What appears clear is that the conversation has evolved beyond migration velocity toward questions of architectural fit and financial return. In this environment, the infrastructure strategy that serves specific business requirements—rather than adhering to vendor preferences or industry mandates—is gaining currency among technology leadership.
The challenge for CIOs will be executing on more nuanced plans while managing the operational complexity that comes with distributed infrastructure architectures.
