
U.S. enterprises are accelerating the use of artificial intelligence across public cloud environments as they seek to modernize core operations while maintaining tighter control over costs, governance, and performance, according to new research from Information Services Group (ISG).
The findings, published in the 2025 ISG Provider Lens Multi Public Cloud Services report for the United States, suggest that AI is rapidly becoming a foundational element of enterprise cloud strategies rather than an experimental add-on.
The report indicates that organizations are increasingly relying on cloud-native AI tools to simplify application development, modernize data pipelines, and automate workflows. These tools are helping enterprises reduce development complexity and shorten time to value, particularly as they transition legacy systems into hybrid and multicloud architectures. Rather than centralizing AI workloads in a single environment, many companies are adopting distributed operating models that allow AI training, inference, and data preparation to run across a combination of public cloud and on-premises systems.
This shift reflects growing awareness of the operational and regulatory demands associated with AI at scale. Enterprises are balancing the need for flexibility with new requirements around compliance, data governance, and performance predictability. According to ISG, organizations are prioritizing architectures that allow them to support AI workloads without becoming overly dependent on any single cloud provider, while still maintaining consistent oversight and operational discipline.
One of the most significant trends identified in the report is the refinement of workload placement strategies. As AI consumption increases, U.S. enterprises are becoming more deliberate about where different stages of AI workloads are executed. Training jobs are often placed close to curated data sources to reduce data movement and egress costs, while inference workloads are positioned nearer to end users or machines to improve latency and reliability. This approach enables organizations to align capacity planning with developer workflows and data adjacency, improving both performance and cost efficiency.
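The placement logic described above can be illustrated with a simple cost comparison. The sketch below is not from the ISG report; the region names, egress prices, compute costs, and latency figures are made-up assumptions, used only to show how training placement can be driven by data movement cost while inference placement is driven by latency.

```python
# Illustrative sketch of workload placement: training is placed to minimize
# total cost (egress + compute), inference to minimize latency to users.
# All names and numbers below are hypothetical, not figures from the report.

def placement_cost(data_gb, egress_per_gb, compute_cost):
    """Total cost of running a stage in a location: data movement plus compute."""
    return data_gb * egress_per_gb + compute_cost

def choose_location(stage, locations):
    """Pick the location minimizing cost for training, or latency for inference."""
    if stage == "training":
        return min(
            locations,
            key=lambda r: placement_cost(r["data_gb"], r["egress_per_gb"], r["compute_cost"]),
        )
    # Inference: minimize latency to end users or machines.
    return min(locations, key=lambda r: r["latency_ms"])

locations = [
    # Curated data already lives in the cloud region, so training there moves no data.
    {"name": "cloud-east", "data_gb": 0, "egress_per_gb": 0.00, "compute_cost": 120.0, "latency_ms": 45},
    # On-prem is close to users but would require moving 500 GB of training data.
    {"name": "on-prem", "data_gb": 500, "egress_per_gb": 0.09, "compute_cost": 80.0, "latency_ms": 5},
]

print(choose_location("training", locations)["name"])   # cheaper total despite higher compute
print(choose_location("inference", locations)["name"])  # lowest latency to end users
```

With these assumed numbers, training lands beside the curated data (no egress) even though compute there costs more, while inference lands on-premises for latency, mirroring the split the report describes.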
Cost governance has emerged as a central concern as AI workloads place increasing pressure on cloud budgets. The report notes that enterprises are treating financial constraints as design parameters rather than afterthoughts. FinOps practices are being embedded directly into AI pipelines, allowing teams to monitor and manage costs in near real time. Techniques such as model quantization, selective caching, and moving preprocessing tasks from GPUs to CPUs are being widely adopted to reduce accelerator usage while preserving accuracy. These measures help organizations manage cost per transaction while meeting stringent reliability and latency targets.
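Of the techniques mentioned above, model quantization is the most readily sketched. The following is a minimal pure-Python illustration of symmetric int8 post-training quantization, assuming a single per-tensor scale factor; a production system would use a framework's quantization tooling rather than hand-rolled code like this.

```python
# Illustrative sketch of post-training weight quantization, one of the
# accelerator-cost techniques the report mentions. Symmetric int8 scheme
# with one per-tensor scale; purely educational, not a production method.

def quantize(weights):
    """Map float weights onto int8 range [-127, 127] with a single scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.3, 0.07, 0.9]
q, scale = quantize(weights)
restored = dequantize(q, scale)

# int8 storage is 4x smaller than float32, at a small reconstruction error.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, round(max_err, 4))
```

The trade the report alludes to is visible here: storage and memory bandwidth drop by roughly 4x versus float32, while the reconstruction error stays small enough that accuracy is largely preserved for many workloads.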
Despite the rapid expansion of AI initiatives, ISG finds that generative AI adoption in the U.S. remains relatively cautious. Most enterprises are advancing only high-confidence, high-accuracy use cases into production environments. Structured proof-of-concept programs are common, with clear rules governing problem definition, testing methodologies, and ownership of data and risk. This disciplined approach allows organizations to limit exposure while building institutional knowledge and standardizing requirements for governance and oversight as deployments scale.
The report also highlights emerging trends beyond workload optimization and cost control. Cloud service providers are increasingly using agentic AI systems to support governance, orchestration, and operational decision-making within complex environments. At the same time, sustainability considerations are becoming more tightly integrated into cloud and AI planning, with enterprises incorporating energy efficiency and carbon metrics into resource allocation and cost models.
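One way to fold carbon metrics into a cost model, as the report says enterprises are beginning to do, is to attach a shadow price to estimated emissions when ranking placement options. The sketch below is a hypothetical illustration; the grid intensities, prices, and carbon price are invented values, not data from ISG.

```python
# Hypothetical sketch: add a carbon shadow price to region selection,
# reflecting the report's note that carbon metrics are entering cloud
# cost models. All intensities and prices are made-up illustrative values.

def effective_cost(region, kwh, carbon_price_per_kg=0.05):
    """Dollar cost plus a shadow price on the job's estimated emissions."""
    emissions_kg = kwh * region["g_co2_per_kwh"] / 1000
    return region["cost_per_hour"] + emissions_kg * carbon_price_per_kg

regions = [
    {"name": "grid-coal", "cost_per_hour": 3.00, "g_co2_per_kwh": 800},
    {"name": "grid-hydro", "cost_per_hour": 3.10, "g_co2_per_kwh": 30},
]

# For a 100 kWh job, the low-carbon region wins despite a higher sticker price.
best = min(regions, key=lambda r: effective_cost(r, kwh=100))
print(best["name"])
```

The design choice worth noting is that the carbon price is a tunable parameter: set it to zero and selection reverts to pure dollar cost, raise it and allocation shifts toward cleaner grids, which is how energy and carbon metrics can coexist with conventional FinOps accounting.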
ISG’s analysis evaluates 63 providers across seven service categories, including consulting, managed services, FinOps, hyperscale infrastructure, and SAP HANA cloud services. Several global systems integrators and service providers were recognized as leaders across multiple quadrants, reflecting the breadth of capabilities required to support AI-driven cloud transformations. The report also identifies a group of rising providers with strong future potential and highlights LTIMindtree as the top global performer for customer experience in public cloud services for 2025, based on ISG’s Voice of the Customer research.
Taken together, the findings suggest that U.S. enterprises are moving beyond early AI experimentation toward more mature, governed, and economically sustainable models of adoption. As AI becomes deeply embedded in cloud strategies, success increasingly depends on disciplined execution, precise workload placement, and the ability to balance innovation with control.
Executive Insights FAQ
Why are U.S. enterprises expanding AI use in public clouds?
They are seeking faster innovation, improved productivity, and scalable infrastructure while leveraging cloud-native AI capabilities.
How are organizations managing the cost of AI workloads?
By integrating FinOps practices, optimizing GPU usage, and carefully placing workloads to reduce unnecessary data movement.
Why is workload placement becoming more important?
Proper placement improves performance, lowers latency, and helps control cloud egress and infrastructure costs.
Is generative AI widely deployed in production today?
Adoption remains selective, with enterprises prioritizing high-accuracy use cases and structured proof-of-concept programs.
What role do service providers play in AI-driven cloud adoption?
They help enterprises move from experimentation to production with standardized governance, optimization, and operational models.
