
With a new AI supercluster in the U.S. and a significantly expanded collaboration with AMD, Vultr is stepping up its push in the global race for AI infrastructure. The privately held cloud infrastructure provider plans to deploy 24,000 additional AMD Instinct MI355X GPUs at a new 50 MW data center campus in Springfield, Ohio, positioning the site as a key hub for training and inference at scale.
The Springfield deployment will be one of Vultr’s largest GPU build-outs to date and is marketed as delivering “unprecedented performance per dollar” for AI workloads. For enterprises and AI-native companies under pressure to secure high-performance compute at predictable economics, this kind of cost-performance positioning is increasingly central to vendor selection.
Vultr has been an early adopter of AMD’s data center GPU roadmap, previously rolling out AMD Instinct MI325X and MI355X GPUs across its cloud platform. The new expansion deepens that engagement and signals a long-term bet on AMD as a strategic counterweight to competing GPU providers in the AI infrastructure market. AMD, for its part, is using partnerships like this to prove that its Instinct accelerators can support demanding real-world workloads at cloud scale.
Vultr and AMD both frame the supercluster as part of a broader full-stack collaboration rather than a GPU-only story. In addition to Instinct accelerators, Vultr is aligning with AMD across the infrastructure stack, including AMD EPYC 4005 Series processors and Vultr's own VX1 Cloud Compute offering. The combination is designed to give customers a consistent platform for both CPU-bound and GPU-bound workloads, whether they are building traditional cloud-native applications, large-scale AI models, or hybrid architectures.
Looking ahead, Vultr plans to extend its use of AMD Instinct GPUs by adopting the upcoming AMD Instinct MI450 series and integrating AMD’s “Helios” rack-scale infrastructure into future AI clusters. Helios is AMD’s rack-scale AI architecture built around EPYC CPUs, Instinct GPUs, Pensando networking, and the ROCm open software stack, designed to support large GPU clusters over Ethernet fabrics. For Vultr, aligning with Helios suggests an ambition to standardize on an open, rack-scale design for its next-generation AI environments.
The strategic rationale is clear: as demand for AI compute accelerates, cloud providers that can bring hyperscale capacity online quickly – and at attractive economics – stand to win business from both born-in-the-cloud AI startups and established enterprises. Vultr CEO J.J. Kardwell emphasizes speed and reach as key differentiators, arguing that building out racked GPU capacity at scale enables customers to bring next-generation AI applications to market faster.
AMD executives see the collaboration as a proof point for their efforts to challenge entrenched incumbents in data center AI. By partnering with providers like Vultr, AMD can showcase large-scale deployments that stress test its hardware and software under diverse customer workloads, from generative AI to more traditional machine learning and data analytics.
The Springfield campus also has a regional economic and strategic dimension. The project is supported by the Ohio Governor’s Office, the Ohio Department of Development, JobsOhio, the Dayton Development Coalition, the Greater Springfield Partnership, and the City of Springfield. For Ohio, the facility reinforces the state’s positioning as a growing hub for AI, cloud, and digital infrastructure investments, complementing broader U.S. efforts to distribute compute capacity beyond legacy coastal hubs. For customers, a Midwest location can offer advantages in latency, resilience, and geographic diversification for multi-region architectures.
For B2B technology leaders evaluating AI infrastructure partners, the Vultr–AMD development highlights several converging trends: the rise of alternative GPU ecosystems, the importance of cost-performance in cloud AI, the shift toward rack-scale designs, and the growing role of secondary markets like Ohio in hyperscale deployments. While the AI supercluster will compete in a crowded field of GPU clouds and AI-focused providers, its scale and reliance on AMD’s platform give it a distinct profile in a market still dominated by a small handful of vendors.
Executive Insights FAQ
What is the significance of Vultr’s new AI supercluster in Springfield, Ohio?
It marks one of Vultr’s largest GPU deployments to date, adding 24,000 AMD Instinct MI355X GPUs at a 50 MW campus specifically designed to deliver high-performance AI training and inference capacity with a strong emphasis on performance per dollar.
How does this collaboration fit into AMD’s broader AI strategy?
The project demonstrates AMD Instinct GPUs at large cloud scale and reinforces AMD’s effort to establish its accelerators, CPUs, and open ROCm software stack as a credible full-stack alternative for demanding AI workloads, rather than focusing solely on chip-level benchmarks.
What does Vultr gain by standardizing on AMD’s AI platform roadmap?
Vultr gains access to a consistent family of accelerators and CPUs, including future MI450 GPUs and Helios rack-scale designs, enabling it to build scalable, Ethernet-based GPU clusters and differentiate on cost, openness, and flexibility versus more proprietary AI infrastructure offerings.
Why is the location in Ohio strategically important?
The Springfield site expands Vultr’s footprint in the U.S. Midwest, backed by state and local economic development agencies, and supports regional diversification of AI compute, which can improve latency, resilience, and regulatory alignment for customers deploying multi-region architectures.
How does this impact enterprises looking for AI infrastructure?
Enterprises gain an additional high-density, AMD-based option for cloud AI infrastructure, potentially with more favorable cost-performance and ecosystem diversity, which can help mitigate vendor concentration risk while still providing the scale needed for modern AI development and deployment.


