
Rocky Linux has become the first enterprise Linux distribution authorized to deliver NVIDIA’s complete AI and networking software stack out of the box, according to CIQ. The integration targets organizations deploying GPU-accelerated workloads across AI, high-performance computing (HPC), and cloud-native environments that need fast, fully validated infrastructure.
The move marks a milestone in enterprise and cloud-native computing, positioning Rocky Linux from CIQ (RLC) and Rocky Linux from CIQ – AI (RLC-AI) as turnkey platforms for organizations running large-scale GPU-accelerated workloads in AI, HPC, and scientific computing.
With the integration of NVIDIA’s DOCA OFED alongside the CUDA Toolkit, CIQ, the company behind Rocky Linux, claims that RLC and RLC-AI are now the first Linux distributions licensed and validated to include NVIDIA’s full AI and networking software ecosystem. This integration enables developers and enterprises to move from installation to operational AI inference up to nine times faster, based on CIQ’s internal benchmarks.
Rocky Linux, originally created as a community replacement for CentOS after Red Hat shifted CentOS to CentOS Stream, has rapidly evolved into a preferred foundation for high-performance, enterprise-grade computing. CIQ’s enhanced offering represents a shift from traditional open-source distributions toward fully validated platforms that can handle GPU-accelerated and multi-node workloads at scale. For enterprises, the implications are significant: a single, ready-to-run environment that eliminates the time-consuming process of manually installing and validating GPU drivers, libraries, and network interfaces.
Modern AI and high-performance workloads are increasingly constrained not by hardware, but by the complexity of deploying and scaling GPU-enabled software environments. As organizations expand from single-node experimentation to clusters with thousands of GPUs, challenges arise around driver compatibility, network optimization, and security compliance. Enterprise-grade servers such as Dell’s PowerEdge XE9680 and HPE’s Cray XD systems depend on technologies like DOCA OFED, RDMA, and IDPF to maintain efficient GPU-to-GPU communication across nodes – all of which are now supported natively within CIQ’s Rocky Linux distributions.
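For readers checking whether a node’s RDMA fabric is visible to the operating system, a minimal sketch using the standard rdma-core tooling (not a CIQ-specific command; the grep fields are ordinary `ibv_devinfo` output) might look like this:

```shell
#!/bin/sh
# Illustrative check for RDMA-capable devices using rdma-core's ibv_devinfo.
# On a node without RDMA hardware or tools, the check is skipped gracefully.
if command -v ibv_devinfo >/dev/null 2>&1; then
  # Show device names and link layer (InfiniBand vs Ethernet/RoCE)
  ibv_devinfo | grep -E 'hca_id|link_layer' || echo "no RDMA devices found"
  rdma_status="checked"
else
  echo "rdma-core tools not installed; skipping RDMA device check"
  rdma_status="skipped"
fi
```

On a cluster node with ConnectX adapters and DOCA OFED installed, the `hca_id` lines list each adapter available for GPU-to-GPU communication.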
By delivering pre-built, validated images that include CUDA, DOCA, and all supporting dependencies, RLC and RLC-AI allow enterprises, according to CIQ, to reduce environment setup time from 30 minutes to just three. The result is a “download-and-deploy” model that transforms Rocky Linux from a bare-metal operating system into what CIQ describes as a “developer appliance” for AI infrastructure.
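In container form, the download-and-deploy model described above amounts to little more than a pull and a run. The sketch below is illustrative only: the image path `docker.io/ciq/rlc-ai:latest` is a hypothetical placeholder (consult CIQ for the actual registry location), and GPU pass-through assumes the NVIDIA Container Toolkit’s CDI support in recent podman releases.

```shell
#!/bin/sh
# Hypothetical image name -- the real registry path comes from CIQ.
IMAGE="docker.io/ciq/rlc-ai:latest"

if command -v podman >/dev/null 2>&1; then
  # Pull the pre-validated image (placeholder name, so failure is tolerated here)
  podman pull "$IMAGE" || echo "pull failed (image name is a placeholder)"
  # Expose all GPUs via CDI and confirm the driver stack inside the container
  podman run --rm --device nvidia.com/gpu=all "$IMAGE" nvidia-smi \
    || echo "run skipped (no GPU/CDI on this host)"
  deploy_status="attempted"
else
  echo "podman not installed; skipping deploy sketch"
  deploy_status="skipped"
fi
```

The point of the model is that everything `nvidia-smi` depends on (driver userspace, CUDA libraries, DOCA components) ships pre-validated in the image rather than being assembled by hand on each host.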
Demand for AI-Ready Infrastructure
Gregory Kurtzer, founder and CEO of CIQ, said the company’s goal is to remove friction between developers and accelerated computing environments. “If you’re building applications that leverage accelerated computing, Rocky Linux from CIQ is now the obvious choice,” said Mr. Kurtzer. “We’ve removed every barrier between developers and GPU performance. With the complete, validated NVIDIA stack integrated directly into Rocky Linux from CIQ, teams can focus entirely on innovation rather than infrastructure.”
For enterprise users, CIQ says the integration brings several measurable advantages. Pre-configured environments deliver faster time-to-productivity and reduce troubleshooting, while validated compatibility across hardware and networking layers simplifies scaling. According to CIQ, these optimizations not only accelerate deployment but also lower total cost of ownership by reducing configuration errors, security risks, and downtime. Fully signed drivers and Secure Boot support also address one of the most persistent security challenges in GPU infrastructure: maintaining compliance in tightly controlled IT environments.
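The signed-driver claim is something administrators can verify on their own hosts with stock tools. A minimal sketch, assuming standard `mokutil` and `modinfo` utilities (nothing CIQ-specific):

```shell
#!/bin/sh
# Illustrative checks for Secure Boot state and kernel-module signatures.
if command -v mokutil >/dev/null 2>&1; then
  # Reports "SecureBoot enabled" or "SecureBoot disabled"
  mokutil --sb-state || echo "Secure Boot state unavailable"
else
  echo "mokutil not available; skipping Secure Boot check"
fi

if modinfo nvidia >/dev/null 2>&1; then
  # Signature metadata embedded in a signed module
  modinfo nvidia | grep -E '^(signer|sig_key|sig_hashalgo)'
else
  echo "nvidia module not present; skipping signature check"
fi
verify_status="done"
```

On a host booted with Secure Boot, an unsigned or improperly signed GPU driver would simply refuse to load, which is why pre-signed drivers matter in locked-down environments.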
The company emphasizes that this development is not limited to research labs or AI startups. By integrating NVIDIA’s CUDA and DOCA frameworks directly into Rocky Linux, CIQ aims to enable broad adoption of GPU-accelerated computing across finance, manufacturing, telecommunications, and cloud service providers. Enterprise customers deploying NVIDIA GPUs and ConnectX networking solutions can now rely on a fully certified Linux base image designed for high-performance clusters and multi-tenant environments.
This announcement comes as demand for AI-ready infrastructure continues to outpace hardware availability, with enterprises seeking ways to reduce the software friction involved in scaling large GPU clusters. Analysts note that the inclusion of NVIDIA’s networking stack within a Linux distribution is a significant technical and strategic milestone, potentially changing how organizations build out their AI and HPC environments.
CIQ plans to showcase the enhanced Rocky Linux AI platform at KubeCon + CloudNativeCon North America (November 10–13, 2025) and SC25 (November 16–21, 2025), alongside partners demonstrating validated reference architectures built with NVIDIA AI infrastructure, ConnectX SuperNICs, and BlueField DPUs.

