At a recent launch event, DataDirect Networks (DDN) officially unveiled Infinia 2.0, its latest object storage system, designed for artificial intelligence (AI) training and inference. According to the company, Infinia 2.0 delivers up to 100x AI data acceleration and improves cloud data center cost efficiency by up to 10x. The announcement quickly drew widespread industry attention.
"85 of the world's top 500 companies run their AI and high-performance computing (HPC) applications on DDN's data intelligence platform. The launch of Infinia 2.0 will help customers achieve faster model training and real-time insights across data analytics and AI frameworks, while remaining adaptable to future GPU efficiency and energy-consumption requirements," the company said, stressing that the system will give enterprises a stronger competitive position in AI.

Paul Bloch, co-founder and president of DDN, added: "Our platform is already deployed in some of the world's largest AI factories and cloud environments, fully demonstrating its ability to support critical AI operations." Notably, Elon Musk's xAI is among DDN's customers, further underscoring Infinia 2.0's standing in the industry.
AI data storage is at the core of the Infinia 2.0 design. "AI workloads require real-time data intelligence to eliminate bottlenecks, accelerate workflows, and scale seamlessly across complex model enumeration, pre-training and post-training, retrieval-augmented generation (RAG), agentic AI, and multimodal environments," said Chief Technology Officer Sven Oehme. "Infinia 2.0 is designed to maximize the value of AI while providing real-time data services, efficient multi-tenant management, intelligent automation, and a powerful AI-native architecture."
The system offers a range of advanced features, including event-driven data mobility, multi-tenant support, and a hardware-independent design, delivering 99.999% uptime, up to 10x always-on data reduction, fault-tolerant network erasure coding, and automated quality of service (QoS). In addition, Infinia 2.0 integrates with NVIDIA NeMo, NIM microservices, GPUs, BlueField-3 DPUs, and Spectrum-X networking to further accelerate AI data pipelines.
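To illustrate the principle behind fault-tolerant erasure coding mentioned above, here is a minimal single-parity XOR sketch: k data shards plus one parity shard survive the loss of any one shard. This is a toy illustration only; the article does not describe Infinia 2.0's actual scheme, and production systems typically use Reed-Solomon-style codes that tolerate multiple simultaneous failures.

```python
def make_parity(shards: list[bytes]) -> bytes:
    """XOR equal-length data shards together into one parity shard."""
    parity = bytearray(len(shards[0]))
    for shard in shards:
        for i, b in enumerate(shard):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild the single missing data shard from survivors plus parity."""
    # XOR of all remaining shards and the parity cancels out everything
    # except the lost shard.
    return make_parity(surviving + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]   # three 4-byte data shards
parity = make_parity(data)

rebuilt = recover([data[0], data[2]], parity)  # pretend shard 1 was lost
assert rebuilt == data[1]                      # recovered b"BBBB"
```

The trade-off is that single parity tolerates only one failure; wider codes (e.g. 8+2 or 8+3) spend more parity shards to survive concurrent disk or node losses across a network.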
DDN claims that Infinia 2.0 reaches TB/s-class throughput with sub-millisecond latency, far outperforming AWS S3 Express. According to independent benchmarks, Infinia 2.0 delivers 100x improvements in AI data acceleration, AI workload processing speed, metadata processing, and object list processing, and is 25x faster for AI model training and inference queries. These figures position it as a leading solution in the AI field.
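For readers who want to sanity-check sub-millisecond latency claims against their own storage, the general micro-benchmark shape looks like the sketch below. A plain in-memory dict stands in for the object store, since the article describes no Infinia client API; against a real system, the `get` call would be replaced by the store's actual read operation.

```python
import time
import statistics

# Stand-in object store: 1,000 objects of 1 KiB each (hypothetical workload).
store = {f"obj-{i}": bytes(1024) for i in range(1000)}

def get(key: str) -> bytes:
    """Placeholder for a real object-store GET."""
    return store[key]

# Time each GET individually and report a tail-latency percentile,
# which matters more than the mean for storage claims.
latencies = []
for i in range(1000):
    t0 = time.perf_counter()
    get(f"obj-{i}")
    latencies.append(time.perf_counter() - t0)

p99 = statistics.quantiles(latencies, n=100)[98]  # 99th percentile, seconds
print(f"p99 GET latency: {p99 * 1e6:.1f} µs")
```

Real benchmarks would also vary object size, concurrency, and cache state, since a "sub-millisecond" figure is only meaningful alongside those parameters.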
The Infinia 2.0 system scales from terabytes to exabytes and supports more than 100,000 GPUs and one million simultaneous clients, providing a solid foundation for large-scale AI innovation. DDN emphasizes that the system performs well in real-world data center and cloud deployments, delivering efficiency and cost savings at scales from 10 to over 100,000 GPUs.
"By combining Infinia 2.0 and DDN's data intelligence platform with Supermicro's high-end server solutions, the two companies have collaborated to build one of the world's largest AI data centers," said Charles Liang, CEO of Supermicro. This partnership may be tied to the expansion of xAI's Colossus data center, further driving the development of AI infrastructure.