Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is previewing new, completely re-designed X14 server platforms that will leverage next-generation technologies to maximize performance for compute-intensive workloads and applications. Building on the success of Supermicro's efficiency-optimized X14 servers launched in June 2024, the new systems feature significant upgrades across the board, supporting an unprecedented 256 performance cores (P-cores) in a single node, memory support for MRDIMMs at up to 8800 MT/s, and compatibility with next-generation SXM, OAM, and PCIe GPUs. This combination can drastically accelerate AI and compute workloads and significantly reduce the time and cost of large-scale AI training, high-performance computing, and complex data analytics tasks. Approved customers can secure early access to complete, full-production systems via Supermicro's Early Ship Program or test them remotely with Supermicro JumpStart.
"We continue to add to our already comprehensive Data Center Building Block solutions with these new platforms, which will offer unprecedented performance and new advanced features," said Charles Liang, president and CEO of Supermicro. "Supermicro is ready to deliver these high-performance solutions at rack scale with the industry's most comprehensive direct-to-chip liquid cooling and total rack integration services, and a global manufacturing capacity of up to 5,000 racks per month, including 1,350 liquid-cooled racks. With our worldwide manufacturing capabilities, we can deliver fully optimized solutions that accelerate time-to-delivery like never before, while also reducing TCO."
These new X14 systems feature completely re-designed architectures, including new 10U and multi-node form factors that support next-generation GPUs and higher CPU densities, as well as updated memory configurations with 12 memory channels per CPU and new MRDIMMs that provide up to 37% better memory performance compared to DDR5-6400 DIMMs. In addition, upgraded storage interfaces will support higher drive densities, and more systems will offer liquid cooling integrated directly into the server architecture.
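As a rough sanity check on the quoted memory-performance figure, the uplift implied by the two peak transfer rates (8800 MT/s for MRDIMMs versus DDR5-6400) can be computed directly. This is a back-of-envelope sketch based only on the rates above; real-world gains depend on the workload and system configuration:

```python
# Back-of-envelope check of the quoted MRDIMM memory-performance uplift,
# comparing peak transfer rates only (not measured application performance).
mrdimm_mts = 8800  # MRDIMM peak transfer rate, MT/s (from the announcement)
ddr5_mts = 6400    # standard DDR5-6400 transfer rate, MT/s

uplift_pct = (mrdimm_mts / ddr5_mts - 1) * 100
print(f"Peak transfer-rate uplift: {uplift_pct:.1f}%")  # prints 37.5%
```

The 37.5% ratio of raw transfer rates is consistent with the "up to 37%" performance claim above.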
The new additions to the Supermicro X14 family comprise more than ten new systems, several of which are completely new architectures in three distinct, workload-specific categories:
- GPU-optimized platforms designed for pure performance and enhanced thermal capacity to support the highest-wattage GPUs. System architectures have been built from the ground up for large-scale AI training, LLMs, generative AI, 3D media, and virtualization applications.
- High compute-density multi-nodes including SuperBlade® and the all-new FlexTwin™, which leverage direct-to-chip liquid cooling to significantly increase the number of performance cores in a standard rack compared to previous generations of systems.
- Market-proven Hyper rackmounts combining single- or dual-socket architectures with flexible I/O and storage configurations in traditional form factors to help enterprises and data centers scale up and out as their workloads evolve.
Supermicro X14 performance-optimized systems will support the soon-to-be-released Intel® Xeon® 6900 series processors with P-cores and will also offer socket compatibility with Intel Xeon 6900 series processors with E-cores in Q1'25. This designed-in flexibility allows systems to be optimized for either performance per core or performance per watt.
"The new Intel Xeon 6900 series processors with P-cores are our most powerful ever, with more cores and exceptional memory bandwidth and I/O to achieve new degrees of performance for AI and compute-intensive workloads," said Ryan Tabrah, VP and GM of Xeon 6 at Intel. "Our continued partnership with Supermicro will result in some of the industry's most powerful systems that are ready to meet the ever-heightening demands of modern AI and high-performance computing."
When configured with Intel Xeon 6900 series processors with P-cores, Supermicro systems support new FP16 instructions on the built-in Intel® AMX accelerator to further enhance AI workload performance. These systems include 12 memory channels per CPU with support for both DDR5-6400 and MRDIMMs at up to 8800 MT/s, support CXL 2.0, and offer more extensive support for high-density, industry-standard EDSFF E1.S and E3.S NVMe drives.
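The 12-channel, 8800 MT/s configuration above implies a substantial theoretical peak memory bandwidth per socket. A back-of-envelope estimate follows, assuming a standard 64-bit (8-byte) data path per DDR5 memory channel; this is an illustrative calculation, not a Supermicro or Intel specification, and sustained bandwidth in practice will be lower:

```python
# Back-of-envelope theoretical peak memory bandwidth per CPU socket.
channels = 12             # memory channels per CPU (from the announcement)
transfer_rate = 8800e6    # MRDIMM transfers per second (8800 MT/s)
bytes_per_transfer = 8    # 64-bit channel data path (assumption, per DDR5)

peak_gb_s = channels * transfer_rate * bytes_per_transfer / 1e9
print(f"Theoretical peak: {peak_gb_s:.1f} GB/s per socket")  # 844.8 GB/s
```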