Just two months ago we wrote, “One does not simply buy a supercomputer.” But in the rapidly changing world of IT, what was true in September may no longer be true in November.
Today at SC19, the supercomputing conference being held in Denver, data center provider ScaleMatrix introduced appliances that can deliver up to 13 petaFLOPS of performance. With cooling built in, they’re plug-and-play out of the box and don’t need to be housed in a specially designed data center.
This could be a game changer. Most high performance computing systems required for running machine learning and other AI workloads can’t be located in a typical data center without major modifications to the facility’s power distribution and cooling systems. Being GPU-intensive, HPC systems can push density up to about 30kW per rack, at least five times higher than the average data center load of 3kW to 5kW per rack.
But ScaleMatrix’s new appliance is self-sufficient.
“All we need is a roof, floor space, and a place to plug the appliance in, and we can turn on an enterprise-class data center capable of supporting a significant artificial intelligence or high performance computing workload,” Chris Orlando, ScaleMatrix’s co-founder and CEO, told DCK.
Called “AI Anywhere,” the product was developed in a three-way collaboration between ScaleMatrix, which operates high-density colocation data centers for AI workloads in Houston and San Diego; chipmaker Nvidia; and Microway, a provider of computer clusters, servers, and workstations for HPC and AI. It’s available in two single-rack versions, each employing one of Nvidia’s two DGX supercomputer models, designed specifically for machine learning and AI workloads.
One model contains 13 DGX-1 units, delivering a payload of 13 petaFLOPS, with the other containing four DGX-2 systems, delivering 8 petaFLOPS. Both units adhere to DGX-POD reference architecture designs (Nvidia’s design for…