Financial firm Block, Inc. is planning to deploy an Nvidia DGX SuperPOD with DGX GB200 systems.
According to the company formerly known as Square, Inc., this will be the first such deployment in North America.
The liquid-cooled GB200 SuperPOD features 36 GB200 Superchips per rack, with each GB200 pairing one Grace CPU with two Blackwell GPUs.
Within each rack, the GPUs are connected as one via fifth-generation Nvidia NVLink, delivering 1.4 exaFLOPS of AI performance, 30 terabytes (TB) of fast memory, and 130 terabytes per second (TBps) of bidirectional GPU bandwidth.
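For readers wanting to place those rack-level figures in context, here is a minimal back-of-envelope sketch in Python using only the numbers quoted above (36 superchips per rack, one Grace CPU and two Blackwell GPUs per superchip); the variable names and the per-superchip average are illustrative assumptions, not Nvidia specifications.

```python
# Back-of-envelope arithmetic for a single GB200 rack,
# using only the figures quoted in the article. Names are illustrative.

SUPERCHIPS_PER_RACK = 36          # GB200 Superchips per rack (quoted)
CPUS_PER_SUPERCHIP = 1            # one Grace CPU per GB200 (quoted)
GPUS_PER_SUPERCHIP = 2            # two Blackwell GPUs per GB200 (quoted)

RACK_AI_EXAFLOPS = 1.4            # quoted rack-level AI performance
RACK_FAST_MEMORY_TB = 30          # quoted rack-level fast memory
RACK_NVLINK_BANDWIDTH_TBPS = 130  # quoted bidirectional GPU bandwidth

gpus_per_rack = SUPERCHIPS_PER_RACK * GPUS_PER_SUPERCHIP  # 72 GPUs
cpus_per_rack = SUPERCHIPS_PER_RACK * CPUS_PER_SUPERCHIP  # 36 CPUs

# Simple even split of the quoted rack total, just for scale.
avg_fast_memory_per_superchip_gb = RACK_FAST_MEMORY_TB * 1000 / SUPERCHIPS_PER_RACK

print(f"GPUs per rack: {gpus_per_rack}")
print(f"CPUs per rack: {cpus_per_rack}")
print(f"Average fast memory per superchip: ~{avg_fast_memory_per_superchip_gb:.0f} GB")
```

Run as-is, the sketch simply confirms the 72-GPU, 36-CPU rack count implied by the figures above.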
“The industry, and the world, is undergoing a seismic shift with adopting AI tools. At Block, we think it’s essential not only to apply AI to existing problems, but also to explore, learn, and build in the open so that we can advance the frontier of AI in a way that truly levels the playing field for our customers and community,” said Dhanji R. Prasanna, CTO of Block.
“We’re excited to deploy an Nvidia Grace Blackwell DGX SuperPOD and start exploring novel solutions for our customers. We’re committed to an open source approach, sharing our learnings and results along the way.”
“As AI models grow in complexity and scale, businesses need powerful infrastructure that can match the pace of innovation,” added Charlie Boyle, vice president, DGX platforms, Nvidia. “With Nvidia DGX GB200 systems, Block engineering and research teams can develop frontier open source AI models that can tackle complex, real-world challenges with state-of-the-art AI supercomputing.”
Before deciding to deploy an Nvidia DGX SuperPOD GB200 system, Block used Lambda's 1-Click Clusters, now available with Blackwell GPUs, to trial its hypotheses on hundreds of interconnected Nvidia GPUs.
The cluster will be deployed in an unspecified Equinix data center described as "AI-ready." DCD has contacted Block for more information.
“Frontier models represent the cutting edge of artificial intelligence technology, pushing the boundaries of what AI can achieve, and they require the latest in AI chips — like Nvidia's new DGX SuperPOD,” said Jon Lin, chief business officer at Equinix. “By deploying at Equinix’s neutral, cloud-adjacent platform, companies like Block can unlock expanded compute scale and flexibility. This enables the customization of AI solutions with a choice of infrastructure, cloud, models, and cooling at our neutral exchange.”
Block, Inc. is an American technology and financial services company. It was founded by CEO Jack Dorsey in 2009, initially as a provider of point-of-sale systems. In 2024, the company began developing bitcoin mining chips, and in July 2024 it signed an agreement with Core Scientific to provide the latter with hardware.
In November 2024, DeepL announced it would be deploying an Nvidia DGX SuperPOD with GB200 racks at EcoDataCenter's facility in Sweden.
Rollouts of GB200 GPUs are ramping up rapidly. This month, OpenAI and Oracle announced plans to deploy 64,000 Nvidia GB200s at the Stargate data center in Abilene, Texas, by the end of 2026. This week alone, CoreWeave announced it will deploy a cluster of the chips at a Bulk Infrastructure data center in Norway, and Eviden and Supermicro entered into a strategic collaboration to distribute Supermicro’s Nvidia GB200 NVL72 AI SuperCluster offering across Europe, India, the Middle East, and South America.