NVIDIA GB200 NVL72

The NVIDIA GB200 NVL72 is a revolutionary liquid-cooled, rack-scale system designed to solve the most complex AI challenges.

Available now on Crusoe Cloud.

30x FASTER INFERENCE*

Second-generation Transformer Engine introduces FP4 AI, new Tensor Cores, and new microscaling formats for 30X faster real-time trillion-parameter LLM inference.

4x FASTER TRAINING*

Second-generation Transformer Engine featuring FP8 precision enables up to 4X faster training for LLMs at scale.

25x MORE ENERGY EFFICIENT*

Liquid-cooled GB200 NVL72 racks deliver 25X more performance at the same power compared to NVIDIA H100 air-cooled infrastructure.

6x FASTER DATA PROCESSING*

High-bandwidth memory, NVLink™-C2C, and Blackwell's new Decompression Engine deliver up to 6X faster database queries.

*Compared to NVIDIA H100 GPU

Ideal workloads for the NVIDIA GB200 NVL72

Advanced intelligent assistants

Accelerated compute, fast memory bandwidth, low latency, and high throughput are game-changing for inference applications that require real-time responsiveness or rapid iteration, such as large-context text generation, multimodal generative models, and agentic AI reasoning.

Enterprise data and analytics tasks

Blackwell's Decompression Engine speeds up key database queries and complex data processing tasks, making it easier to run queries over large compressed data stores or transform large volumes of data in real time.

Large-scale simulations

Low-latency interconnects and unified memory improve communication between GPUs, reducing the synchronization overhead of simulations across industries such as autonomous vehicles, healthcare and life sciences, and more.

Build the future faster with Crusoe Cloud

Up to 20 times faster and 81% less expensive than traditional cloud providers.

Unparalleled performance

Scale ambitiously and fearlessly knowing your AI workloads are backed by peerless infrastructure.

Accelerated breakthroughs

Focus on innovation not infrastructure with intuitive, AI-native systems designed to keep you building.

Abundant compute

Achieve your AI potential with vast compute and energy resources.

"Crusoe has played an important role in scaling our GPU workloads, enabling us to train large language models efficiently and reliably. With Crusoe, our training jobs were able to run on hundreds of GPUs for weeks to months at a time. We're delighted with the level of support that we received."
Alex Smola, CEO

"Crusoe provides the latest NVIDIA hardware, extraordinary support, and is overall a great partner in enabling our data, training, and inference infrastructure. Crusoe is consistently accommodating to our needs and is not afraid to dive deep in helping us solve hard problems."
Philip Petrakian, Member of Technical Staff

"Crusoe Cloud is highly collaborative in incorporating our feedback and requests into their feature and product roadmap."
Aleks Kamko, AI Researcher

"Windsurf's NVIDIA H100 Tensor Core GPUs on Crusoe have been incredibly reliable, with a cluster uptime of 99.98%."
Varun Mohan, Co-Founder & CEO

"With Crusoe, we scaled our capacity 5x within hours to serve all of our Oasis users across Europe. This enabled Oasis to seamlessly scale to over 2 million users in just 4 days."
Dean Leitersdorf, Co-Founder & CEO

"Crusoe has been an outstanding partner from the get-go - all of our in-house machine learning models have been trained on Crusoe Cloud. They provide a level of quality of service, responsiveness, and support for early access programs that we couldn't find with any other cloud provider."
Prasanth Veerina, Co-Founder

"Every founder in AI should work with Crusoe. They move as fast as a small startup, but provide the robust, scaled infrastructure necessary to serve a real-time model to hundreds of thousands of happy users."
Oliver Cameron, CEO

Are you ready to build something amazing?