AMD Instinct™ MI300X GPU

The AMD Instinct MI300X GPU is a leading-edge discrete accelerator designed to deliver leadership performance and efficiency for demanding AI and HPC applications.

Available now on Crusoe Cloud.

13.7x
AI/ML PERFORMANCE

Built on next-gen AMD CDNA™ architecture, MI300X delivers up to 13.7x peak AI/ML performance compared to AMD Instinct MI250X.

192
GB HBM3 MEMORY CAPACITY

The 192 GB of HBM3 memory delivers 5.3 TB/s of local bandwidth, with 128 GB/s of bidirectional bandwidth directly connecting each GPU.

304
GPU COMPUTE UNITS

The accelerator is designed with 304 high-throughput compute units, AI-specific features such as new data-type support, photo and video decoding, and substantial on-package memory.

Ideal workloads for AMD Instinct MI300X GPUs

High-performance computing (HPC)

Power the top two supercomputers with the AMD ROCm open software platform, which supports HPC apps in fields like astrophysics, climate and weather modeling, computational fluid dynamics, genomics, and molecular dynamics.

Generative AI and LLMs

Train massive AI models with high-speed, ultra-low latency inference for demanding applications like natural language processing (NLP) and computer vision.

AI Framework deployment

The AMD ROCm open software environment makes it easy to deploy accelerated AI servers with a broad range of AI support for leading compilers, libraries, and models.
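One practical consequence of this framework support: on ROCm builds of PyTorch, the familiar `torch.cuda` namespace maps to AMD's HIP backend, so existing CUDA-style code runs on MI300X GPUs without changes. A minimal sketch, assuming a ROCm build of PyTorch is installed:

```python
import torch

# On a ROCm build of PyTorch, torch.cuda is backed by HIP, so
# CUDA-style device code works unchanged on AMD GPUs like the MI300X.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print("Running on:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("No GPU found; falling back to CPU")

# A simple matrix multiply dispatched to whichever device was selected.
x = torch.randn(1024, 1024, device=device)
y = x @ x
print(y.shape)  # torch.Size([1024, 1024])
```

The same script runs unmodified on NVIDIA or AMD hardware, which is what makes porting existing training and inference code to ROCm-based servers straightforward.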

Build the future faster with Crusoe Cloud

Up to 20 times faster and 81% less expensive than traditional cloud providers.

Unparalleled performance

Scale ambitiously and fearlessly knowing your AI workloads are backed by peerless infrastructure.

Accelerated breakthroughs

Focus on innovation not infrastructure with intuitive, AI-native systems designed to keep you building.

Abundant compute

Achieve your AI potential with vast compute and energy resources.
Crusoe provides the latest NVIDIA hardware and extraordinary support, and is overall a great partner in enabling our data, training, and inference infrastructure. Crusoe is consistently accommodating to our needs and is not afraid to dive deep in helping us solve hard problems.
Philip Petrakian
Member of Technical Staff
Crusoe has played an important role in scaling our GPU workloads, enabling us to train large language models efficiently and reliably. With Crusoe, our training jobs were able to run on hundreds of GPUs for durations of several weeks to months. We're delighted with the level of support that we received.
Alex Smola
CEO
Every founder in AI should work with Crusoe. They move as fast as a small startup, but provide the robust, scaled infrastructure necessary to serve a real-time model to hundreds of thousands of happy users.
Oliver Cameron
CEO
With Crusoe, we scaled our capacity 5x within hours to serve all of our Oasis users across Europe. This enabled Oasis to seamlessly scale to over 2 million users in just 4 days.
Dean Leitersdorf
Co-Founder & CEO
Crusoe Cloud is highly collaborative in incorporating our feedback and requests into their feature and product roadmap.
Aleks Kamko
AI Researcher
Crusoe has been an outstanding partner from the get-go - all of our in-house machine learning models have been trained on Crusoe Cloud. They provide a level of quality of service, responsiveness, and support for early access programs that we couldn't find with any other cloud provider.
Prasanth Veerina
Co-Founder
Windsurf's NVIDIA H100 Tensor Core GPUs on Crusoe have been incredibly reliable with a cluster uptime of 99.98%.
Varun Mohan
Co-Founder & CEO

Are you ready to build something amazing?