DenseMAX Appliance

An enterprise-grade AI server engineered for training, fine-tuning, and high-throughput inference. Pack up to 8 NVIDIA Blackwell GPUs and scale confidently with expansive PCIe Gen5 I/O, redundant power, and datacenter-ready manageability.

Compute

Dual AMD EPYC™ 9005/9004/97x4

Accelerators

Up to 8× RTX 5090 / RTX PRO 6000 Blackwell

Memory

24× DDR5 ECC RDIMM

Power

5× 2000 W

Networking

400 Gbps InfiniBand

Dual EPYC Compute

2× AMD EPYC™ 9005/9004/97x4 processors deliver exceptional core density and memory bandwidth.

8× GPU Ready

Eight PCIe Gen5 x16 slots support up to 8× NVIDIA RTX 5090 or RTX PRO 6000 Blackwell GPUs.
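
As a quick post-deployment sanity check, the following sketch (assuming PyTorch with CUDA support is installed on the appliance) confirms that all eight GPUs are visible to the framework and reports their memory:

```python
# Minimal sanity check (assumes PyTorch with CUDA support is installed):
# confirm that all eight GPUs are visible and report their names and VRAM.
import torch

def list_gpus() -> None:
    count = torch.cuda.device_count()
    print(f"Visible CUDA devices: {count}")
    for i in range(count):
        props = torch.cuda.get_device_properties(i)
        vram_gb = props.total_memory / 1024**3
        print(f"  GPU {i}: {props.name}, {vram_gb:.0f} GB VRAM")

if __name__ == "__main__":
    list_gpus()
```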

High-Capacity Memory

24× DDR5 ECC RDIMM slots keep large models and batches in host memory for faster throughput.
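
By way of illustration, the sketch below (assuming PyTorch; the tensor sizes, batch size, and worker count are placeholder values rather than tuned recommendations) shows how a host-RAM-resident dataset plus a pinned-memory DataLoader turns that DDR5 capacity into faster host-to-GPU transfers:

```python
# Illustrative only: cache a dataset in host RAM and use pinned memory
# so batches transfer to the GPU without an extra staging copy.
# Shapes, batch size, and worker count are placeholder values.
import torch
from torch.utils.data import TensorDataset, DataLoader

features = torch.randn(1_000_000, 512)   # resides in DDR5 host memory
labels = torch.randint(0, 10, (1_000_000,))
dataset = TensorDataset(features, labels)

loader = DataLoader(
    dataset,
    batch_size=4096,
    shuffle=True,
    num_workers=8,
    pin_memory=True,                      # page-locked buffers for fast host-to-device copies
)

device = torch.device("cuda:0")
for batch_features, batch_labels in loader:
    batch_features = batch_features.to(device, non_blocking=True)
    batch_labels = batch_labels.to(device, non_blocking=True)
    # ... forward/backward pass would go here ...
    break
```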

NVMe for Speed

2× internal M.2 (PCIe 3.0 ×4) and 4× U.2 bays (PCIe 5.0 ×4) for blazing-fast dataset staging and checkpointing.
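
For example, checkpointing to the NVMe array from PyTorch is just a matter of writing to wherever the volume is mounted; in the sketch below, /mnt/nvme is a hypothetical mount point, not a preconfigured path:

```python
# Sketch: write and restore a training checkpoint on the NVMe array.
# /mnt/nvme is a hypothetical mount point; substitute the actual path
# where the U.2 volume is mounted on your system.
import os
import torch
import torch.nn as nn

CHECKPOINT_PATH = "/mnt/nvme/checkpoints/run01_step1000.pt"
os.makedirs(os.path.dirname(CHECKPOINT_PATH), exist_ok=True)

model = nn.Linear(512, 10)                # stand-in for a real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Save model and optimizer state together so training can resume exactly.
torch.save(
    {"step": 1000, "model": model.state_dict(), "optimizer": optimizer.state_dict()},
    CHECKPOINT_PATH,
)

# Restore later, e.g. after a planned restart.
state = torch.load(CHECKPOINT_PATH, map_location="cpu")
model.load_state_dict(state["model"])
optimizer.load_state_dict(state["optimizer"])
print(f"resumed from step {state['step']}")
```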

400 Gbps Fabric

1× 400 Gbps InfiniBand port, plus 2× PCIe 5.0 ×8 networking slots (combinable into a single ×16) for scale-out clusters.
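
In practice, scale-out on this fabric usually means launching one process per GPU across nodes. The sketch below (assuming PyTorch with the NCCL backend, which uses the InfiniBand transport when available; the rendezvous endpoint, node counts, and script name are placeholders) initializes the process group and runs a small all-reduce to verify end-to-end connectivity:

```python
# Sketch: multi-node process-group setup with NCCL, which rides the
# InfiniBand fabric via RDMA when available. Intended to be launched
# with torchrun, e.g.:
#   torchrun --nnodes=2 --nproc-per-node=8 \
#            --rdzv-backend=c10d --rdzv-endpoint=<head-node>:29500 train.py
# The endpoint and node counts are placeholders for illustration.
import os
import torch
import torch.distributed as dist

def main() -> None:
    dist.init_process_group(backend="nccl")      # NCCL selects the IB transport automatically
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)

    # Simple all-reduce to verify the fabric is working end to end.
    x = torch.ones(1, device=f"cuda:{local_rank}")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)
    print(f"rank {dist.get_rank()}: all-reduce result = {x.item()}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```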

Industrial Power

5× 2000 W PSUs provide redundancy and headroom under heavy mixed training and inference loads.

Technical Specifications

Everything you need for state-of-the-art generative AI workloads.

DenseMAX Studio — Pre-installed

Your appliance arrives with DenseMAX Studio pre-installed for rapid time-to-value: project templates, model serving, fine-tuning pipelines, evaluation, guardrails, observability, and a collaborative data/model hub — all optimized for NVIDIA GPUs.

Get a guided tour

Frequently Asked Questions

Ready to deploy enterprise-grade AI on-prem?

Request a guided demo or talk to our team about configurations, pricing, and delivery.