Altair provides NVIDIA-powered GPU solutions that scale from small to large computing requirements across a wide range of application frameworks, drawing on the full range of NVIDIA core technologies. Selecting the GPU solution best suited to a user's context is key to business success.

GPUs

GPUs offer the compute power of up to 100 CPUs in a single GPU, enabling scientists to solve problems that were once thought impossible.

19" Rackmount Server

Inspired by the demands of deep learning and analytics, NVIDIA rackmount systems are essential instruments for AI research, hosted in a data centre to serve teams of researchers.

Workstations

NVIDIA-powered data science workstations give data scientists everything they need to tackle complex workflows in any location, without requiring data centre installations.

GPU Model Information

Where two figures are shown for a Tensor Core metric (for example, 165 TF | 330 TF), the second figure is peak performance with structured sparsity.

NVIDIA A30

Peak FP64: 5.2 TF
Peak FP64 Tensor Core: 10.3 TF
Peak FP32: 10.3 TF
TF32 Tensor Core: 82 TF | 165 TF
BFLOAT16 Tensor Core: 165 TF | 330 TF
Peak FP16 Tensor Core: 165 TF | 330 TF
Peak INT8 Tensor Core: 330 TOPS | 661 TOPS
Peak INT4 Tensor Core: 661 TOPS | 1,321 TOPS
Media engines: 1 optical flow accelerator (OFA), 1 JPEG decoder (NVJPEG), 4 video decoders (NVDEC)
GPU Memory: 24GB HBM2
GPU Memory Bandwidth: 933 GB/s
Interconnect: PCIe Gen4 64 GB/s; third-generation NVIDIA® NVLink® 200 GB/s
Form Factor: 2-slot, full height, full length (FHFL)
Max thermal design power (TDP): 165 W
Multi-Instance GPU (MIG): 4 MIGs @ 6GB each, 2 MIGs @ 12GB each, or 1 MIG @ 24GB
Virtual GPU (vGPU) software support: NVIDIA AI Enterprise for VMware, NVIDIA Virtual Compute Server
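
The MIG row above describes how a single A30 can be partitioned into isolated GPU instances. As a minimal sketch only (it assumes the NVIDIA driver plus the pynvml Python bindings, neither of which is part of the products listed here), the following snippet enumerates the GPUs in a host and reports each card's total memory and whether MIG mode is currently enabled:

    import pynvml

    # Enumerate NVIDIA GPUs via NVML and report memory size and MIG mode.
    # Assumes the NVIDIA driver and the pynvml package are installed.
    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(handle)
            if isinstance(name, bytes):   # older pynvml releases return bytes
                name = name.decode()
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            try:
                current, _pending = pynvml.nvmlDeviceGetMigMode(handle)
                mig = "enabled" if current else "disabled"
            except pynvml.NVMLError:      # raised on GPUs without MIG support
                mig = "not supported"
            print(f"GPU {i}: {name}, {mem.total / 1024**3:.0f} GiB, MIG {mig}")
    finally:
        pynvml.nvmlShutdown()

The MIG instances themselves are created and sized by the administrator (for example into the 4 x 6GB, 2 x 12GB, or 1 x 24GB layouts listed above); the sketch only reports the current mode.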

NVIDIA A40

GPU architecture: NVIDIA Ampere architecture
GPU memory: 48 GB GDDR6 with ECC
Memory bandwidth: 696 GB/s
Interconnect interface: NVIDIA® NVLink® 112.5 GB/s (bidirectional); PCIe Gen4 31.5 GB/s (bidirectional)
NVIDIA Ampere architecture-based CUDA Cores: 10,752
NVIDIA second-generation RT Cores: 84
NVIDIA third-generation Tensor Cores: 336
Peak FP32 TFLOPS (non-Tensor): 37.4
Peak FP16 Tensor TFLOPS with FP16 Accumulate: 149.7 | 299.4
Peak TF32 Tensor TFLOPS: 74.8 | 149.6
RT Core performance TFLOPS: 73.1
Peak BF16 Tensor TFLOPS with FP32 Accumulate: 149.7 | 299.4
Peak INT8 Tensor TOPS: 299.3 | 598.6
Peak INT4 Tensor TOPS: 598.7 | 1,197.4
Form factor: 4.4" (H) x 10.5" (L), dual slot
Display ports: 3x DisplayPort 1.4; supports NVIDIA Mosaic and Quadro® Sync
Max power consumption: 300 W
Power connector: 8-pin CPU
Thermal solution: Passive
Virtual GPU (vGPU) software support: NVIDIA vPC/vApps, NVIDIA RTX Virtual Workstation, NVIDIA Virtual Compute Server
vGPU profiles supported: See the Virtual GPU Licensing Guide
NVENC | NVDEC: 1x | 2x (includes AV1 decode)
Secure and measured boot with hardware root of trust: Yes
NEBS ready: Level 3
Compute APIs: CUDA, DirectCompute, OpenCL™, OpenACC®
Graphics APIs: DirectX 12.0, Shader Model 5.1, OpenGL 4.6, Vulkan 1.1
MIG support: No

NVIDIA A100

Peak double precision: up to 9.7 TeraFLOPS (FP64)
Peak single precision: up to 19.5 TeraFLOPS (FP32)
Number of GPUs: 1x GA100
Deep Learning: up to 312 TeraFLOPS
Number of CUDA cores: 6912
Number of Tensor cores: 432
Memory size per board (HBM2): 40GB at 1.6 TB/s
Max Power: 250W (PCIe) / 400W (SXM-4)
Features: Ampere Architecture, Multi-Instance GPU (MIG) technology, third-generation Tensor Core technology
System: Servers, NVLink / PCIe Gen4 / SXM-4
Cooling: Passive
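
The A100's deep learning figure above comes from its third-generation Tensor Cores, which frameworks reach through TF32 and mixed-precision math. As an illustration only (PyTorch is assumed here purely for the example; it is not part of the specification above), this is the usual way FP32 matrix math is allowed onto TF32 Tensor Cores on Ampere-class GPUs such as the A100, A30, A40, and RTX A-series:

    import torch

    # Allow FP32 matmuls and cuDNN convolutions to use TF32 Tensor Cores
    # (takes effect on Ampere-class GPUs; it is a no-op elsewhere).
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.allow_tf32 = True

    device = "cuda" if torch.cuda.is_available() else "cpu"
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)
    c = a @ b   # runs on TF32 Tensor Cores when executed on an Ampere GPU
    print(c.shape, c.dtype)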

NVIDIA QUADRO GV100

Peak double precision: Up to 7.4 TeraFLOPS
Peak single precision: Up to 14.8 TeraFLOPS
Deep Learning: Up to 118.5 TeraFLOPS
Number of GPUs: 1x GV100
Number of CUDA cores: 5120
Number of Tensor cores: 640
Memory size per board (HBM2): 32GB at 870 GB/s
Max Power: 250W
Features: Volta Architecture pairing NVIDIA® CUDA® and Tensor Cores, second-generation NVIDIA® NVLink
System: Workstation
Cooling: Active

NVIDIA RTX A4000

GPU memory: 16 GB GDDR6
Memory interface: 256-bit
Memory bandwidth: 448 GB/s
Error-correcting Code (ECC): Yes
NVIDIA Ampere architecture-based CUDA Cores: 6,144
NVIDIA third-generation Tensor Cores: 192
NVIDIA second-generation RT Cores: 48
Single-precision performance: 19.2 TFLOPS
RT Core performance: 37.4 TFLOPS
Tensor performance: 153.4 TFLOPS
System interface: PCI Express 4.0 x16
Power consumption: 140 W (total board power)
Thermal solution: Active
Form factor: 4.4” H x 10.5” L, single slot
Display connectors: 4x DisplayPort 1.4a
Max simultaneous displays: 4x 4096 x 2160 @ 120 Hz, 4x 5120 x 2880 @ 60 Hz, 2x 7680 x 4320 @ 60 Hz
Power connector: 1x 6-pin PCIe
Encode/decode engines: 1x encode, 1x decode (+AV1 decode)
VR ready: Yes
Graphics APIs: DirectX 12.0, Shader Model 5.1, OpenGL 4.6, Vulkan 1.2
Compute APIs: CUDA, DirectCompute, OpenCL

NVIDIA RTX A5000

GPU memory: 24 GB GDDR6
Memory interface: 384-bit
Memory bandwidth: 768 GB/s
Error-correcting Code (ECC): Yes
NVIDIA Ampere architecture-based CUDA Cores: 8,192
NVIDIA third-generation Tensor Cores: 256
NVIDIA second-generation RT Cores: 64
Single-precision performance: 27.8 TFLOPS
RT Core performance: 54.2 TFLOPS
Tensor performance: 222.2 TFLOPS
NVIDIA NVLink: Low-profile bridges connect two NVIDIA RTX A5000 GPUs
NVIDIA NVLink bandwidth: 112.5 GB/s (bidirectional)
System interface: PCI Express 4.0 x16
Power consumption: 230 W (total board power)
Thermal solution: Active
Form factor: 4.4” H x 10.5” L, dual slot, full height
Display connectors: 4x DisplayPort 1.4a
Max simultaneous displays: 4x 4096 x 2160 @ 120 Hz, 4x 5120 x 2880 @ 60 Hz, 2x 7680 x 4320 @ 60 Hz
Power connector: 1x 8-pin PCIe
Encode/decode engines: 1x encode, 2x decode (+AV1 decode)
VR ready: Yes
vGPU software support: NVIDIA vPC/vApps, NVIDIA RTX Virtual Workstation, NVIDIA Virtual Compute Server
vGPU profiles supported: See the Virtual GPU Licensing Guide
Graphics APIs: DirectX 12.0, Shader Model 5.1, OpenGL 4.6, Vulkan 1.2
Compute APIs: CUDA, DirectCompute, OpenCL

NVIDIA RTX A6000

GPU memory: 48 GB GDDR6
Memory interface: 384-bit
Memory bandwidth: 768 GB/s
Error-correcting Code (ECC): Yes
NVIDIA Ampere architecture-based CUDA Cores: 10,752
NVIDIA third-generation Tensor Cores: 336
NVIDIA second-generation RT Cores: 84
Single-precision performance: 38.7 TFLOPS
RT Core performance: 75.6 TFLOPS
Tensor performance: 309.7 TFLOPS
NVIDIA NVLink: Connects two NVIDIA RTX A6000 GPUs
NVIDIA NVLink bandwidth: 112.5 GB/s (bidirectional)
System interface: PCI Express 4.0 x16
Power consumption: 300 W (total board power)
Thermal solution: Active
Form factor: 4.4” H x 10.5” L, dual slot, full height
Display connectors: 4x DisplayPort 1.4a
Max simultaneous displays: 4x 4096 x 2160 @ 120 Hz, 4x 5120 x 2880 @ 60 Hz, 2x 7680 x 4320 @ 60 Hz
Power connector: 1x 8-pin CPU
Encode/decode engines: 1x encode, 2x decode (+AV1 decode)
VR ready: Yes
vGPU software support: NVIDIA vPC/vApps, NVIDIA RTX Virtual Workstation, NVIDIA Virtual Compute Server
vGPU profiles supported: 1 GB, 2 GB, 3 GB, 4 GB, 6 GB, 8 GB, 12 GB, 16 GB, 24 GB, 48 GB
Graphics APIs: DirectX 12.0, Shader Model 5.1, OpenGL 4.6, Vulkan 1.1
Compute APIs: CUDA, DirectCompute, OpenCL
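
Where two RTX A5000 or RTX A6000 cards are NVLink-bridged as described above, applications can use direct peer-to-peer access between the two frame buffers. As a hedged sketch (again assuming PyTorch, which this page does not prescribe), the check below reports whether each visible GPU pair can access the other's memory directly, whether the link is NVLink or PCIe:

    import torch

    # Report peer-to-peer accessibility between every pair of visible GPUs.
    # True for NVLink-bridged cards and for many PCIe topologies as well.
    count = torch.cuda.device_count()
    for i in range(count):
        for j in range(count):
            if i != j:
                ok = torch.cuda.can_device_access_peer(i, j)
                print(f"GPU {i} ({torch.cuda.get_device_name(i)}) -> GPU {j}: "
                      f"{'peer access available' if ok else 'no peer access'}")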

NVIDIA TESLA® V100

Peak double precision: up to 7.8 TeraFLOPS
Peak single precision: up to 15.7 TeraFLOPS
Deep Learning: up to 125 TeraFLOPS
Number of GPUs: 1x GV100
Number of CUDA cores: 5120
Number of Tensor cores: 640
Memory size per board (HBM2): 16GB/32GB at 900 GB/s
Max Power: 250W (PCIe) / 300W (SXM-2)
Features: Volta Architecture pairing NVIDIA® CUDA® and Tensor Cores to deliver the performance of an AI supercomputer in a single GPU
System: Servers, NVLink / PCIe Gen3 / SXM-2
Cooling: Passive

19" Rackmount Server Model Information

NVIDIA® DGX A100 320GB

GPU: 8x NVIDIA A100 Tensor Core 40GB (Total 320GB, 5 petaFLOPS AI)
CPU: 2x 64-core AMD Rome 7742, 2.25GHz (Total 128 cores, 3.4GHz max boost)
RAM: 1TB
Storage: OS: 2x 1.92TB NVMe SSDs; Data: 15TB (4x 3.84TB) NVMe SSDs
Network: Dual 10/25/40/50/100/200GbE, 8x 200Gb/s HDR InfiniBand
Operating System: Ubuntu Linux OS (see datasheet for details)

NVIDIA® DGX A100 640GB

GPU: 8x NVIDIA A100 Tensor Core 80GB (Total 640GB, 5 petaFLOPS AI)
CPU: 2x 64-core AMD Rome 7742, 2.25GHz (Total 128 cores, 3.4GHz max boost)
RAM: 2TB
Storage: OS: 2x 1.92TB NVMe SSDs; Data: 30TB (8x 3.84TB) NVMe SSDs
Network: Dual 10/25/40/50/100/200GbE, 8x 200Gb/s HDR InfiniBand
Operating System: Ubuntu Linux OS (see datasheet for details)

NVIDIA QUADRO RTX Server

The NVIDIA QUADRO RTX Server is a highly configurable server reference design that delivers the power needed to boost desktop rendering performance, accelerate offline rendering, and provision high-performance virtual workstations, all in a single, flexible solution.

The following is an example configuration:

GPU: 4x NVIDIA RTX A6000 48GB GPUs
CPU: 2x AMD EPYC 7452, 2.35GHz, 32C/64T
RAM: 256GB DDR4
Storage: 4x 1.92 TB SSD RAID 0
Networking: Dual 10 GbE

Altair TYPHOON A100

GPU: 10x NVIDIA® Tesla™ A100 (40GB – PCIe)
CPU: 2x AMD EPYC 7452, 2.35GHz, 32C/64T
RAM: 1TB DDR4 RAM
Graphics: VGA on board
IPMI: Dual 10 GbE
Storage: 2x 3.5TB SSD
Power supply: 3,200 W (2+1 redundancy), 80 PLUS Platinum efficiency
Chassis: 4U 19” rackmount server with high-efficiency cooling
Operating System: Windows, Linux

Altair 2U 4XA100 - 2xA32-256

GPU: 4x NVIDIA® Tesla™ A100 (40GB PCIe)
CPU: 2x AMD EPYC 7452, 2.35GHz, 32C/64T
RAM: 256GB DDR4 RAM
Graphics: VGA on board
IPMI: IPMI v2.0 compliant
Storage: 3.5TB SSD
Power supply: 2000W redundant, Gold level
Chassis: 2U 19” rackmount chassis with high-efficiency cooling
Operating System: Windows, Linux

Altair 1U TS 2XA100 - 1xA32-128

GPU: 2x NVIDIA® Tesla™ A100 (40GB PCIe)
CPU: 1x AMD EPYC 7452, 2.35GHz, 32C/64T
RAM: 128GB DDR4 RAM
Graphics: VGA on board
IPMI: IPMI v2.0 compliant
Storage: 3.5TB SSD
Power supply: 1600W redundant, Gold level
Chassis: 1U 19” rackmount chassis with high-efficiency cooling
Operating System: Windows, Linux

Altair 2U TS 4XA100 SXM - 2xA32-512

GPU: 4x NVIDIA® Tesla™ A100 (40GB/80GB SXM)
CPU: 2x AMD EPYC 7452, 2.35GHz, 32C/64T
RAM: 512GB DDR4 RAM
Graphics: VGA on board
IPMI: IPMI v2.0 compliant
Storage: 3.5TB SSD
Power supply: 1600W redundant, Gold level
Chassis: 2U 19” rackmount chassis with high-efficiency cooling
Operating System: Windows, Linux

Workstation Model Information

Single Socket Workstations

NVIDIA® DGX Station™ A100

GPU: 4x NVIDIA A100 Tensor Core 80GB cards (Total 320GB, 2.5 petaFLOPS AI)
CPU: Single AMD 7742, 64 cores, 2.25 GHz (base) – 3.4 GHz (max boost)
RAM: 512GB DDR4 RAM
Storage: OS: 1x 1.92 TB SSD; Data: 3x 1.92 TB SSD RAID 0
Network: Dual 10 Gb LAN
Graphics: 4x Mini DisplayPort
Acoustics: 35 dB (water cooling)
Power supply: 1,500 W
Software: Ubuntu Desktop Linux OS, DGX Recommended GPU Driver, CUDA Toolkit
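
A quick way to confirm that the software row above is in place on a freshly provisioned station is to check that the driver exposes all four GPUs and that the framework build reports a CUDA version. The snippet below is a minimal sanity check and assumes PyTorch has been installed on top of the stock DGX software image:

    import torch

    # Minimal sanity check of the GPU software stack (assumes PyTorch is installed).
    print("CUDA available:", torch.cuda.is_available())
    print("CUDA version PyTorch was built against:", torch.version.cuda)
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GiB")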

Dual Socket Workstations

Altair TWS 4xGV100-2xI16-256 (Deep Learning WS)

GPU: 4x NVIDIA GV100 32GB GPUs
CPU: 2x Intel Xeon Gold 6226R, 2.90GHz, 16C/32T
RAM: 256GB DDR4-2400 ECC
Graphics: VGA on board
IPMI: IPMI v2.0 compliant
Storage: 4x 1TB Enterprise SSD
Power supply: 2000W redundant, Gold level
Chassis: Workstation
Operating System: Windows, Linux

Altair TWS 4xRTXA6000-2xA32-256 (Virtualization WS)

GPU: 4x NVIDIA RTX A6000 48GB GPUs
CPU: 2x AMD EPYC 7452, 2.35GHz, 32C/64T
RAM: 256GB DDR4-2400 ECC
Graphics: VGA on board
IPMI: IPMI v2.0 compliant
Storage: 4x 1TB Enterprise SSD
Power supply: 2000W redundant, Gold level
Chassis: Workstation
Operating System: Windows, Linux

Latest News

Rendering

Accelerate your rendering with the new Quadro RTX Server

Edge Computing

Bringing real-time AI to the edge with the new PNY EGX server

AI

Take your AI research to new heights