PNY NVH100NVLTCGPU-KIT H100 NVL Graphics Card, 94 GB HBM3, PCIe 5.0 x16, Passive Cooling, 2x Slot
Description
NVIDIA H100 NVL
Unprecedented Performance, Scalability, and Security for Every Data Center
The H100 NVL is designed to scale support for large language models (LLMs) in mainstream PCIe-based server systems. With increased raw performance, larger and faster HBM3 memory, and NVLink connectivity via bridges, mainstream systems configured with 8x H100 NVL outperform HGX A100 systems by up to 12x in GPT3-175B LLM throughput.
The H100 NVL enables standard mainstream servers to deliver high-performance generative AI inference for large language models, while giving partners and solution providers the fastest time to market and straightforward scale-out.
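As a rough illustration of the bridged-pair topology mentioned above, the sketch below uses the CUDA runtime API to check and enable peer-to-peer access between two GPUs. It assumes a server where two H100 NVL cards are visible as devices 0 and 1 and the CUDA toolkit is installed; on an NVLink-bridged pair, peer transfers take the bridge path rather than PCIe.

```cpp
// Minimal sketch: verify and enable peer-to-peer access between two GPUs.
// Assumptions: at least two GPUs visible as devices 0 and 1; compile with nvcc.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    if (count < 2) {
        std::printf("Fewer than two GPUs visible (%d found).\n", count);
        return 0;
    }

    // Check whether each device can directly access the other's memory.
    int canAccess01 = 0, canAccess10 = 0;
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);
    std::printf("Peer access 0->1: %d, 1->0: %d\n", canAccess01, canAccess10);

    // Enable peer access in both directions so peer copies between the
    // two cards can use the direct GPU-to-GPU path.
    if (canAccess01 && canAccess10) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);
        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);
        std::printf("Peer access enabled between devices 0 and 1.\n");
    }
    return 0;
}
```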
Additional information
| Specification | H100 NVL |
|---|---|
| FP64 | 30 TFLOPS |
| FP64 Tensor Core | 60 TFLOPS |
| FP32 | 60 TFLOPS |
| TF32 Tensor Core | 835 TFLOPS (with sparsity) |
| BFLOAT16 Tensor Core | 1,671 TFLOPS (with sparsity) |
| FP16 Tensor Core | 1,671 TFLOPS (with sparsity) |
| FP8 Tensor Core | 3,341 TFLOPS (with sparsity) |
| INT8 Tensor Core | 3,341 TOPS |
| GPU Memory | 94 GB HBM3 |
| GPU Memory Bandwidth | 3.9 TB/s |
| Maximum Thermal Design Power (TDP) | 350-400 W (configurable) |
| NVIDIA AI Enterprise | Included |
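For reference, a minimal CUDA device-property query (a sketch, assuming the card is installed and the CUDA toolkit is available) can confirm the device name and memory capacity listed above at runtime:

```cpp
// Minimal device-property query; compile with nvcc and run on the target server.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // totalGlobalMem is reported in bytes; an H100 NVL should report
        // on the order of 94 GB (decimal) of HBM3.
        std::printf("Device %d: %s, %.1f GB memory, compute capability %d.%d\n",
                    dev, prop.name,
                    prop.totalGlobalMem / 1e9,
                    prop.major, prop.minor);
    }
    return 0;
}
```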