Unprecedented 64 ports of 400Gb/s NDR InfiniBand in a 1U form factor—delivering 51.2Tb/s aggregate throughput and 66.5 billion packets per second with SHARPv3 in-network acceleration for extreme-scale AI and HPC environments.
Based on the NVIDIA Quantum-2 InfiniBand architecture, the MQM9790-NS2F integrates SHARPv3 for in-network reductions, RDMA (Remote Direct Memory Access), adaptive routing, enhanced virtual lane (VL) mapping, and congestion control. It supports NDR 400Gb/s and is backward compatible with HDR100, HDR, EDR, and FDR. As the unmanaged variant, the MQM9790-NS2F relies on an external UFM or OpenSM instance for fabric management; the managed MQM9700-NS2F runs MLNX-OS on board.
Unlike conventional switches that only forward packets, the MQM9790-NS2F performs computations on data as it traverses the fabric. Using dedicated silicon, it aggregates and reduces data for collective operations (e.g., all-reduce, barrier) at wire speed, drastically reducing traffic between endpoints. This enables near-linear scalability for AI training clusters and HPC simulations. The embedded x86 Coffee Lake i3 processor handles fabric initialization and topology discovery, while data-path acceleration occurs in dedicated hardware.
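To make the benefit concrete, here is a minimal sketch (plain Python, no InfiniBand libraries assumed) of what the all-reduce collective computes: every rank ends up with the element-wise reduction of all ranks' buffers. On a SHARP-enabled fabric this reduction runs inside the switch ASIC instead of being shuffled between endpoints.

```python
from functools import reduce

def allreduce(rank_buffers, op=lambda a, b: a + b):
    """Simulate the result of an all-reduce collective.

    Each inner list is one rank's buffer; after the collective,
    every rank holds the same element-wise reduction of all buffers.
    """
    reduced = [reduce(op, column) for column in zip(*rank_buffers)]
    # Every rank receives an identical copy of the reduced result.
    return [list(reduced) for _ in rank_buffers]

# Four "ranks", each contributing a gradient-like vector.
ranks = [[1, 2], [3, 4], [5, 6], [7, 8]]
print(allreduce(ranks))  # each rank gets [16, 20]
```

With in-network reduction, the partial sums are combined hop by hop in the switch, so only one reduced stream returns to each endpoint rather than all-to-all gradient traffic.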
| Model | Ports & Speed | Switching Capacity | Packet Rate | Management | Airflow | Power Supply | Dimensions (HxWxD) |
|---|---|---|---|---|---|---|---|
| MQM9790-NS2F | 32 OSFP (64 x 400Gb/s or 128 x 200Gb/s) | 51.2 Tb/s non-blocking | 66.5 Bpps | Unmanaged (ext. UFM/OpenSM) | P2C (forward) | 2 x AC (1+1 redundant), 80 Plus Gold | 43.6 x 438 x 660 mm (1U) |
| MQM9790-NS2R | 32 OSFP (64 x 400Gb/s or 128 x 200Gb/s) | 51.2 Tb/s non-blocking | 66.5 Bpps | Unmanaged (ext. UFM/OpenSM) | C2P (reverse) | 2 x AC (1+1 redundant) | Same |
| MQM9700-NS2F | 32 OSFP (64 x 400Gb/s or 128 x 200Gb/s) | 51.2 Tb/s non-blocking | 66.5 Bpps | Managed (onboard MLNX-OS) | P2C (forward) | 2 x AC (1+1 redundant) | Same |
Note: All models include an x86 Coffee Lake i3 CPU, 8GB DDR4, a 16GB M.2 SSD, 1x USB 3.0, 1x I²C USB, and 1x RJ45 (UART) console port. Weight: ~14.5kg. Supported cables: OSFP passive copper and active optical.
We offer full lifecycle support: 24/7 technical consultation, RMA handling, and on-site assistance. Our team provides topology design, firmware compatibility validation, and integration with existing InfiniBand fabrics. All units carry a 1-year warranty (extendable). Orders ship within 24-48 hours from our $10M+ inventory. Custom cabling and network card bundles are available.
Q: Is the switch compatible with older ConnectX adapters?
A: Yes, it fully supports ConnectX-6 (HDR/EDR) as well as ConnectX-5 and ConnectX-4. For 400Gb/s operation, use ConnectX-7 or newer adapters that support NDR.
Q: How do 32 connectors provide 64 x 400G or 128 x 200G ports?
A: Each OSFP cage houses 8 electrical lanes running at 100Gb/s. A 400G NDR port occupies 4 lanes, so each cage carries two 400G ports. Splitting a 400G port into two independent 200G connections (2 lanes each) doubles the port count for 200G endpoints—ideal for high-density ToR deployments.
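The arithmetic behind these port counts can be sketched as follows (an illustrative calculation, not vendor code):

```python
# Lane math for the 32-cage OSFP front panel (illustrative).
OSFP_CAGES = 32
LANES_PER_CAGE = 8       # OSFP carries 8 electrical lanes
LANE_RATE_GBPS = 100     # NDR signals at 100Gb/s per lane (PAM4)

total_lanes = OSFP_CAGES * LANES_PER_CAGE      # 256 lanes

ports_400g = total_lanes // 4  # 4 lanes x 100G = one 400G NDR port
ports_200g = total_lanes // 2  # 2 lanes x 100G = one 200G connection

# x2 because the 51.2 Tb/s headline figure counts both directions.
aggregate_tbps = total_lanes * LANE_RATE_GBPS * 2 / 1000

print(ports_400g, ports_200g, aggregate_tbps)  # 64 128 51.2
```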
Q: Does the switch include an onboard subnet manager?
A: The MQM9790-NS2F is externally managed: run OpenSM or NVIDIA UFM on a server connected to the fabric. If you need an onboard subnet manager, the managed MQM9700-NS2F runs MLNX-OS with a subnet manager supporting up to 2000 nodes out of the box.
Q: Which airflow direction does the switch use?
A: The MQM9790-NS2F has P2C (power-to-connector, i.e., forward) airflow. If your rack uses rear-to-front cooling, order the MQM9790-NS2R (C2P, reverse). Fan units are hot-swappable; the airflow direction is fixed per model.
Q: What is the typical power consumption?
A: Typical power draw ranges from 250W to 350W at half load, depending on utilization and cable types. The switch dynamically reduces power on idle ports.
With over a decade of experience, we operate a large-scale factory backed by a strong technical team. Our extensive customer base and domain expertise enable us to offer competitive pricing without compromising on quality. As authorized distributors for Mellanox, Ruckus, Aruba, and Extreme, we stock original network switches, network interface cards (NICs), wireless access points, controllers, and cabling. We maintain a $10 million inventory to ensure rapid fulfillment across diverse product lines. Every shipment is verified for accuracy, and we provide 24/7 consultation and technical support. Our professional sales and technical teams have earned a strong reputation in global markets—partner with us for reliable infrastructure solutions.