200Gb/s Single-Port HDR Smart Adapter with In-Network Computing & Hardware Encryption
The NVIDIA ConnectX-6 MCX653105A-HDAT delivers full 200Gb/s throughput on a single QSFP56 port, combining ultra-low latency, hardware offloads, and block-level XTS-AES encryption. Designed for HPC, AI clusters, and NVMe-oF storage, this PCIe 4.0 x16 adapter offloads collective operations, RDMA, and encryption from the CPU, maximizing application performance and scalability in demanding data center environments.
The MCX653105A-HDAT belongs to the NVIDIA ConnectX-6 InfiniBand adapter family, engineered for extreme performance in modern data centers. This single-port QSFP56 card supports up to 200Gb/s (HDR InfiniBand or 200GbE) with full hardware acceleration for RDMA, reliable transport, and In-Network Computing. By integrating collective operations offloads, MPI tag matching, and NVMe over Fabrics acceleration, the adapter significantly reduces CPU overhead while boosting fabric efficiency. Its built-in XTS-AES block-level encryption secures data with minimal performance impact, making it well suited for financial services, government research, and hyperscale cloud deployments.
NVIDIA ConnectX-6 integrates In-Network Computing acceleration engines that offload critical datacenter operations from the host CPU. The MCX653105A-HDAT supports hardware-based reliable transport, adaptive routing, and congestion control, ensuring predictable performance in large-scale fabrics. Remote Direct Memory Access (RDMA) enables zero-copy data transfers, bypassing the OS kernel. With NVIDIA GPUDirect RDMA, GPU memory communicates directly with the network adapter, slashing latency for AI training and HPC simulations. Built-in block-level XTS-AES encryption (256/512-bit key) ensures data-in-transit and data-at-rest security with no CPU overhead, and the adapter is designed to meet FIPS 140-2 compliance requirements.
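The scale of the offload argument above can be made concrete with some back-of-envelope arithmetic (a sketch only: the figures ignore InfiniBand headers and framing, so the wire-rate number is an upper bound, not a measured value):

```python
# Back-of-envelope: why per-message hardware offload matters at HDR rates.
# Payload-only math -- real links carry additional header/framing overhead.

LINE_RATE_BPS = 200e9   # HDR line rate, bits per second
MSG_RATE = 215e6        # the adapter's quoted message rate, messages/s

def messages_per_second(payload_bytes: float) -> float:
    """Upper bound on messages/s the wire could carry at a payload size."""
    return LINE_RATE_BPS / (payload_bytes * 8)

# At 215 Mmsg/s a new message can arrive every few nanoseconds -- far too
# fast for per-packet CPU involvement, hence the hardware transport engine.
gap_ns = 1e9 / MSG_RATE
print(f"64B payload wire limit: {messages_per_second(64)/1e6:.0f} Mmsg/s")
print(f"inter-message gap at 215 Mmsg/s: {gap_ns:.2f} ns")
```

At small payloads the inter-message gap is only a handful of nanoseconds, which is why reliable transport, tag matching, and encryption are handled in adapter hardware rather than on the host CPU.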
The ConnectX-6 MCX653105A-HDAT interoperates seamlessly with NVIDIA Quantum InfiniBand switches (HDR 200Gb/s), standard 200GbE switches, and a wide range of server platforms. It supports major operating systems and virtualization stacks, ensuring flexible integration into existing infrastructure.
| Parameter | Specification |
|---|---|
| Product Model | MCX653105A-HDAT |
| Data Rate | 200Gb/s, 100Gb/s, 50Gb/s, 40Gb/s, 25Gb/s, 10Gb/s, 1Gb/s (InfiniBand and Ethernet) |
| Ports & Connector | 1x QSFP56 (supports passive copper, active optical, and AOC cables) |
| Host Interface | PCIe Gen 4.0 x16 (also compatible with Gen 3.0, 2.0; supports x8, x4, x2, x1 configurations) |
| Latency | Sub-microsecond (typical <0.7µs) |
| Message Rate | Up to 215 million messages per second |
| Encryption | XTS-AES 256/512-bit hardware offload, FIPS 140-2 ready |
| Form Factor | PCIe low-profile stand-up (tall bracket pre-installed, short bracket accessory included) |
| Dimensions (without bracket) | 167.65mm x 68.90mm |
| Power Consumption | Typical 22W – 24W (depends on link utilization) |
| Virtualization | SR-IOV (up to 1K Virtual Functions), VMware NetQueue, NPAR, ASAP2 flow offload |
| Management & Monitoring | NC-SI, MCTP over PCIe/SMBus, PLDM (DSP0248, DSP0267), I2C, SPI flash |
| Remote Boot | InfiniBand, iSCSI, PXE, UEFI |
| Operating Systems & Drivers | RHEL, SLES, Ubuntu, Windows Server, FreeBSD, VMware vSphere; MLNX_OFED (OpenFabrics Enterprise Distribution) and WinOF-2 driver packages |
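The PCIe Gen 4.0 x16 requirement in the table follows directly from bus arithmetic. A minimal sketch (bus-ceiling figures only, accounting for 128b/130b line encoding but not TLP or flow-control overhead):

```python
# Why a 200Gb/s port needs PCIe Gen 4.0 x16: usable host-bus bandwidth
# per generation after 128b/130b encoding. Ceilings only -- real throughput
# is further reduced by TLP headers and flow control.

ENCODING = 128 / 130  # PCIe Gen3/Gen4 line-encoding efficiency

def pcie_gbps(gt_per_s: float, lanes: int) -> float:
    """Usable bandwidth in Gb/s for a PCIe link."""
    return gt_per_s * ENCODING * lanes

gen3_x16 = pcie_gbps(8.0, 16)   # ~126 Gb/s -- below the 200Gb/s port rate
gen4_x16 = pcie_gbps(16.0, 16)  # ~252 Gb/s -- headroom above 200Gb/s
print(f"Gen3 x16: {gen3_x16:.0f} Gb/s, Gen4 x16: {gen4_x16:.0f} Gb/s")
```

A Gen 3.0 x16 slot tops out around 126Gb/s, so the card will link up in an older slot but the host bus, not the network port, becomes the bottleneck; only Gen 4.0 x16 leaves headroom above the 200Gb/s line rate.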

| Ordering Part Number (OPN) | Ports | Max Speed | Host Interface | Key Features |
|---|---|---|---|---|
| MCX653105A-HDAT | 1x QSFP56 | 200Gb/s | PCIe 3.0/4.0 x16 | Single-port, hardware crypto, full ConnectX-6 offloads, ideal for high-density servers |
| MCX653106A-HDAT | 2x QSFP56 | 200Gb/s (dual-port) | PCIe 3.0/4.0 x16 | Dual-port 200Gb/s with crypto, maximum bandwidth density |
| MCX653105A-ECAT | 1x QSFP56 | 100Gb/s | PCIe 3.0/4.0 x16 | Single-port 100Gb/s, cost-optimized for lower speed requirements |
| MCX653106A-ECAT | 2x QSFP56 | 100Gb/s (dual-port) | PCIe 3.0/4.0 x16 | Dual-port 100Gb/s, virtualization & storage offloads |
| MCX653436A-HDAT (OCP 3.0) | 2x QSFP56 | 200Gb/s | PCIe 3.0/4.0 x16 | OCP 3.0 small form factor, dual-port 200Gb/s |
Hong Kong Starsurge Group provides expert technical support, warranty coverage, and global RMA services for all NVIDIA ConnectX adapters. Our network specialists assist with driver installation, performance tuning, and fabric integration. We offer flexible pricing, bulk quotes for data center projects, and fast worldwide shipping. For customized solutions, contact our sales team to discuss lead times and volume discounts.

Since 2008, Hong Kong Starsurge Group Co., Limited has been a trusted provider of enterprise networking hardware, system integration, and IT services. As an authorized partner for NVIDIA networking solutions, Starsurge delivers genuine ConnectX adapters, switches, and cables to government, finance, healthcare, education, and hyperscale clients worldwide. Our experienced sales and technical teams ensure seamless deployment from pre-sales architecture to post-sales support, with a commitment to reliable quality and responsive service.
Global delivery · Multilingual support · Tailored OEM & integration services

| Component / Ecosystem | Support Status | Remarks |
|---|---|---|
| NVIDIA Quantum HDR InfiniBand Switches | ✓ Fully supported | 200Gb/s fabric, adaptive routing |
| 200GbE Switches (IEEE 802.3) | ✓ Compatible | Requires FEC modes per switch specification |
| GPUDirect RDMA | ✓ Yes | NVIDIA GPU series (Volta, Ampere, Hopper, etc.) |
| VMware vSphere 7.0/8.0 | ✓ Certified | Native drivers, SR-IOV support |
| Linux (RHEL, Ubuntu, SLES) | ✓ Full support | MLNX_OFED, inbox drivers available |
| Windows Server 2019/2022 | ✓ Supported | WinOF-2 driver package |