In the high-stakes world of High-Performance Computing (HPC) and AI, the network is often the unsung bottleneck. While CPUs and GPUs grab the headlines, the “plumbing” that connects them determines whether a cluster hums or stutters.

During ASI’s recent Technology Summit, Cornelis Networks detailed how it is outperforming traditional InfiniBand with Omni-Path and preparing for a future built on Ultra Ethernet.

A Heritage of Speed: From Intel to Independence

Cornelis isn’t a new player; it’s a specialized team with a deep pedigree. By combining Intel’s Omni-Path technology with intellectual property from QLogic and Cray’s Aries interconnect, Cornelis has built a fabric designed from the ground up for one thing: maximizing application performance.

Today, the company is shipping its second-generation product, the CN5000 (400 Gbps), with a roadmap that leads directly into the emerging Ultra Ethernet standard.

Why the Network Matters: Latency vs. Bandwidth

While many focus on “line rate” (how much data moves), Cornelis argues that for parallel processing, Latency and Message Rate are the true kings.

  • Lowest Latency: The CN5000 delivers up to 45% lower latency than 400G InfiniBand (NDR).
  • Highest Message Rate: Cornelis reaches roughly 800 million messages per second, nearly doubling the throughput of competing solutions.
  • Real-World Impact: In standard HPC benchmarks (manufacturing, physics, life sciences), this translates to applications running up to 2x faster on the same 8-node cluster.
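
To see why latency rather than line rate bounds small-message traffic, here is a back-of-the-envelope model (the latencies and message size below are illustrative round numbers, not Cornelis benchmark data):

```python
# Time to send one message = startup latency + size / bandwidth.
# Illustrative figures for a 400G-class fabric; not measured values.

def messages_per_second(latency_s: float, size_bytes: int, bandwidth_Bps: float) -> float:
    """Achievable back-to-back message rate for a single stream."""
    return 1.0 / (latency_s + size_bytes / bandwidth_Bps)

bandwidth = 400e9 / 8              # 400 Gbps expressed in bytes per second
for latency_us in (1.0, 0.55):     # a baseline vs. roughly 45% lower latency
    rate = messages_per_second(latency_us * 1e-6, 64, bandwidth)
    print(f"{latency_us:.2f} us latency -> {rate / 1e6:.2f} M msgs/s per stream")
```

For a 64-byte message the wire time is about a nanosecond, so per-message latency dominates entirely; cutting latency by ~45% lifts the single-stream rate by nearly the same factor. Headline figures like 800 million messages per second come from many processes driving the hardware in parallel, not from one stream.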

The “Special Sauce”: Technical Advantages

How does Cornelis achieve these numbers? It comes down to a few architectural “moats”:

  1. Credit-Based Flow Control: Unlike standard Ethernet, which can drop packets during congestion, the Omni-Path fabric is lossless by design: a sender transmits only when the receiver has advertised buffer credits.
  2. Fine-Grained Adaptive Routing (Packet Spraying): Instead of sending a flow down a single path, Cornelis sprays packets across tens of paths, choosing the least congested route at every switch hop.
  3. Link-Level Retry: Most networks require end-to-end retransmission if a bit flips. Cornelis corrects errors locally at the link, preventing application-wide stalls.
  4. Direct Hardware Access: Every process gets dedicated hardware resources. Small messages bypass the kernel and write directly to the hardware, using the fewest possible CPU cycles.
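
The second of these advantages, fine-grained adaptive routing, can be illustrated with a toy simulation. This is a hypothetical queue model, not the actual Omni-Path switch logic, which makes this decision in hardware at every hop:

```python
# Toy sketch of "packet spraying": each packet independently picks the
# least-congested of several paths, instead of pinning the whole flow
# to one path the way per-flow hashing does.

def spray(num_packets: int, queue_depths: list[int]) -> list[int]:
    """Assign each packet to the currently shallowest queue; return final depths."""
    depths = list(queue_depths)
    for _ in range(num_packets):
        target = depths.index(min(depths))  # least-congested path right now
        depths[target] += 1
    return depths

# Start with uneven congestion across four paths; spraying levels it out.
print(spray(100, [12, 0, 5, 3]))  # -> [30, 30, 30, 30]
```

Because every packet re-evaluates congestion, an initially hot path (the queue at depth 12 above) receives no new traffic until the other paths catch up, which per-flow hashing cannot do.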

The Roadmap: CN6000 and CN7000

Cornelis is not just maintaining its current lead; it is evolving toward full compatibility with the broader Ethernet ecosystem.

  • CN6000 (Late 2026): This 800 Gbps generation introduces dual-protocol capabilities. The NICs will support both Omni-Path (for performance) and standard Ethernet (for storage/compatibility) simultaneously.
  • CN7000 (2027): This marks the transition to a full Ultra Ethernet solution. It will feature a 1.6 Terabit NIC and a high-density 72-port switch.

“We will deliver the best performing standard Ultra Ethernet solution… but if it’s Cornelis talking to Cornelis, you get even better performance.”

Hardware Innovation: The Director Class Switch

One of the most impressive engineering feats discussed was their Director Class Switch. By utilizing a “midplane-less” design—where horizontal spine blades plug directly into vertical leaf blades—Cornelis has shrunk the equivalent of 36 rack units of switching into just 17 rack units. This results in a 33% power saving and significantly lower TCO for large-scale deployments.

The Bottom Line

Cornelis Networks is positioning itself as the “open” alternative to proprietary interconnects. By upstreaming their software to the Linux kernel and leading the charge with libfabric, they are ensuring that moving to a higher-performance network doesn’t require rewriting a single line of application code.

As AI models grow and HPC simulations become more complex, the industry is moving away from “good enough” networking. With a roadmap tied to Ultra Ethernet and a current performance lead over InfiniBand, Cornelis is a name every data center architect should have on their radar.

Additional Resources

Cornelis Customer Webinar – ASI Technology Summit

ASI Blog – Cornelis – Network Performance for AI Inference and Edge Computing

ASI Blog – The Token Economy: Maximizing AI Efficiency at the Edge with Cornelis Networks