Eric Updyke, CEO at Spirent Communications

The high-speed ethernet (HSE) industry is undergoing an unprecedented boom, and we only need two letters to explain why: AI. Millions of graphics processing units (GPUs) and other accelerators are being deployed for AI infrastructures.
Data center capital investment is climbing to half a trillion dollars. Shipments of 400G and 800G ethernet ports are exploding, exceeding even the most optimistic analyst projections. We’re witnessing a paradigm shift that will transform the ethernet ecosystem.
But how exactly will this transformation play out? Which technologies will dominate data center infrastructures in the coming years, and why? What do companies building out AI clusters know today that they didn’t before? And why is testing for these infrastructures proving so difficult—to the point that doing it effectively is becoming a strategic advantage? In 2023, we worked with stakeholders across the HSE ecosystem—hyperscalers, service providers, enterprises, network equipment manufacturers (NEMs) and others—totaling 340 engagements worldwide. Through this work, we’ve learned a great deal about where the industry is headed. The bottom line? This is ethernet’s most consequential evolution in decades.
According to Dell’Oro proprietary research, the HSE market will grow from 70 million ports shipped in 2023 to more than 240 million between 2024 and 2026. This growth will be driven from the top, as hyperscalers demand faster, more efficient networks to support exponential growth in cloud and AI traffic. But demand is surging across the rest of the ecosystem as well.
Why are operators choosing high-speed ethernet for AI clusters? This hasn’t always been the case. Historically, some operators used ethernet, others favored lossless InfiniBand (especially for large model-training clusters), and still others used their own proprietary connectivity. Increasingly, though, ethernet is taking the lead for these networks.
It’s projected that ethernet port shipments will overtake InfiniBand by 2028. Port speed evolution depends on which “AI network” you mean. Front-end infrastructures that ingest training data, for example, will largely continue using 400G ethernet through 2025.
For back-end AI training and inferencing networks, however—the networks connecting all those specialized GPUs—the future starts now. According to Dell’Oro Group, the majority of switch ports deployed in AI back-end networks will be 800G ethernet by 2025 and 1.6-terabit ethernet by 2027.
To understand why AI is having such a profound effect on data centers and the HSE market, we need to appreciate just how extreme the demands are that these workloads place on data center networks. According to Dell’Oro Group, multiple large AI models already process trillions of dense parameters, and that number is increasing tenfold every year. To meet this explosive demand, data center operators are deploying GPUs and other accelerators (xPUs) as quickly as possible, scaling to thousands, even tens of thousands of distributed nodes.
And they’re building separate, scalable back-end ethernet networks to connect them, increasingly via spine-leaf architectures using the RDMA over converged ethernet version 2 (RoCEv2) protocol. These back-end networks demand extreme scalability and bandwidth approaching 1 Tbps per xPU, but that’s just the start. They must support thousands of synchronized jobs in parallel, bursty east-west traffic patterns and data- and compute-intensive workloads.
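To make that scale concrete, the spine-leaf arithmetic can be sketched in a few lines. Every figure below—cluster size, switch radix, the 1:1 non-blocking split—is an illustrative assumption, not a vendor specification:

```python
import math

# Hypothetical sizing sketch for a two-tier leaf-spine back-end fabric.
# All parameters are illustrative assumptions.
XPUS = 8192                      # accelerators in the cluster
SWITCH_PORTS = 64                # e.g., a 51.2 Tbps switch: 64 x 800G ports
LEAF_DOWN = SWITCH_PORTS // 2    # leaf ports facing xPUs
LEAF_UP = SWITCH_PORTS // 2      # leaf ports facing spines (1:1, non-blocking)
GBPS_PER_PORT = 800

leaves = math.ceil(XPUS / LEAF_DOWN)           # leaf switches needed
uplinks = leaves * LEAF_UP                     # total leaf-to-spine links
spines = math.ceil(uplinks / SWITCH_PORTS)     # spine switches needed
bisection_tbps = XPUS * GBPS_PER_PORT / 1000   # aggregate fabric bandwidth

print(f"{leaves} leaves, {spines} spines, ~{bisection_tbps:.0f} Tbps bisection")
```

Even this toy design lands at hundreds of switches and petabits of bisection bandwidth, which is why fabric topology and oversubscription choices dominate back-end network planning.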
Critically, they must deliver extremely low network latency with zero packet loss to optimize job completion times, since even a single delayed flow can impede all nodes in the cluster. Why are packet loss and latency so deadly for AI workloads? It’s a function of the massive investments operators are making to build these infrastructures. Look at it this way: When an AI cluster reaches a scale of thousands of distributed xPUs, the back-end network effectively becomes the computer.
If it’s not operating efficiently, those delays translate to serious costs. A 1% packet loss rate, for example, can degrade performance by 30% or more. If you spend $1 billion to build an AI infrastructure, and your xPUs are sitting idle a third of the time, that equates to hundreds of millions in lost value over the life of that investment.
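A toy model makes both of those numbers tangible. The flow count, stall penalty and loss rate below are illustrative assumptions, not measured data; the idle fraction and capex figure come from the article’s own example:

```python
import random

random.seed(7)

# Toy model (illustrative assumptions): a synchronized training step
# finishes only when its slowest flow does, so a handful of
# loss-induced stalls gate the whole cluster.
N_FLOWS = 4096
BASE_MS = 10.0        # nominal flow completion time (assumption)
STALL_MS = 200.0      # delay added when a flow hits a loss/timeout (assumption)
LOSS_RATE = 0.01      # 1% of flows experience a loss event

flows = [BASE_MS + (STALL_MS if random.random() < LOSS_RATE else 0.0)
         for _ in range(N_FLOWS)]
step_ms = max(flows)              # the straggler sets the pace for everyone
mean_ms = sum(flows) / N_FLOWS    # the average flow finished long before

# Back-of-the-envelope cost of idle xPUs on a $1B build (per the article:
# idle a third of the time).
CAPEX = 1_000_000_000
IDLE_FRACTION = 1 / 3
lost_value = CAPEX * IDLE_FRACTION

print(f"step gated at {step_ms:.0f} ms vs {mean_ms:.1f} ms mean flow; "
      f"~${lost_value / 1e6:.0f}M of capacity effectively idle")
```

The gap between the mean flow time and the step time is the key point: averages look healthy while a few stalled flows hold every node in the cluster hostage.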
It’s why network performance and efficiency are so critical to AI—and ultimately, to every data center, service provider and enterprise network running AI workloads. It’s among the biggest lessons customers have learned in this fast-evolving space. To meet this challenge, ethernet itself must evolve.
Indeed, multiple lossless ethernet efforts are now under way, most notably the ultra-ethernet transport (UET) specification that optimizes congestion control and RDMA over ethernet for AI workloads. But operators must also be able to conduct exhaustive performance testing and validation for planned network designs—ideally before deployment. This is more difficult than it might seem.
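The behavior these lossless-ethernet efforts aim for can be illustrated with a deliberately simplified loop. This is not the UET or any production congestion-control algorithm, and every threshold and rate is a made-up value; it only shows the principle that ECN-style marking lets a sender back off before the switch queue overflows, so no packet is ever dropped:

```python
# Simplified ECN-style congestion control (illustrative values only).
QUEUE_LIMIT = 100      # switch buffer, in packets (assumption)
ECN_THRESHOLD = 60     # mark packets once the queue exceeds this (assumption)

rate = 50.0            # sender rate, packets per tick (above drain: congestion)
drain = 40.0           # link drain rate, packets per tick
queue = 0.0
drops = 0

for tick in range(200):
    queue = max(0.0, queue + rate - drain)
    if queue > QUEUE_LIMIT:               # buffer overflow: packets lost
        drops += int(queue - QUEUE_LIMIT)
        queue = QUEUE_LIMIT
    if queue > ECN_THRESHOLD:
        rate *= 0.8                        # multiplicative decrease on mark
    else:
        rate = min(rate + 0.5, 50.0)       # additive increase toward line rate

print(f"drops={drops}, final rate={rate:.1f}, final queue={queue:.1f}")
```

Because the sender reacts to marks well before the buffer fills, the queue oscillates below the drop point and the loss count stays at zero—the property AI back-end fabrics depend on.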
Previously, the only real way to test AI fabrics was with actual AI traffic running on full-scale server farms. Effectively, you needed an AI data center to test an AI data center—an exorbitantly expensive proposition, if even possible. Fortunately, the state of the art is evolving here, too.
Today, a new generation of AI network testing solutions can help organizations thoroughly stress-test these fabrics and identify potential bottlenecks before they deploy. These innovations are helping the AI leaders—and soon, other parts of the ecosystem—continually test and verify both planned and existing AI infrastructures. They give organizations a means to optimize network performance at a much lower cost, so that everyone—including stakeholders across the ethernet ecosystem—can benefit from the AI revolution.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
How AI Data Centers Are Shaping The Future Of Ethernet