
I have been in the network switching business my entire career, which started at DEC. I was part of the team that introduced Ethernet to the world, and I remember when its 10 Mbps seemed like lightning compared to the 56K and T1 point-to-point leased lines we were all accustomed to. And while Ethernet sounded amazing to everyone we spoke to about it, basic deployment requirements were still being worked out: how to carry Ethernet over twisted pair when it had originally been designed with coax as its medium, and how to build switch elements (rather than repeaters) that were transparent, cost-effective, and could greatly extend the distance and scale of an otherwise very local area network technology. So it makes me smile when I think about how far we've come and hear about new chipsets like Broadcom's latest Tomahawk 6.
Broadcom's 'merchant silicon' lineage has long been synonymous with high-performance network switching, forming the backbone of data centers and hyperscale environments worldwide. With each iteration of its merchant silicon offerings, Broadcom has pushed the boundaries of bandwidth, density, and feature sets, consistently delivering the innovation needed to keep pace with an ever-accelerating digital landscape. The recent announcement of the Tomahawk 6, the sixth-generation flagship switching chip, marks yet another significant leap forward, crossing the 100 Tbps threshold and cementing its role as a foundational element for the global AI infrastructure build-out. (As a point of reference, 100 Tbps is seven orders of magnitude more bandwidth than the first generation of Ethernet pioneered by Intel, DEC, and Bob Metcalfe at Xerox circa 1978.)
But this shouldn't surprise anyone, since over the past decade Broadcom's switching chip releases have demonstrated a relentless march toward higher performance and greater efficiency, all in a software-defined package. Broadcom proved that switching chips need not be fixed-function devices that require costly and time-consuming ASIC 'spins' to resolve bugs or performance problems. From the early chips establishing multi-terabit capacities to subsequent generations integrating advanced telemetry and programmable pipelines, each product has built upon the last, culminating in the powerhouse that is Tomahawk 6. This progression hasn't just been about raw speed; it's been about creating increasingly intelligent and adaptable network silicon capable of handling the most demanding workloads. In the era of AI, the latest Tomahawk's support for cognitive routing and Ultra Ethernet (read: lossless) immediately makes it the standard-bearer against which all other solutions will be measured. So it's no wonder that the Dell'Oro Group recently reported that demand for Ethernet in AI back-end networks now exceeds that for all other technologies.
Tomahawk 6: A Closer Look at Breakthrough Innovation
As I said above, it's not just about speed. The Tomahawk 6 isn't merely an incrementally faster chip; it represents a substantially re-engineered approach to meeting the unprecedented demands of AI training and inference.
Here are five of its most impactful new or significantly improved features compared to its predecessors:
1. Unprecedented Bandwidth and Port Density:
The Tomahawk 6 shatters previous bandwidth records, offering an astonishing 102.4 Tbps of switching capacity. This translates into support for up to 64 ports of 1.6T, 128 ports of 800G, 256 ports of 400G, or 512 ports of 200G on a single chip (a quick back-of-the-envelope check appears after this list). This massive increase in bandwidth is absolutely critical for building AI clusters, where hundreds or thousands of GPUs need to communicate with minimal latency and maximum bandwidth to share vast datasets and model parameters. This density allows for incredibly flat and efficient network topologies.
2. Ultra-Low Latency for AI/ML Workloads:
One of the most significant challenges in large-scale AI training is minimizing inter-GPU communication latency. The Tomahawk 6 introduces substantial architectural enhancements specifically designed to reduce latency across the fabric. This is achieved through optimized packet processing, reduced buffering delays, cognitive routing and advanced traffic management algorithms. In AI, even microsecond reductions in latency can translate to significant improvements in model training times, reductions in cost and overall cluster efficiency.
3. Enhanced Congestion Management and Flow Control:
AI workloads are characterized by "elephant flows" – massive, continuous data transfers between compute nodes. Effectively managing these flows without introducing bottlenecks is paramount. Tomahawk 6 incorporates more sophisticated congestion management mechanisms, including advanced ECN (Explicit Congestion Notification) capabilities and intelligent buffer management (a simplified marking sketch follows this list). These features ensure that even under peak load, data moves smoothly and efficiently, preventing performance degradation in sensitive AI applications. And as Ultra Ethernet and its advanced flow-control capabilities gain traction, the Tomahawk 6 provides full support for it as well.
4. Advanced In-Band Network Telemetry (INT) and Visibility:
As networks grow in complexity, visibility into their core performance becomes critical for troubleshooting and optimization. Tomahawk 6 significantly enhances the already robust telemetry capabilities Broadcom has offered across the Tomahawk generations. It provides deeper, more granular in-band network telemetry, allowing operators to monitor network state, identify micro-bursts, and pinpoint performance anomalies in real time. This level of visibility is invaluable for maintaining the health and performance of high-stakes AI infrastructure, where even minor issues can halt expensive training jobs.
5. Greater Programmability and Feature Set Flexibility:
The Tomahawk 6 continues Broadcom’s commitment to programmable pipelines, offering increased flexibility for network operators to customize packet processing and implement innovative network functions. This programmability is vital in the rapidly evolving AI landscape, where new protocols and optimized data paths may be required. Furthermore, it allows hyperscalers to differentiate their networks and integrate proprietary optimizations, giving them a competitive edge.
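To ground the port-density numbers in item 1, here is a minimal back-of-the-envelope sketch. The 102.4 Tbps capacity and the per-port speeds come straight from the list above; the constant names and the script itself are purely illustrative, not any vendor tooling.

```python
# Back-of-the-envelope check of the port-density figures quoted in item 1.
# The 102.4 Tbps aggregate capacity and the per-port speeds come from the
# announcement; the division below is simple arithmetic, not a vendor tool.

CAPACITY_GBPS = 102_400                      # 102.4 Tbps expressed in Gbps
PORT_SPEEDS_GBPS = [1_600, 800, 400, 200]    # 1.6T, 800G, 400G, 200G options

for speed in PORT_SPEEDS_GBPS:
    ports = CAPACITY_GBPS // speed
    print(f"{ports:>4d} ports x {speed} Gbps = {ports * speed / 1000:.1f} Tbps")
```

The radix available at a given speed is what determines how flat a topology can be, a point the fabric-sizing sketch later in this post builds on.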

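And to illustrate the kind of congestion signaling item 3 describes, below is a simplified, generic sketch of queue-threshold ECN marking (RED-style): when a queue passes a threshold, packets are marked rather than dropped so senders can back off before loss occurs. The thresholds, probabilities, and function name are assumptions chosen for illustration; they are not Tomahawk 6 parameters or Broadcom's actual algorithm.

```python
import random

# Generic illustration of ECN-style congestion signaling. The threshold and
# probability values below are illustrative only, not Tomahawk 6 settings.

MIN_THRESHOLD = 200    # queue depth (packets) where probabilistic marking begins
MAX_THRESHOLD = 800    # queue depth where every ECN-capable packet is marked

def ecn_mark(queue_depth: int, ecn_capable: bool) -> bool:
    """Return True if this packet should carry a Congestion Experienced mark."""
    if not ecn_capable or queue_depth < MIN_THRESHOLD:
        return False
    if queue_depth >= MAX_THRESHOLD:
        return True
    # Linearly ramp the marking probability between the two thresholds.
    prob = (queue_depth - MIN_THRESHOLD) / (MAX_THRESHOLD - MIN_THRESHOLD)
    return random.random() < prob

# Example: a queue filling up under an elephant flow.
for depth in (100, 300, 500, 900):
    print(depth, ecn_mark(depth, ecn_capable=True))
```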
The Indispensable Role of Tomahawk 6 in the AI Revolution
The global AI infrastructure build-out 'frenzy' is unlike anything seen before. It has led to proposals for data centers the size of Manhattan, fueled a resurgence of interest in once-abandoned nuclear power as a key element of the energy mix, become the catalyst for the funding of more than 1,000 AI startups, and created more than a million millionaires.
AI demands networks that can not only handle immense bandwidth but also operate with unprecedented low latency, high reliability, and granular control. This is where the Tomahawk 6 truly shines.
- Scaling AI Superclusters: Training the most advanced AI models requires distributing workloads across thousands of GPUs. The Tomahawk 6's 1.6T port speeds and high port density enable the creation of extremely flat, high-radix network topologies that minimize hop counts and maximize bandwidth between these distributed compute resources (see the fabric-sizing sketch after this list). This is essential for preventing network bottlenecks from becoming the limiting factor in AI model development.
- Enabling Disaggregated Infrastructure: As AI workloads become more diverse and dynamic, the need for disaggregated compute, storage, and acceleration resources grows. Tomahawk 6-powered networks provide the high-speed interconnects necessary for these components to function as a unified, high-performance system, allowing for flexible resource allocation and maximizing utilization. In essence, the Tomahawk 6 allows all of these resources to become available at wire speed.
- Future-Proofing for AI Innovation: The software-defined, programmable nature and advanced feature sets of Tomahawk 6 offer a substantial degree of future-proofing, something we never had in the days of fixed-feature ASIC spins. As AI algorithms and network protocols evolve, the underlying network infrastructure built on Tomahawk 6 will be adaptable, ensuring longevity and protecting significant infrastructure investments.
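To make the "flat, high-radix topology" point in the first bullet concrete, here is a small sizing sketch for a two-tier leaf-spine (folded Clos) fabric built from a single switch ASIC model. The radix figures echo the port counts quoted earlier; the 50/50 split between host-facing and spine-facing ports is an illustrative non-blocking assumption, not a reference design.

```python
# Two-tier leaf-spine (folded Clos) sizing with one switch ASIC model.
# Assumption for illustration: half of each leaf's ports face hosts and half
# face spines (non-blocking), and each spine port connects a distinct leaf.

def two_tier_endpoints(radix: int) -> int:
    """Max endpoints in a non-blocking two-tier leaf-spine built from one ASIC type."""
    downlinks_per_leaf = radix // 2   # half the leaf ports face hosts
    max_leaves = radix                # each spine has `radix` ports, one per leaf
    return max_leaves * downlinks_per_leaf

for radix, speed in [(512, "200G"), (256, "400G"), (128, "800G")]:
    print(f"radix {radix} at {speed}: up to {two_tier_endpoints(radix):,} endpoints")
```

In such a fabric, any endpoint reaches any other through at most one spine (leaf-spine-leaf), which is the minimized hop count the bullet refers to.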
Putting all of this together, IT professionals who grew up on Ethernet find themselves in somewhat familiar territory, but at a level never previously imagined. Gone are the days of bespoke ASIC designs by individual network device manufacturers; software-defined, high-performance merchant platforms are here now, and Broadcom's Tomahawk 6 demonstrates the high end of what's possible. It is more than just another high-speed network chip; it's a critical enabler for the next wave of AI innovation. Its unprecedented bandwidth, ultra-low latency, and advanced management features are precisely what the burgeoning AI industry needs to unlock new capabilities and scale to meet the demands of a data-driven world. As the AI revolution continues to play out, the Broadcom switching families (including the latest Tomahawk 6) will play an indispensable role in powering the intelligence that shapes our future. And as a long-term provider of more than half of the world's white-box solutions, Accton will be leading from the front, continuing to deliver the highest-quality open, software-defined infrastructure based on these Broadcom chipsets needed by the hyperscale, enterprise, and service provider communities.