
Data centers are entering a new era. The explosive demand for artificial intelligence (AI) and machine learning (ML) has fundamentally changed how networks must operate. AI workloads are incredibly demanding, generating massive amounts of time-sensitive, bursty traffic that traditional networking approaches struggle to handle efficiently. And in the era of AI, network inefficiency translates directly into real costs, because even the smallest network delays can idle millions of dollars' worth of precious GPUs. The network can no longer be a passive pipe; it needs to be intelligent, adaptable, and fast, with minimal latency and errors.
Enter Broadcom's new Tomahawk 6 switch chip series. We've been hearing about it for more than a year, and while it's not yet in full production, we've been showing solutions based on the near-final chip for a few months. All I can say is: amazing. This isn't just about faster speeds (though it is incredibly fast, pushing up to 102.4 Tbps of bandwidth), and it isn't just about high radix (though it can sport a shocking 1,024 100G SerDes on a single chip); it's about a new feature designed specifically for the age of AI, which Broadcom calls Cognitive Routing 2.0. Many of us have mentioned it by name in casual conversation, but I wanted to provide some detail on what CRv2 really entails. (That detailed understanding is what earns the Tomahawk 6 its game-changer status.)
So grab a coffee, and let’s dive into what makes Cognitive Routing 2.0 (CRv2) a game-changer.
What is Cognitive Routing, Anyway?
At its core, “cognitive routing” is a fancy term for Broadcom’s new and very intelligent, adaptive traffic-management engine. Think of it as a smart GPS for each of your data packets.
In traditional networks (including previous generation chips like the Tomahawk 5), routing is largely static or based on simpler algorithms. A data packet generally follows a pre-determined or simple path through the network fabric. This works fine for predictable, everyday traffic (like web browsing or email), but it falls apart when you have thousands of GPUs all trying to talk to each other at the exact same moment.
These AI "collective communication" patterns create massive, sudden traffic jams, known as congestion. In a traditional network, traffic hits a bottleneck and simply piles up, leading to delays (latency), dropped packets, retransmissions, and ultimately much slower AI training times, with direct negative impact on total cost and GPU utilization.
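To make the congestion problem concrete, here is a toy model (my own illustration, not anything from Broadcom) of the static hash-based path selection that traditional ECMP routing uses. Because each flow's hash is fixed, synchronized collective flows can pile onto a few links while others sit idle:

```python
import zlib

# Toy model: 64 synchronized GPU flows hashed statically onto 8
# equal-cost uplinks. The flow 5-tuples are made-up illustrative values.
NUM_LINKS = 8
NUM_FLOWS = 64

load = [0] * NUM_LINKS
for i in range(NUM_FLOWS):
    five_tuple = (f"10.0.0.{i}", f"10.0.1.{i % 16}", 6, 49152 + i, 4791)
    # Static ECMP: the hash alone decides the path, ignoring congestion.
    link = zlib.crc32(repr(five_tuple).encode()) % NUM_LINKS
    load[link] += 1

print("flows per link:", load)
print("busiest link carries", max(load), "flows; ideal is", NUM_FLOWS // NUM_LINKS)
```

Run this and you will typically see some links carrying well above the ideal share while others carry well below it, which is exactly the imbalance that piles up into latency and drops under AI collective traffic.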
Cognitive Routing 2.0 in the Tomahawk 6 fundamentally changes this paradigm. It gives the network the ability to "think" and dynamically react to real-time conditions. Again, this is not just incrementally better than the previous chipsets; it is re-engineered from the ground up to move packets smarter.
Tomahawk 6 vs. Tomahawk 5: The Evolution of Intelligence
Now don't get me wrong, the Tomahawk 5 continues to be a powerhouse in its own right, offering leading bandwidth at 51.2 Tbps. It handles congestion using established, long-proven mechanisms, but its routing is more traditional. When introduced, it demonstrated awe-inspiring performance, but it lacked the deep, proactive intelligence needed to manage AI-centric traffic.
The Tomahawk 6 switch takes this "built from the ground up" intelligence further with Cognitive Routing 2.0, which introduces several critical improvements:
- Dynamic Load Balancing: The Tomahawk 6 actively monitors where traffic is flowing. When a path starts to get busy, the switch engine automatically steers new traffic down less congested routes. It's like having a traffic controller instantly open new lanes on a freeway.
- Predictive Congestion Avoidance: Unlike older approaches that only reacted after a traffic jam started, the Tomahawk 6 uses sophisticated algorithms to anticipate bottlenecks before they happen, rerouting flows preemptively. The management engine inside the Tomahawk 6 is an integral part of the magic.
- Granular Visibility: This is a major differentiator from all previous solutions. The Tomahawk 6 incorporates enhanced in-band network telemetry (INT), providing incredibly detailed data about every packet's journey and allowing network operators to see exactly where micro-bursts and latency occur in real time. This built-in visibility enables a level of diagnostics that was nearly impossible to achieve efficiently in prior generations of switching silicon.
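Broadcom doesn't publish CRv2's internals, but the dynamic load balancing idea in the first bullet can be sketched in a few lines. This is a hypothetical congestion-aware path selector of my own (the class name, flowlet sizes, and drain model are all assumptions for illustration), contrasting with the static hash above: each new flowlet goes to the currently least-loaded equal-cost link.

```python
from dataclasses import dataclass, field

@dataclass
class AdaptiveRouter:
    """Illustrative congestion-aware next-hop selection, not Broadcom's algorithm."""
    num_links: int
    queue_depth: list = field(default_factory=list)  # bytes queued per link

    def __post_init__(self):
        self.queue_depth = [0] * self.num_links

    def pick_link(self, flowlet_bytes: int) -> int:
        # Steer the flowlet to the least-congested link right now,
        # then account for the load we just placed on it.
        link = min(range(self.num_links), key=lambda i: self.queue_depth[i])
        self.queue_depth[link] += flowlet_bytes
        return link

    def drain(self, bytes_per_link: int) -> None:
        # Model each link transmitting and draining its queue over time.
        self.queue_depth = [max(0, q - bytes_per_link) for q in self.queue_depth]

router = AdaptiveRouter(num_links=4)
picks = [router.pick_link(1000) for _ in range(8)]
print(picks)                # flowlets spread round-robin across idle links
print(router.queue_depth)   # load stays balanced across all four links
```

The real chip does this in hardware at line rate, with far richer congestion signals than a simple queue-depth counter, but the principle is the same: the path decision follows live conditions instead of a fixed hash.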

Why Does This Matter for AI?
Most owners and operators have by now realized that the majority of x86-era data center infrastructure built in the past is unsuitable for AI. The power envelope alone is enough to force the abandonment of many production sites. CIOs who have fully embraced this are now planning their next steps, working with architects and engineers to create AI-compatible data center designs from the ground up, with a primary focus on power and networking.
The design goal is simple: build a factory that generates tokens at the lowest cost. Drilling down a bit, that means finishing LLM training jobs as fast as possible to reduce the top-line costs of AI service delivery. For the network, the primary focus is sheer bandwidth and low latency.
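A quick back-of-envelope calculation shows why network efficiency ties directly to that top line. All the figures below are assumptions I've picked for illustration (cluster size, GPU-hour rate, idle fraction), not measured or vendor numbers:

```python
# Illustrative cost of network-induced GPU idle time (assumed numbers).
gpus = 8192                   # GPUs in the training cluster (assumption)
gpu_hour_cost = 3.00          # $/GPU-hour (assumption)
training_hours = 720          # a 30-day training run (assumption)
network_idle_fraction = 0.15  # share of step time GPUs wait on the fabric (assumption)

total_cost = gpus * gpu_hour_cost * training_hours
wasted = total_cost * network_idle_fraction
print(f"total GPU cost:          ${total_cost:,.0f}")
print(f"lost to network stalls:  ${wasted:,.0f}")
```

Even at a modest 15% stall fraction, the fabric is burning millions of dollars per training run, which is why shaving network latency pays for itself so quickly.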
Tomahawk 6 and its Cognitive Routing 2.0 directly address these:
- Minimizing Tail Latency: It smooths out the peaks and valleys of network traffic, ensuring consistent, deterministic performance, which is vital for synchronized GPU communication. By moving packets more intelligently, utilization increases across the board.
- Maximizing Throughput: By keeping the entire network fabric balanced, it ensures no single link sits idle while another is completely overwhelmed. And with a chip-level capacity of more than 100 Tbps, there is plenty of room to move packets fast.
- Ensuring Fairness: It prevents one massive flow from starving smaller, critical control signals, ensuring all parts of the AI cluster communicate effectively.
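The tail-latency point deserves a number. In a synchronized collective such as an all-reduce, the step finishes only when the *slowest* parallel transfer completes, so the cluster runs at the tail of the latency distribution, not the mean. A deterministic toy example (latency values are illustrative, not measurements):

```python
# One congested straggler among 1024 parallel transfers in a collective step.
latencies_us = [10.0] * 1023 + [100.0]

mean_us = sum(latencies_us) / len(latencies_us)
step_us = max(latencies_us)   # the whole step waits for the slowest transfer

print(f"mean transfer latency: {mean_us:.2f} us")
print(f"collective step time:  {step_us:.2f} us")
```

One straggler barely moves the mean (about 10.1 µs) yet makes the whole step ten times slower (100 µs), which is exactly why taming tail latency, rather than average latency, is what keeps GPUs busy.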
The Bottom Line
Broadcom’s Tomahawk 6, with its integrated Cognitive Routing 2.0 engine, isn’t just about faster speeds; it’s about a smarter, more resilient network infrastructure built for the unique demands of modern AI. It is purpose-built for AI, not just adding the word “AI” in front of an existing chip design (as so many other solutions are doing these days).
By moving away from static routing and embracing dynamic, intelligent AI-centric management, Broadcom is ensuring that the network keeps up with the incredible processing power of today’s GPUs, unlocking a new level of performance for hyperscalers and enterprises building the future of AI.
At Accton/Edgecore we have been demonstrating our pre-production Tomahawk 6 based switching solutions since the summer of 2025, and we expect Broadcom's Tomahawk 6 chip and our final switch systems to be GA in Q1 2026. Rest assured, we know what it takes to create robust solutions at this level of extreme performance. It's simply not a task just any manufacturer can take on, not at this level of power usage and sheer performance. The elegant engineering required to take advantage of chips like the Tomahawk 6 is awe-inspiring, and we have been on the leading edge of it for more than three decades!
We make more than half of all of the world’s Whitebox switches. Talk to us about AI networking…