TL;DR
- Cisco Systems announced the Silicon One G300 switch chip on Tuesday to compete for AI infrastructure dollars
- The chip speeds certain AI tasks by 28% using TSMC’s 3-nanometer process and automatic data rerouting
- Power consumption drops about 70% in fully liquid-cooled systems, while the chip prevents network traffic bottlenecks
- A second-half-2026 launch pits Cisco against Nvidia and Broadcom in the networking chip race
- The product will power new Cisco N9000 and Cisco 8000 systems across data center markets
Cisco Systems rolled out its answer to the AI networking challenge on Tuesday. The Silicon One G300 switch chip targets the explosive $600 billion AI infrastructure market.
The product puts Cisco head-to-head with Nvidia and Broadcom. All three companies want a piece of the AI data center spending wave.
The G300 chip connects AI systems across hundreds of thousands of network links. Cisco expects to ship the product starting in the second half of 2026.
Taiwan Semiconductor Manufacturing Co will produce the chip on its 3-nanometer process. The advanced node enables the chip’s speed and efficiency gains.
Speed Meets Efficiency
Cisco claims the G300 accelerates certain AI computing jobs by 28%. The improvement stems from intelligent automatic data rerouting capabilities.
Martin Lund, executive vice president of Cisco’s Common Hardware Group, explained that the chip reroutes data around network problems in microseconds.
“This happens when you have tens of thousands, hundreds of thousands of connections – it happens quite regularly,” Lund said.
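Cisco has not published the G300's rerouting algorithm, but the idea of steering traffic around a failed link can be sketched in a few lines. The toy topology, node names, and breadth-first search below are all illustrative assumptions, not Cisco's implementation:

```python
from collections import deque

# Hypothetical four-switch topology: nodes are switches, edges are links.
# This is NOT the G300's actual algorithm; it only illustrates the concept
# of recomputing a path when a link goes down.
TOPOLOGY = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def shortest_path(topology, src, dst, down_links=frozenset()):
    """Breadth-first search that skips any link marked as down."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nbr in topology[node]:
            link = frozenset((node, nbr))
            if nbr not in visited and link not in down_links:
                visited.add(nbr)
                queue.append(path + [nbr])
    return None  # destination unreachable

# Normal route from A to D goes through B.
print(shortest_path(TOPOLOGY, "A", "D"))                      # ['A', 'B', 'D']
# When the A-B link fails, traffic is rerouted via C instead.
print(shortest_path(TOPOLOGY, "A", "D", {frozenset("AB")}))   # ['A', 'C', 'D']
```

The point of doing this in silicon rather than software is latency: the chip can make the equivalent decision in microseconds, long before a human operator or a control-plane daemon would notice the failure.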
The chip packs “shock absorber” features to handle massive data traffic spikes. These prevent networks from slowing or crashing under heavy loads.
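The "shock absorber" behavior amounts to buffering a burst and draining it at line rate instead of dropping packets. The simulation below is a minimal sketch under assumed numbers (tick-based arrivals, a fixed drain rate, a bounded buffer); it is not a model of the G300's actual buffer architecture:

```python
def simulate_burst(arrivals, drain_rate, buffer_size):
    """Queue packets that exceed drain capacity; drop only on buffer overflow.

    arrivals    -- packets arriving in each tick (hypothetical workload)
    drain_rate  -- packets forwarded per tick
    buffer_size -- maximum packets the "shock absorber" can hold
    Returns the total number of dropped packets.
    """
    queued = 0
    dropped = 0
    for arriving in arrivals:
        queued += arriving
        # Forward up to drain_rate packets this tick.
        queued -= min(queued, drain_rate)
        # Anything beyond the buffer's capacity is lost.
        if queued > buffer_size:
            dropped += queued - buffer_size
            queued = buffer_size
    return dropped

# A one-tick spike of 50 packets against a steady drain of 10 per tick.
burst = [10, 50, 10, 10, 10, 0, 0, 0]
print(simulate_burst(burst, drain_rate=10, buffer_size=64))  # 0  (spike absorbed)
print(simulate_burst(burst, drain_rate=10, buffer_size=8))   # 32 (buffer too small)
```

A deep enough buffer rides out the spike with zero loss; an undersized one sheds traffic, which is exactly the slowdown-or-crash failure mode the feature is meant to prevent.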
Energy consumption drops around 70% for fully liquid-cooled systems. That matters as companies look to reduce data center power bills.
The G300 uses Intelligent Collective Networking technology. This system lets AI chips communicate efficiently across sprawling data center networks.
Race for AI Networking Control
Networking has turned into a major competitive battleground for AI infrastructure. Nvidia showcased its own networking chip as part of a six-chip system last month.
Broadcom jumped in with its Tomahawk chip series. The products target the same data center customers Cisco wants to capture.
Cisco positions Silicon One as the industry’s most scalable and programmable unified architecture. The platform covers AI, hyperscaler, data center, enterprise, and service provider applications.
The G300 will drive upcoming Cisco N9000 and Cisco 8000 systems. These products aim to redefine AI networking capabilities in data centers.
Lund stressed Cisco’s focus on total end-to-end network efficiency. The company looks beyond raw speed to overall system performance.
Networks connecting hundreds of thousands of AI chips face constant problems. Small issues can cascade into major slowdowns without proper management.
The G300’s automatic rerouting stops problems before they spread. The chip makes these decisions faster than human operators can react.
Tech giants are pouring cash into AI infrastructure at unprecedented rates. The $600 billion spending boom shows how critical these systems have become.
Cisco’s launch timing trails competitors already serving this market. Nvidia and Broadcom have products in customer data centers now.
The second half 2026 release gives Cisco time to perfect the technology. It also means playing catch-up while rivals potentially lock in customers.
Data traffic spikes happen regularly in large AI operations. The G300 handles these surges without breaking stride or dropping connections.
Cisco says the chip prevents network bottlenecks that plague large-scale AI deployments. This reliability matters for companies running critical AI workloads continuously.