Cisco Unveils P200 Chip to Link AI Data Centers Across Vast Distances

Cisco Systems has rolled out a new networking chip — dubbed the P200 — designed to interconnect distant AI data centers and enable them to function as unified computing clusters. The move signals Cisco’s bet on the infrastructure demands of large-scale artificial intelligence operations.

The P200 chip will power a newly introduced router optimized for bridging data centers separated by hundreds or even thousands of miles. Cisco says the solution replaces what previously required 92 separate chips and drives energy savings: the new router consumes 65% less power than comparable systems.

From Many Chips to One, Across Distance

In modern AI deployments, training massive models can strain the capabilities of a single data center. To overcome computing, cooling, and power constraints, firms often spread infrastructure across multiple sites. Cisco’s P200 chip is built to make those distributed centers operate like a single, synchronized entity.

“Now we’re saying, ‘the training job is so large, I need multiple data centers to connect together,’” explained Martin Lund, executive vice president of Cisco’s Common Hardware Group. “And they can be 1,000 miles apart.”

Because AI workloads generate massive bursts of traffic, keeping data synchronized across far-flung centers without loss demands sophisticated buffering — a capability Cisco says it has been refining for decades.

Power Availability, Expansion, and Geography

One reason data centers are geographically dispersed is power availability. AI systems demand tremendous electricity, and firms are locating centers where they can secure abundant, affordable energy. That trend has led major tech players like Oracle, OpenAI, and Meta to set up operations in regions like Texas and Louisiana.

By siting facilities in power-rich locales, AI companies can scale capacity, but they also create the challenge of making those remote sites work together. Cisco’s P200 approach aims to close that gap.

Microsoft and Alibaba are among the first customers announced for Cisco’s new solution. The P200-based router is intended to help those tech giants better orchestrate AI workloads across multiple clouds or data regions.

Competitive Landscape & Strategic Impact

Cisco’s P200 is expected to compete with offerings from Broadcom, a major player in networking and data center components.

While Cisco has not disclosed the investment behind the P200 initiative or projected sales, the firm is placing a strong bet on the increasing architectural complexity of AI infrastructure.

Microsoft’s Azure Networking group praised the development, saying: “We’re pleased to see the P200 providing innovation and more options in this space,” referring to its advanced buffering and synchronization capabilities.

As AI models grow larger and more computationally demanding, linking data centers efficiently becomes ever more critical. With the P200 chip, Cisco is positioning itself at the forefront of that challenge — aiming to power the backbone of future AI systems at scale.