
Optimizing Ethernet in Data Center Networks


Demand for faster data transfer, and more of it, has exploded over the last decade. Even before the pandemic, growth was exponential, and with the work-from-anywhere trend and more people gaming and streaming from home, demand rose even further.

With it came an explosion in innovation, and a necessary one. Data Center Interconnect (DCI) Ethernet speeds have increased from 100 Gb applications to 400 Gb and beyond. Server speeds have gone from 10 Gb to 25 Gb and beyond, with 100 Gb on the horizon and already in place in some data centers.

The result is that data centers are now frequently operating like edge computing networks. Here is how it works. 

Ethernet cable speeds have increased from 100 Gb applications to 400 Gb and beyond

Optimizing Ethernet in Data Centers

There are four factors in optimizing data center Ethernet use: speed, power, reach, and latency. Speed is already being enhanced by better and more modern cable designs. For the other three areas, there is still work to be done.

Power

When it comes to power, many data centers have gone green, with their own renewable energy sources. In most cases, they have access to all the power they need. The key is to use it as efficiently as possible. Greater power density also raises design questions, including hot and cold aisle choices and more.

Reach

Data center architecture must take a holistic approach, whether you are starting from scratch with a new data center or making moves and changes to update current infrastructure. For everything from switches and routers to transceivers and overall physical design, reach must be weighed against efficiency and cost.

Latency

Finally, latency shapes the end user experience. For gaming or video conferencing, low latency is the expectation; for internet searches it is less critical, but it can still be an issue for users. As speed increases and fast becomes the norm, latency expectations rise with it.

These factors are critical to how Ethernet is used in data centers, but they are far from the only considerations.


Infrastructure Processing Units

How we manage this need for speed is changing on the hardware and software side as well. Infrastructure Processing Units (IPUs) run Software-Defined Networking (SDN) workloads away from the server's cores. This saves critical server bandwidth and compute, but it comes with an additional load and cost of its own.
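
As a rough illustration of that trade-off, the toy Python sketch below models how many host cores remain available for application workloads when SDN processing moves off the server. The core counts and the idea of dedicating a fixed number of cores to SDN are hypothetical assumptions for illustration, not figures from any specific IPU or server.

```python
# Illustrative only: a toy model of the offload trade-off described above.
# Core counts below are hypothetical assumptions, not measurements from
# any specific IPU or server platform.

def host_cores_free(total_cores: int,
                    sdn_cores_on_host: int,
                    ipu_handles_sdn: bool) -> int:
    """Return how many host cores remain for application workloads."""
    if ipu_handles_sdn:
        # SDN/packet processing runs on the IPU, so the host keeps all its
        # cores, at the cost of the IPU's own power and management overhead.
        return total_cores
    return total_cores - sdn_cores_on_host

# Example: a 64-core server that would otherwise dedicate 8 cores to SDN.
print(host_cores_free(64, 8, ipu_handles_sdn=False))  # 56 cores for apps
print(host_cores_free(64, 8, ipu_handles_sdn=True))   # 64 cores for apps
```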

As these advances develop, the demand for new and better Ethernet cables arises. And as Ethernet cables advance, IPU hardware and software applications evolve as well. Each improves in sync with the other. It's a developing relationship, but one data center managers must take advantage of.

Edge Computing Centers 

One solution to speed is to move the data center closer to the end user. This has been a developing trend: increasingly, data centers are expanding into distributed models where the interconnections between resources drive both power and speed, reducing latency and creating a better overall experience for the end user.
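
To put rough numbers on why proximity helps, the sketch below estimates round-trip propagation delay over optical fiber at a few distances. The ~5 microseconds per kilometer figure is a common rule of thumb for light in glass, and the distances are hypothetical examples, not specific sites.

```python
# A rough illustration of why edge proximity reduces latency: one-way
# propagation delay in optical fiber is roughly 5 microseconds per km
# (light travels at about two-thirds of c in glass). Switching and
# queuing delays are ignored here.

US_PER_KM_FIBER = 5.0  # approximate one-way propagation delay, microseconds

def round_trip_ms(distance_km: float) -> float:
    """Approximate round-trip propagation delay in milliseconds."""
    return 2 * distance_km * US_PER_KM_FIBER / 1000.0

for km in (1200, 100, 10):  # regional hub vs. metro edge vs. on-prem edge
    print(f"{km:>5} km -> ~{round_trip_ms(km):.2f} ms RTT (propagation only)")
```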

This comes with challenges. As edge computing rapidly becomes the norm, the latency KPI gets lower and lower. Low latency is key, and DCI applications in particular are critical to meeting new standards. Ethernet connections are a vital part of this change and growth.

The Need for Speed

What's needed to make all of this work? The first requirement is optical transceivers, which allow data centers to reduce the power they use while increasing bit rates at the same time. This allows for faster leaf-spine connections, a critical component in any data center, but especially those that are hyperscaling.
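
As a back-of-the-envelope illustration of how leaf-spine link speeds are sized, the sketch below computes the oversubscription ratio of a single leaf switch. The port counts and speeds (25 Gb server ports, 400 Gb uplinks) are assumed for illustration only, not taken from any particular deployment.

```python
# A minimal sketch of leaf-spine capacity planning. Port counts and speeds
# below are illustrative assumptions.

def oversubscription_ratio(server_ports: int, server_speed_gb: float,
                           uplinks: int, uplink_speed_gb: float) -> float:
    """Downlink bandwidth divided by uplink bandwidth for one leaf switch."""
    downlink = server_ports * server_speed_gb
    uplink = uplinks * uplink_speed_gb
    return downlink / uplink

# 48 servers at 25 Gb each, with 4 x 400 Gb uplinks to the spine.
ratio = oversubscription_ratio(48, 25, 4, 400)
print(f"Oversubscription: {ratio:.2f}:1")  # 0.75:1 -- non-blocking headroom
```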

This does not come without challenges, as not all Ethernet cables are created equal, and interoperability can become an issue.

To help with this, high-speed breakout cables are often used. These cables have one end that supports the aggregate rate, while the other end provides a series of disaggregated interfaces. With their speed come performance challenges, especially over distance. However, there has been rapid development in this area.
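
As a simple illustration of the idea, the hypothetical sketch below maps one 400 Gb aggregate port onto four 100 Gb interfaces. The "Ethernet1/1" style interface naming is an assumption for the example, not any particular vendor's convention.

```python
# A hedged sketch of how a breakout cable maps one aggregate port onto
# several lower-speed interfaces, e.g. 400 Gb split into 4 x 100 Gb.
# Interface naming here is an illustrative assumption.

def breakout(aggregate_gb: int, lanes: int, parent_port: str) -> list[str]:
    """Return the disaggregated interface names with their per-lane speed."""
    lane_speed = aggregate_gb // lanes
    return [f"{parent_port}/{i} ({lane_speed} Gb)" for i in range(1, lanes + 1)]

# One 400 Gb switch port feeding four 100 Gb server-facing links.
for iface in breakout(400, 4, "Ethernet1/1"):
    print(iface)
# Ethernet1/1/1 (100 Gb) ... Ethernet1/1/4 (100 Gb)
```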

The New Normal

As 400 Gb speeds become the norm and data centers increasingly move to the edge, there are many advantages. Distributed networks mean easier disaster recovery and backup planning, and they create the ability to use shared resources to meet shifting demands.

However, this creates some challenges with testing and maintaining KPIs. Interoperability remains a key component of successful deployments. 

At AnD Cable Products, we understand these challenges. We offer everything your data center needs, from Zero U rack solutions to every type and style of cable, and we can customize cables for your application along with a variety of other hardware solutions. When you are ready to upgrade your cables, make moves and changes, or even deploy a new data center or edge computing center, contact us. We'd love to be your partner in innovation.

About the Author

Louis Chompff – Founder & Managing Director, AnD Cable Products
Louis established AnD Cable Products – Intelligently Designed Cable Management in 1989. Prior to this he enjoyed a 20+ year career with a leading global telecommunications company in a variety of senior data management positions. Louis is an enthusiastic inventor who designed, patented and brought to market his innovative Zero U cable management racks and Unitag cable labels, both of which have become industry-leading network cable management products. AnD Cable Products only offer products that are intelligently designed, increase efficiency, are durable and reliable, re-usable, easy to use or reduce equipment costs. He is the principal author of the Cable Management Blog, where you can find network cable management ideas, server rack cabling techniques and rack space saving tips, data center trends, latest innovations and more.
Visit https://andcable.com or shop online at https://andcable.com/shop/

