
Data Center Liquid Cooling – Is It Time for an Upgrade?


As the demand for cloud services, big data analytics and AI computations grows, data centers are housing increasingly dense and powerful computing equipment. This trend has led to higher heat loads, making efficient cooling not only desirable but necessary. In some situations, traditional air-cooled systems, once the backbone of data center cooling, are now being supplemented and even replaced by data center liquid cooling solutions.

In this article, we explore how far cooling innovation has come and take a realistic look at today’s liquid cooling landscape. We’ll cut through the tech-news hype around liquid-cooled data centers: What are the options? What makes the technology special? Is it suitable for every data center? And is this technological shift inevitable? Let’s dive in.

Immersion cooling technology in a liquid-cooled data center

Why is Liquid Cooling Superior?

Liquid cooling is superior in data centers due to its higher thermal conductivity – liquids conduct heat up to 1,000 times better than air – allowing it to efficiently remove heat directly from high-power computing components. 

This direct heat removal leads to significantly lower operating temperatures, enhancing the performance and longevity of sensitive electronic equipment. Additionally, liquid cooling systems are more energy-efficient than traditional air cooling, reducing operational costs and creating a smaller carbon footprint.

Energy Savings

Another core benefit that liquid-cooled data centers enjoy is energy savings. In quantitative research conducted by NVIDIA and Vertiv, data centers that used liquid cooling systems reduced their total data center power consumption by 10.2%, including an 18.1% reduction in facility (non-IT) power. From a financial perspective, that translates to roughly $740,000 per year for a data center with a $7.4 million annual power bill.
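
As a sanity check on that figure, the arithmetic is straightforward (a minimal sketch using the study’s example numbers; the published savings figure is rounded):

    # Back-of-the-envelope check of the savings figure above (illustrative).
    annual_power_bill = 7_400_000  # USD per year, example data center
    reduction = 0.102              # 10.2% cut in total power consumption
    savings = annual_power_bill * reduction
    print(f"Estimated annual savings: ${savings:,.0f}")  # ~$754,800, cited as ~$740,000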

Types of Data Center Liquid Cooling Systems

There are many data center liquid cooling systems in use – some more complex than others – but these three are the most common today:

Direct-to-Chip Liquid Cooling

Direct-to-chip (D2C) cooling involves circulating a coolant directly over the heat-generating components, such as CPUs and GPUs. This method significantly increases cooling efficiency by removing heat directly at the source. D2C systems can use a variety of coolants, including water, dielectric fluids, or refrigerants, depending on the application’s needs and the desired cooling capacity.

Immersion Cooling

Immersion cooling takes liquid cooling a step further by submerging the entire server, or parts of it, in a non-conductive liquid. This technique is highly efficient as it ensures even and thorough heat absorption from all components. Immersion cooling is particularly beneficial for high-performance computing (HPC) and can dramatically reduce the space and energy required for cooling.

Rear-Door Heat Exchangers

Rear-door heat exchanger units are a hybrid solution, combining air and liquid cooling. These units are attached to the back of server racks, using a liquid-cooled coil to remove heat from the air exiting the servers. This method is often used as an intermediate step between full air cooling and full liquid cooling.

Close-up view of a direct-to-chip liquid cooling solution for a CPU in a data center

Data Center Liquid Cooling Cons

“If liquid cooling is so great, why haven’t we implemented it in every data center?” you may be asking yourself. The answer is simple: the technology hasn’t been perfected. There are still a number of cons that make this solution mainly an option for massive data centers that are willing, and can afford, to take the risk.

Higher Initial Setup Cost

Implementing liquid cooling in data centers requires a substantial initial investment. This includes the cost of the cooling system itself, such as pumps, pipes, and liquid handling units, and potential modifications to the existing infrastructure to accommodate these new components.

Complex Maintenance Requirements

Liquid cooling systems are far more complex to maintain than traditional air cooling systems. They require regular monitoring for leaks, proper handling of the cooling liquids, and maintenance of additional components like pumps and liquid distribution systems, all of which demand specialized skills and training (more initial expense). Moreover, immersion cooling setups for today’s denser servers can require crane-assisted lifting of equipment in and out of tanks – a massive infrastructure endeavor for data centers considering the shift.

Risk of Leaks and Liquid Damage

There is an inherent risk of leaks in any liquid cooling system, which can significantly damage expensive data center equipment. Ensuring leak-proof systems and having emergency response plans are essential, but they add to the operational complexity and costs.

Should Your Data Center Opt for Liquid Cooling Solutions?

Probably not. With current technology, upgrading to a fully liquid-cooled data center can be incredibly expensive, with many unknowns. Beyond the complexity and cost, there are currently no established standards for data centers to follow. However, we’re not saying that it’s a bad idea.

Liquid-cooled data centers have their place in the tech world, but mainly for operators ready to spend billions of dollars – the ones eager to be at the forefront of the industry and pave the way for better big data analytics, AI computations, and cloud services.

For edge computing and businesses requiring a more straightforward, more reliable solution – Modular Data Centers and All-in-One Data Center Cabinets can provide the same benefit without the hefty price tag. 

Are Liquid-Cooled Data Centers the Future?

Based on the current forecast, it looks like it. 

The global data center liquid cooling market is projected to grow from USD 2.6 billion in 2023 to USD 7.8 billion by 2028.

But is it for every data center operator? Not at the moment. 

In the future, as more innovations emerge, standards are created, and OEMs build more equipment designed for liquid cooling, liquid cooling will become a more dominant technology thanks to its efficiency and eco-friendliness. In the meantime, there are other ways you can increase airflow – contact us to find out more!

About the Author

Louis Chompff – Founder & Managing Director, AnD Cable Products
Louis established AnD Cable Products – Intelligently Designed Cable Management in 1989. Prior to this he enjoyed a 20+ year career with a leading global telecommunications company in a variety of senior data management positions. Louis is an enthusiastic inventor who designed, patented and brought to market his innovative Zero U cable management racks and Unitag cable labels, both of which have become industry-leading network cable management products. AnD Cable Products only offer products that are intelligently designed, increase efficiency, are durable and reliable, re-usable, easy to use or reduce equipment costs. He is the principal author of the Cable Management Blog, where you can find network cable management ideas, server rack cabling techniques and rack space saving tips, data center trends, latest innovations and more.
Visit https://andcable.com or shop online at https://andcable.com/shop/


Hot and Cold Aisle Containment in Data Centers


Data centers are often made up of hot and cold aisles, and the hot / cold aisle data center design is far from new. However, in the traditional setup, warm exhaust air from one aisle flows into the air intake of the next, reducing the overall efficiency of the data center. Preventing that recirculation is what hot and cold aisle containment is all about.

Balancing hot and cold aisles is more important than ever to running an efficient data center

As rack density increases, especially in edge data centers and hyperscale data centers, the need for efficiency increases. This is compounded by the growing number of green data centers, which may be generating their own energy using solar or other renewable resources.

How does containment work and how does it impact your data center?

Remote Monitoring and Temperature Control

Of course, before we get to containment itself, it’s a good reminder to revisit physical layer monitoring. To know how effective any containment effort is, it’s necessary to monitor temperatures. This is most often done with temperature indicating panels, three per rack at the top, middle, and bottom, so that intake temperatures can be monitored regularly.

Of course, someone entering the area to manually check temperatures is yet another disruption to airflow, so remote monitoring as part of physical network security is essential. This allows managers not only to monitor these temperatures, but also to receive alerts and take action if something goes wrong.
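
As a rough illustration of the kind of threshold logic such a monitoring system applies, here is a minimal sketch; the sensor-reading function, the rack ID, and the 80.6°F intake limit (roughly the commonly cited 27°C recommended maximum) are hypothetical placeholders, not a real monitoring API:

    # Minimal sketch of intake-temperature alerting (hypothetical sensor API).
    INTAKE_LIMIT_F = 80.6  # ~27C, a commonly cited recommended maximum

    def read_intake_temps(rack_id):
        # Placeholder: a real system would poll the top, middle, and bottom panels.
        return {"top": 81.3, "middle": 75.4, "bottom": 72.1}

    def check_rack(rack_id):
        for position, temp_f in read_intake_temps(rack_id).items():
            if temp_f > INTAKE_LIMIT_F:
                print(f"ALERT: rack {rack_id} {position} intake at {temp_f}F")

    check_rack("A-12")  # prints an alert for the 81.3F top sensor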

The A150 Remote Physical Layer Monitoring system tracks temperature among many other elements that reduce risk and increase efficiency in data centers

But the most important fact for this discussion is to know what temperatures are so that efficiency and the effectiveness of containment can be monitored.

What is Aisle Containment?

Aisle containment means isolating aisles by relative temperature. In practice, it means placing doors at the end of each aisle and adding panels, or barriers, from the top of the cabinets upward.

The more airtight this containment is, the more efficient cooling can be, and the easier it is to manage airflow. It’s pretty simple, but there are a couple of different approaches, each with its own pros and cons.

Hot vs. Cold Aisle Containment

There are two ways to manage aisle containment: hot and cold aisle containment. And they work exactly the way they sound.

  • Hot Aisle Containment: Hot aisles are contained, leaving the rest of the room at a more comfortable cool aisle temperature. It’s also easier to manage in many cases.
  • Cold Aisle Containment: Cold aisles are isolated or contained, which means the rest of the room stays at the warmer hot aisle temperature. This can make getting the right amount of airflow tricky due to pressure changes, but managed properly it can deliver the most uniform temperature air to servers.

Choosing the right type of aisle containment for your data center depends on your situation, but there are some differences between new data center construction and retrofitting an existing data center.

Retrofitting vs. New Data Center Construction

In the case of a new data center, most of the time hot aisle containment is the method of choice. This is easier to set up in a new data center, as that allows you to start with the type of containment you need, and to set up HVAC systems and sensors to accommodate that. 

This creates an easier environment for technicians to work in when necessary, and is overall a more efficient choice. However, things are different when it comes to existing data centers.

Existing data centers are easier to retrofit with cold aisle containment. While it requires some additional monitoring, the way cooling systems work means the process is simpler in a currently operating facility, avoiding expensive downtime for making moves and changes and installing containment.

That doesn’t mean that no new data center will be built with cold aisle containment. It simply means that hot aisle containment is the more frequent choice.

Partial Containment Solutions

When it comes to retrofitting, sometimes full aisle containment in either format is not possible. In those cases, partial containment is a solution. How is this achieved?

Often plastic strips can be used, similar to those you would go through walking into an industrial freezer or even certain restaurant kitchens. These can be hung at the end of aisles and from the tops of servers to the ceiling, just like other containment methods.

While not as effective, partial containment can be easy to retrofit and implement, and in some cases is about 75% as effective as full containment. For existing data centers looking for a quick and inexpensive efficiency solution, partial containment is a viable option. 

But containment is just a part of rack cooling solutions, and there are some new and exciting ones. 


WHITEPAPER – Optimizing Server Cabinet Rack Space to Maximize Efficiency and Reduce Costs


Smart optimization can help you increase rack space and realize significant equipment cost savings. Read our step-by-step guide that shows you how – and how much you could save.

  • How Much Rack Space You Could Save
  • How to Optimize for Maximum Efficiency
  • Savings for New and Retrofit Installations
  • Overall Cost and Space Savings Post-Optimization

The Addition of Liquid Cooling

Data center cooling has evolved from older, inefficient systems to more contemporary ones in a relatively short period of time. However, one thing that has been around for a while but is experiencing a boom in denser, modern data centers is liquid cooling. 

Why? Well, in most cases liquid cooling is more efficient than air cooling in data centers, and when the two are used in conjunction, generally the best results can be achieved. The larger data centers get, the more power they consume, the greater the push towards a blended approach to cooling that not only saves power and is better for the environment, but prolongs the life of equipment and saves space as well. 

But even with the addition of liquid cooling, it’s all about efficient use of rack space and the airflow around them. 

It’s All About Airflow

No matter what kind of aisle containment is used, and no matter how efficient the cooling system, maximizing rack space efficiency and airflow is vital to saving space, improving efficiency, and keeping things organized.

That’s why data centers choose ZeroU racks and cable management systems. They not only help avoid the spaghetti mess and all the cable issues that can arise from it, but also help maximize airflow and save significant rack space in any system.

Whether you are retrofitting a data center or engaged in new construction, we have the rack system that’s right for you. 

Contact AnD Cable Products today for all of your cable, rack, and physical network security needs. We’d love to start a conversation about the right solution for you. 



The Data Link Layer – How DAC and AOC Cables Can Work For You


As the need for data storage and speed increases, so has the need for hyperscale data centers – and for edge data centers as well. While large-scale centers serve companies like Amazon, Microsoft, and Google, other organizations are looking at smaller data centers closer to the end-user. In both cases, the data link layer of the data center is critical. Enter Direct Attach Copper (DAC) cables and Active Optical Cables (AOCs).

The data link layer of the data center is critical to ensuring your resources are used to their full potential

What is the data link layer? It’s the physical connection layer between servers, the one that ensures all the computing resources are used to their full potential. The speed and integrity of these connections can make a huge difference.

They include Direct Attach Copper (DAC) cables, Active Optical Cables (AOCs), and fiber optic cable assemblies connected into transceivers throughout the data center. How does each one work, and why are they so critical to installation, maintenance, and deployment?

The Need for Speed

There are two aspects to the need for speed: the need for speed in shorter cables between servers, and the need for speed over longer distances. Different kinds of cables work differently in each instance. 

For example, DACs are most often used over short distances, connecting units in the same server rack. They can be active or passive – active cables include signal-processing circuitry, while passive cables carry the signal with no powered components. In the case of a DAC, the cable is made of copper rather than fiber.


WHITEPAPER – Understanding Stranded and Solid Conductor Wiring in Modern Networks


An overview of the differences between stranded and solid conductor wiring, the properties of each and the best cable type to use in a variety of typical settings.

  • Types of Stranded and Solid Conductor Wiring
  • Comparison of Electrical Properties
  • Factors Impacting Attenuation / Insertion Loss
  • Choosing the Right Cable


AOCs usually connect devices within the same row, but they cover longer distances than their copper cousins. However, they do not work in End of Row (EOR) or Middle of Row (MOR) configurations where certain types of patch panels are used. They are usually provided in fixed lengths from a few meters long to more than 100 meters. AOCs are active and include transceivers, control chips, and modules.

Both are fast – similar in speed to fiber optic cables – but that speed can be compromised by cable damage or, in the case of DACs, electromagnetic interference. Both must be tested with a tool that can accept dual SFP/QSFP transceivers and generate and analyze traffic.

So how do you test them? Well, there are methods that include automation, but there are other factors to consider. 

Automation Matters

Speed drives us to DACs and AOCs in some cases, but they can become damaged in a variety of ways. This often happens not during installation, but in the shipping and handling before the cables even arrive at the data center. Sometimes it happens when they are stored and moved frequently.

So the first place to test them is before installation. This ensures they are working before they are put into service. Testing all cables at installation can be costly and time-consuming, but not testing early can be costlier later on.

The solution is rapid, automated testing: run a test pattern and compare the results to a Bit Error Rate (BER) threshold. DAC and AOC cables, including breakouts, usually have a BER rating on their datasheets, especially when they are meant to be used with devices implementing the RS-FEC algorithm.

The tests take only a minute per cable and produce reports that include a cable identifier, such as the serial number, clearly identifying any faulty equipment.
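
To illustrate the pass/fail logic of such a test, here is a minimal sketch; the bit counts and the 1e-12 threshold are illustrative assumptions – the real target comes from the cable’s datasheet:

    # Minimal sketch of a BER pass/fail check (illustrative numbers only).
    bits_sent = 1_000_000_000_000  # 1e12 bits of test pattern transmitted
    bit_errors = 3                 # errors counted by the test set
    ber_threshold = 1e-12          # assumed datasheet target; varies by cable

    ber = bit_errors / bits_sent
    result = "PASS" if ber <= ber_threshold else "FAIL"
    print(f"BER = {ber:.1e} -> {result}")  # BER = 3.0e-12 -> FAIL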

Proper Power Planning

What’s the other advantage of DACs and AOCs? Energy savings. Point-to-point high-speed cables draw less power and can save money, especially at scale. While DACs offer more dramatic numbers per cable, AOCs offer savings as well when multiple transceivers are replaced by cables.

They’re not ideal for every case in every data center, but where they can be used as a key part of deployment, they can provide significant energy savings.

Living on the Edge Deployment

The other argument for DAC and AOC deployment and testing at installation exists on the edge. More Edge deployments force centers to increase speed, security, and efficiency at the same time as they minimize latency.

Opting to wait and address connectivity issues during troubleshooting invites costly mistakes, including skipping troubleshooting steps in favor of speedy repairs that sometimes aren’t even necessary. Not only is this costly – cables can range from tens of dollars to thousands – but it can also lead to confusing labels and an increased probability of unplugging a live cable.

The fact that DACs and AOCs can be tested so quickly and easily at the time of installation is another great argument for their use in the data link layer. But no matter what cable configuration your data center uses, from point to point high-speed cables to other fiber and optical options, the management of that data link layer is critical to smooth data center operations.

Looking for High Speed Cables?

WD 25G SFP28 SFP+ DAC Cable - 25GBASE-CR, SFP28 to SFP28 Passive Direct Attach Copper, Twinax Cable

Ready to start optimizing your data link layer? Have questions about what cables might be right for you and your application? Whether you are deploying a brand new data center or making moves and changes, we’re here to help. Contact AnD Cable Products today for more information. We’re here to help every step of the way. 



Faster Polymer Plastic Cables? Not So Fast!


Just about a year ago a group from MIT demonstrated a polymer plastic cable the size of a human hair that could transmit data faster than copper – much faster. 

How fast? Well, they recorded speeds of more than 100 gigabits per second! So where is this new technology and where is it headed? Well, here are some answers for you.

MIT demonstrated a plastic polymer cable the size of a human hair. Photo: MIT, https://news.mit.edu/2021/data-transfer-system-silicon-0224

The Need for Speed

First, perhaps we need to qualify what this speed is, and why computers and data centers need it. 

The first big deal is that these cables act like copper – they can directly connect devices without the need to reformat data. While standard fiber cables are faster, they require a converter to change light signals to electrical signals at each end of the connection. 

Of course, there are a lot of immediate uses for faster cables like these, including in data centers. Artificial intelligence applications like self-driving cars, manufacturing, and countless other applications where data provided as close to “real-time” as possible makes a huge difference. 

But of course, as with all such applications, speed is not the only factor.

Distance

At the moment in a laboratory setting, these cables are only good for short distances, not long ones. That doesn’t mean researchers are not confident in the impact these cables can have. 

Think of a polymer plastic cable that is both durable and lightweight, and can transmit terabits of data over a meter or beyond. Theoretically, that is the possibility, with the idea that such cables could replace USB and even the faster USB-C cables.

Even at shorter lengths, such cables could be exceptionally useful for transferring data between more than one chip inside a device. The thinner fibers could be used to revolutionize these applications as well, making even smaller and more efficient devices possible. 

We Have the Power

The problem as it currently exists is that transferring data through copper cables consumes more and more power, to the point of diminishing returns, and such transfer generates heat – a lot of heat that must be dissipated and can actually cause damage to cables. 

The fiber optic alternative is not always compatible with silicon chips without the light to electronic transfer mentioned above. The idea behind polymer plastic is to save energy, generate less heat, and still allow for compact connections. 

If this is such a great idea, why is it not on the market yet?

From Laboratory To Market

To transfer such technology from the lab to the market takes a lot of work and requires some potential changes. First, the technology needs to be tested and perfected at a higher level. Since the concept has been established, other labs are now working on it as well, and this could be the fastest part of the process. 

But there is more:

  • New standards would have to be developed, established, and agreed upon through bodies like the IEEE
  • Potentially, new connectors would need to be created for these cables to interface with chips and other devices
  • The manufacture of new cables needs to be established at scale before they can become commonly used in any application
  • A supply chain, or the use of existing ones, must be established to get cables from the plant to the end-user

Does this sound like a lot? It is, but it has been done before. The question is, what do those who are building data centers – and would use these cables on a regular basis – think?

The Future is Now

“The need for speed has never been so great,” Bill Lambert, a data center engineer told us. “Ten years ago, no one would even have been talking about devices that would need this kind of speed. We would have told you we would never need that capacity.”

And he’s right. Many of the devices we now use every day, and the speeds they achieve, would have been unimaginable before, let alone the amount of data we use. But the more we look at the uses for real-time data, the faster we need to get that information from one place to another.

“It’s like the work from anywhere revolution,” he told us. “The last two years have totally changed what data transfer and speed look like, inside and outside of data centers. It’s a sure bet that the next few will revolutionize these ideas again.”

In an ever-changing field where speed and data matter more than ever, science has just begun to catch up with what we need. And we’re lucky enough to be a part of it. 

Have a question about updating the infrastructure in your current data center or want to learn more about building the infrastructure in a new one? Contact us here at AnD Cable Products. We have everything from the cable management you need to remote monitoring and more. 

We’re glad to be your partners going forward to tomorrow and beyond. 

Physical Layer Environment Network Security Monitoring and Control

A150 Physical Layer Environment Network Security Monitoring and Control System Brochure

Full visibility, network security and control of your physical layer environment. Monitor your entire hybrid cloud and IT infrastructure from a cloud-based, integrated dashboard:

  • Introducing the A150 System
  • A150 System Architecture – High-Level Overview
  • A150 System Features
  • System Controller Hardware and Specifications
  • Monitoring Controllers, Probes and Sensors



Extreme Ultraviolet (EUV) Lithography – Keeping Moore’s Law Alive


In 1975, looking at the next decade, a guy named Gordon Moore revised his earlier forecast that the number of transistors on a microchip would double every year, predicting instead that it would double every two years. Moore was not a prophet, nor a brilliant data analyst, but as his prediction held true, it later became known as a law.
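
Moore’s revised forecast is simple compound doubling, which a few lines of arithmetic make concrete (the starting count below is just an illustrative number):

    # Transistor count under a doubling-every-two-years forecast.
    def projected_count(initial_count, years):
        return initial_count * 2 ** (years / 2)

    # Starting from an illustrative 10,000 transistors, a decade later:
    print(f"{projected_count(10_000, 10):,.0f}")  # 320,000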

The law has become more of a guide, influencing the policies for research and development of the largest companies and chip manufacturers in the world. And it, and a new machine helping to keep Moore’s law alive, are what your iPhone and those robots from Boston Dynamics with the best dance moves have in common.


Let There Be Light

First, we must understand lithography, a printing technique adapted for making circuits. Technically defined, lithography is printing on a plane surface treated to repel the material being printed except where it is intended (or in the case of circuits, needed) to stick.

The use of light for this treating and etching process is common, but one machine, built by ASML, a Dutch company that has cornered the market for etching the tiniest nanoscopic features into microchips with light, is playing a huge role in keeping Moore’s law viable. 

ASML introduced the first extreme ultraviolet (EUV) lithography machines for mass production in 2017, after decades spent mastering the technique, and the machine needed for the process is, to put it mildly, massive and mind-blowing. It’s expensive too, with a sticker price of around $150 million. TSMC, Samsung, and Intel are initial customers.

Amazon Prime won’t be enough to get the massive machine delivered, unless you have 40 freight containers, three cargo planes, and 20 trucks on standby. So what’s the big deal with this machine, and why does it (and its future children) matter?

How it Works

Think of a machine the size of a bus with 2 kilometers of cabling and over 100,000 parts. Inside are a series of nano-mirrors polished to precision that literally project extremely focused ultraviolet light into future chips to etch features that are often just a few atoms wide. That’s right, atoms. 

This means chips with components smaller (and more durable in many ways) than they have ever been. Smaller chips that are just as powerful, nano-sensors that are just as sensitive or accurate in a fraction of the space they take up now, and more will enable chips to get tinier, lighter, and more powerful than ever before. 

The Moore’s Law Limit

How small can chips get? Some think that Moore’s law is reaching the point where it is no longer viable, for three key reasons:

  • Electrical leakage – As transistors get smaller, they at first become more efficient, but as they have reached nano-size, the transistor often can’t handle all of the electricity, and that means heat, and heat means potential damage to the transistor and maybe even the entire chip in some circumstances. Therefore, we can only decrease the size of a chip as we increase cooling power.
  • Heat – The electrical leakage and resulting heat mean that one of two things must be limited: the voltage or the number of transistors in a given chip, thus limiting the power. The technology of Extreme Ultraviolet Lithography may offer some help in this area, but that remains unknown.
  • Economics – The price of this machine is just one factor. As chips get hotter and need more cooling the cost of keeping a data center at a viable temperature goes up, and that cost must be passed on to someone, generally the consumer. And businesses also want to extend the life of their equipment, ensuring it lasts as long as possible. Faster equipment with a shorter lifespan may not be as appealing to the average buyer or data center manager.

What does all this mean when we break it down?

Well, the data center of tomorrow may be a fraction of the size of those we have today. Or it may be equally as large, but able to store and deliver data at rates we can’t even imagine. Equipment, servers, remote sensors, everything may keep shrinking, to a point. But there will be a point when Moore’s law will no longer be valid or achievable, and that day may come sooner rather than later.

Are you running the data center of today, but looking forward to the data center of tomorrow? Are you interested in the latest remote monitoring and cabling solutions? Contact us at AnD Cable Products. We’d love to talk about what tomorrow looks like, and how we can help you head the right direction today. 

WHITEPAPER – Optimizing Server Cabinet Rack Space to Maximize Efficiency and Reduce Costs


Smart optimization can help you increase rack space and realize significant equipment cost savings. Read our step-by-step guide that shows you how – and how much you could save.

  • How Much Rack Space You Could Save
  • How to Optimize for Maximum Efficiency
  • Savings for New and Retrofit Installations
  • Overall Cost and Space Savings Post-Optimization



Edge Data Centers – Space and the Final Frontier


Computing on the edge: it seems that everyone is doing it, from big industry to manufacturers, from ISPs to cloud computing centers. When you locate computing and analytics power closer to machines connected via the IoT and other data sources, you can gather data faster and store and analyze more of it in a timely manner. For some, edge data centers seem like the final frontier for data.


This has resulted in data centers that vary in size, from the size of a very large cabinet to those contained in the space of a small shipping container. But like any journey to the edge, there are challenges and risks. There are two primary ones we will address here:

  • Temperature – Because of the small spaces edge data centers often occupy, airflow and temperature control can be tricky.
  • Space – Smaller size also means that saving space is critical, and on the flip side, can also enable more airflow and indirect cooling in a confined area.

In this way, the two primary challenges are related, and often a solution that mitigates one will also help mitigate the other. Let’s take a quick look at each of these.

Controlling the Environment on the Edge

The temperatures that edge data centers operate at are critical. And there is a huge difference between the cooling we need for a building designed to keep people comfortable, and a building designed to serve machines. Think of it this way: if someone opens the door to your office, you may feel a blast of warm or cold air, depending on the time of year. Your discomfort disappears quickly when the door closes, as the HVAC system takes over, and brings air back into the broad temperature tolerances humans can endure.

However, what happens when you go to an edge data center and open the door? The answer is, it depends on where it is. Large, brick-and-mortar data centers can be located in areas with minimal environmental challenges and low risk of natural disasters. But edge data centers must be located, well, where they are needed. That means in dusty and dirty environments, areas with extreme temperature fluctuations, and more.

There are really only two choices:

  • Develop and deploy equipment designed to withstand extremes, at a higher price point. A good example is cellular equipment like that developed by AT&T. However, the cost of this equipment is too high for standard edge data center deployment at scale.
  • Work with existing, readily available equipment and use unique strategies to combat environmental changes at a small scale, including using tents or shrouds for entry and exit, using handheld temperature and humidity monitors to evaluate current conditions, and developing strategic plans for unexpected events.

Another part of the solution is to use remote monitoring, AI and the IoT in edge data centers to mitigate the need for human intervention. Monitoring the health of equipment and preventing disaster in the first place is one of the keys to efficient management of edge data centers.

This is but one of the challenges data center managers face. The second is the efficient use of available space.

Saving Space

While cooling and environmental control are critical, so is the efficient use of space. This can result in increased airflow and easier HVAC solutions while also enabling more servers to be installed in the same amount of space.

This involves a few key steps:

  • Rack Selection – Whether a data center uses 23” or 19” racks, there are rack solutions that take up less space and allow for better rack management options.
  • Cable Management – ZeroU horizontal cable managers make more room for servers in a single rack, and they prevent the “spaghetti mess” that can happen in server racks and be especially problematic in more compact edge data centers.
  • Compact Vertical Cable Management – 11U cable managers also save space and keep cables organized and easy to access should moves, changes, or repairs be needed.

Anything that can be done to save space in an edge data center makes facing the other challenges related to environmental control easier, but it also has another impact: an economic one. The less space you need to get the computing power you need, the more compact your data center can be. Alternatively, this can give you space to scale as needed without creating yet another data center space.

At the edge, there are always challenges, but there are also solutions. From controlling the environment in and around the data center to using the space in the most efficient way possible, with the right equipment, these obstacles can be transformed into opportunities to change not only how much data is collected and how quickly it can be acted upon, but where it happens as well.

Do you have questions about saving space in your edge data center? Are you looking for remote monitoring solutions? Then contact us here at AnD Cable. We’d love to start a conversation about how we can help you.



7 Considerations When Choosing Fiber Optic Cable


Fiber optic cable has become the go-to choice for a variety of applications by data center managers. The reasons are many, including advances in cable technology that make it an even better choice. But there are several things to consider when choosing fiber optic cable to ensure it’s the right fit for the application. Here are seven of the most important ones.

Jump to Section:

  1. Distance
  2. Interference
  3. Bandwidth
  4. Security
  5. Cable Size
  6. Cost
  7. Durability

Distance

One of the big advantages of fiber optic cable is the loss factor: fiber loses only about 3% of signal strength over 100 meters, compared to much greater losses with copper cables like CAT6. While copper may be a great choice for short distances, the longer the cable needs to be, the bigger the advantage of choosing fiber optic cable.

So the first factor to consider when choosing fiber optic cable is the distance the data must travel.
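
For context, cable loss is normally quoted in decibels, and a 3% power loss over 100 meters corresponds to a very small dB figure. A quick sketch of the conversion (the 3% figure is this article’s example, not a datasheet value):

    import math

    # Convert a fractional power loss to decibels (3% loss over 100 m).
    power_remaining = 0.97  # 97% of signal power left after 100 m
    loss_db = -10 * math.log10(power_remaining)
    print(f"Loss over 100 m: {loss_db:.3f} dB")  # ~0.132 dB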

Interference

Fiber is fully resistant to interference from various sources like power lines, lightning storms, and even deliberate scrambling and disruption. So while the first consideration is how far the data must travel, the second consideration is where the data may travel. In data centers, whether cables are managed by running overhead or the less common instance of running through underfloor spaces, there can be sources of interference in or near that path.

This is also true in edge data centers, where everything is more compact and closer together, and in modular data centers, where the right fiber cable can ensure that you can scale quickly and easily as needed. As we move toward colocation and hyperscaling, this becomes even more important.

Bandwidth

Data centers must be prepared for the future, and the bandwidth your cables can handle is a big part of that. For instance, the rise in the use of OM5 cables over OM3/4 especially in new builds is an indication that data centers are preparing for increased 5G and traffic from VR and AR applications.

This is essential to prepare for the coming 400G demands, especially in Edge data centers. As “work from home” or “work from anywhere” becomes the norm, even smaller residential data centers will be inundated with new traffic, as we saw through the COVID-19 pandemic. It seems that more companies are shifting to hybrid workforces, moving their corporate headquarters out of city center areas that are more expensive to rent, and even enabling partially or fully remote workforces.

Combine that with increases in “shopping from home” and multiple streaming devices, and speed and bandwidth are more important than ever.

Security

Of course, security is one of the top concerns for any data center. A single breach can put an entire company out of business, and result in serious issues if the data of thousands of customers is compromised. While most security issues are found in software and in the human factor (like compromised passwords) there is still a certain amount of risk in physical hardware.

However, fiber cables are difficult to compromise without the intrusion being detected. At the very least, using fiber cables – especially in areas where they could be physically compromised – is a vital part of an overall data center security plan. Choosing the right cable in the right place can make the difference between protecting your data center’s security and digital assets, and a potentially costly data breach.

Cable Size

Over time, thinner fiber cables that carry as much data as their larger counterparts have been developed, making it practical to use fiber nearly anywhere. These thinner cables can also be bent and routed easily, saving space in your cable management systems.

Thinner cables also contribute to higher airflow and more efficient cooling, another potential area of cost savings. Fiber cables can also be bundled, organized, and labeled easily, preventing the spaghetti mess that often accumulates at the rear of server racks. Of course, this can also be prevented by having a better cable management plan in place.

In short, consider the size of cable you are using in any given area, and weigh that with other factors like distance, interference, and bandwidth.

Cost

Above, we mentioned OM5 being the future of fiber cables, but their wide adoption will come as they are produced in various lengths and sizes on a larger scale. This is because at the moment, they are produced to custom specifications. However, as OM3/4 are still viable and compatible with OM5, you can update your data center in incremental stages, and still utilize the less expensive OM3/4 cables as needed.

You’ll want to weigh cost against performance. Yes, OM5 is the best way to prepare for the future, but that can be done in cost-effective stages as your data center changes and grows. Replacing cables during moves and changes, or during a new build, will save you money in the long run.

Durability

When it comes to durability, choosing fiber optic cable is easy, as it is an extremely durable cable for the most part. Still, it is important to evaluate where and how the cable will be used. Where bends happen, and in areas where there may be more moves and changes than normal, you will want the most durable cable for that application.

Fiber comes in different diameters and insulation levels, and so you should be sure to choose the right one for that particular application. Evaluate several ways you can improve cable use to increase efficiency and scalability.

When choosing fiber optic cable that’s the best fit in any given application, be sure to take all of these factors into consideration. Need more information? You can check out some of the great information on our blog and in our various white papers, but if you still have questions, reach out to us. We’d love to start a conversation about how we can meet your data center cabling needs at any scale.

Ultimate Data Center Cable Labeling System

Ultimate Cable Labeling System – Epson Labelworks PX Printers and AnD Cable Products UniTag Cable Labels
  • Maximum efficiency – our system bundle gives you everything you need, portable to where you need it
  • Industry-leading savings – both products have been designed to save you money, every time you use them
  • Clarity and transparency – printed labels ensure clarity and generous plastic cable label sizes allow for 3 lines of information
  • Positively impact uptime – combined, they make cable identification and troubleshooting quick and easy
  • Enable color cabled runs – UniTag color options align to ANSI/TIA 606-B Cable Labeling Standards
  • No-risk! Purchase with 100% confidence – both top-quality products are backed by a Lifetime Warranty
  • Download the Ultimate Cable Labeling System Brochure



How to Prevent Data Center Downtime

Data center downtime is no joke. It can literally make the difference between a data center surviving and failing. And a study by the Ponemon Institute shows that modern data centers and data centers at the edge are more susceptible to downtime than ever before, because data centers are more complex than they have ever been. Core data centers average 2.4 facility shutdowns per year, lasting around 138 minutes each – more than two hours! Edge computing data centers experience twice as many shutdowns, though they average half the duration of core data center outages.

In addition, it is helpful to remember that although total facility failures occur with the least frequency, individual server or rack failures can also be costly, especially in Edge data centers, where every piece of equipment has some critical function.

At the outset, it is also important to define core data centers and edge data centers. Edge data centers are usually about ⅓ the size of their counterparts, although the term edge does not refer to size. Edge refers to the data center’s location, generally closer to where the data is needed, to increase speed and response times and save bandwidth.


Conflicting Priorities

Data center managers are faced with decisions about efficiency, the transition to Net Zero carbon emissions, and avoiding redundancies whenever possible, but this also can leave them susceptible to downtime events if a problem occurs. This is illustrated by the causes of downtime: UPS battery failures, human error, equipment failures, other UPS equipment failures, and cyberattacks.

Respondents to the survey revealed that over half (54%) are not following best practices, and that risks of data center downtime are increased because of cost concerns.

The Cost of Downtime

While cost concerns often increase the risk of downtime, the actual cost of downtime can be much greater. According to a 2014 survey by Gartner, facility downtime cost an average of nearly $5,600 per minute, or between $140,000 and more than half a million dollars per hour depending on the organization’s size. These costs continue to rise, with more recent statistics from the Ponemon Institute survey mentioned above putting average costs at nearly $9,000 per minute.
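
The per-minute and per-hour figures line up with simple multiplication:

    # Downtime cost arithmetic from the surveys cited above.
    gartner_per_minute = 5_600   # USD, 2014 Gartner average
    ponemon_per_minute = 9_000   # USD, more recent Ponemon average
    print(f"Gartner: ${gartner_per_minute * 60:,} per hour")   # $336,000 per hour
    print(f"Ponemon: ${ponemon_per_minute * 60:,} per hour")   # $540,000 per hour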

It’s about more than just the monetary cost, though. The real cost comes in reputation and customer service. Data centers that suffer above-average downtime are much more likely to go bankrupt. Uptime is perhaps more critical than it has ever been, and customers remember problems far more easily than they remember reliable service over time.

So what do we do to prevent data center downtime?

How to Prevent Data Center Downtime

There are solutions to downtime issues, and many are known to data center managers. However, they are easier said than done. Here are a few of them:

  • Adopt best practices – The fact that most data centers know they are not following best practices reveals they know what to do; they are just not doing it.
  • Invest in new equipment – Equipment failures often stem from outdated equipment that is not up to the current needs of the data center. Replacing it is one of the easiest ways to reduce or eliminate downtime.
  • Improve your training – Be sure that all employees, both existing and new, are aware of best practices and what you expect of them on the job. Make training comprehensive and focus on outcomes and skills that build long-term success.
  • Improve your documentation – Your data center plans, including power, cabling, cable management plans, and others should be thoroughly documented and available to employees. If not, in the words of Captain Picard, “Make it so.”
  • Don’t fight redundancy – Redundancy is a good thing for the most part. You certainly don’t want to overdo it, but you do need to have contingency plans and equipment in case downtime does happen.

Of course, these solutions are simplified, and they are not always possible for data center managers to achieve with the resources they have available.

There is Room for Improvement

The takeaway from this data is twofold. First, data center downtime at these rates is unacceptable for most organizations. Second, there are solutions, and there is plenty of room for improvement. Among the solutions mentioned above, there are some critical elements.

  • Redundancy – This has been preached from the beginning for both core and edge data centers, yet half of data centers have issues in this area. As a result, there is a trend toward more redundant equipment, especially at the edge, as large and small operations seek to better manage data center downtime.
  • Remote monitoring systems and AI – The other advancement that seeks to solve the issue of human error and detect equipment issues before they become a problem is remote monitoring and AI. Machine learning can help data center managers fix issues before downtime occurs, and helps them respond faster when a problem does occur.

Simply adding these two things can take data centers a long way toward greater uptime and more reliable service. After all, this is the goal of both core and edge computing.

Whether you manage an existing data center or you are considering starting one from scratch, we here at AnD Cable Products are here for you. We can help you with everything from cable and rack management to labeling systems and remote monitoring. Have questions? Contact us today. We’d love to start a conversation about your specific needs.

Physical Layer Environment Network Security Monitoring and Control

A150 Physical Layer Environment Network Security Monitoring and Control System Brochure

Full visibility, network security and control of your physical layer environment. Monitor your entire hybrid cloud and IT infrastructure from a cloud-based, integrated dashboard:

  • Introducing the A150 System
  • A150 System Architecture – High-Level Overview
  • A150 System Features
  • System Controller Hardware and Specifications
  • Monitoring Controllers, Probes and Sensors

About the Author

Louis Chompff, Founder & Managing Director, AnD Cable Products
Louis established AnD Cable Products – Intelligently Designed Cable Management in 1989. Prior to this he enjoyed a 20+ year career with a leading global telecommunications company in a variety of senior data management positions. Louis is an enthusiastic inventor who designed, patented and brought to market his innovative Zero U cable management racks and Unitag cable labels, both of which have become industry-leading network cable management products. AnD Cable Products offers only products that are intelligently designed, increase efficiency, are durable, reliable and re-usable, are easy to use, and reduce equipment costs. He is the principal author of the Cable Management Blog, where you can find network cable management ideas, server rack cabling techniques and space saving tips, data center trends, latest innovations and more.
Visit https://andcable.com or shop online https://andcable.com/shop/

Posted on 1 Comment

Modular or Traditional Data Center Builds – Which is Better?

Modular or Traditional Data Center Build - Which is Better? Cable Management Blog

“The future of data centers is modular,” one popular website states. “The traditional data center is not dead,” says another. Who is right? What is the future of data centers? Is one better than the other, and if so, why?

Here are some pros and cons, and some potential answers data center managers might want to consider. 

Jump to Section:

  • Modular vs. Traditional Data Center PUE
  • Security
  • Modular vs. Traditional Real Estate
  • Building Costs
  • Deployment Speed
  • So which one is better?

Modular vs. Traditional Data Center PUE

One of the first things to compare between modular and traditional data centers is PUE, or Power Usage Effectiveness. Most of the time, modular data centers have a lower PUE. However, there is a cost associated with that number.

Traditional data center builds often have a higher PUE initially because there is space for expansion and additional equipment. This can mean higher HVAC and other costs until the data center is at capacity and running at maximum efficiency. We’ll talk about that factor more in a moment.

For modular data centers, because they are constructed with tight specifications and already at an efficient capacity per module, the PUE is lower from the start. All components are easily matched, and compact spaces are easier to control when it comes to cooling, humidity, and other factors.

What is the downside? When a brick-and-mortar data center is up and running at capacity and the design has been well executed, PUE levels can be similar, and it can be much simpler to make moves and changes without ordering additional modules and construction.
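
Since PUE comes up throughout this comparison, here is a minimal sketch of the calculation itself. The facility figures are hypothetical, purely to illustrate the gap described above:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by the
    power consumed by the IT equipment alone. 1.0 would be perfect."""
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a modular build running near capacity vs. a
# traditional build with room (and cooling overhead) still to grow into.
modular = pue(total_facility_kw=1300, it_equipment_kw=1000)      # 1.30
traditional = pue(total_facility_kw=1650, it_equipment_kw=1000)  # 1.65
print(f"Modular PUE: {modular:.2f} / Traditional PUE: {traditional:.2f}")
```

The closer the result is to 1.0, the less power is being spent on cooling, lighting and other overhead relative to the IT load itself.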

Security

As with PUE, there are two sides to this coin. Modular data centers can be easier to secure, as they are more compact and self-contained. When installed behind a secure barrier with video and other surveillance measures, the physical security of a modular data center can be assured.

The flip side? Modular data centers may evolve and require additions over time, meaning the physical space will also have to be modified. Proper planning can mitigate this issue, but a traditional data center build can be easier to manage from this perspective, with security built into the construction itself, along with remote monitoring and other security features that must be handled differently with modular data centers. 

The argument over which is better can go either way, but the permanence of a traditional data center build often wins out when it comes to security discussions.

Modular vs. Traditional Real Estate

When locating a data center, we have talked about things like access to a green power grid, the ability to construct your own green energy backups, and more. Real estate that satisfies all of those requirements can be hard to find, and prices reflect that premium.

So in this case, the more compact modular data center build has some distinct advantages. The less real estate you need, the lower the initial cost to purchase (or lease) space for the data center. This also affects another factor: the cost of the build.

Building Costs

Constructing a modular data center is much cheaper than constructing a traditional one – typically around 30% less. That is a huge number when you are talking about initial costs. Combined with the lower cost of real estate, deploying a modular data center is much more cost-efficient for those looking toward hyperscaling and co-location.

This has been shown to be especially true as more “work from anywhere” options become available, and the need for high-speed data center capacity shifts from city centers and similar areas to residential and suburban ones.  

This leads us to our next advantage:

Deployment Speed

The time needed to construct a traditional data center is much greater than that needed to deploy a modular one. The average data center takes 18-24 months from start to finish, but you can save around 30% of that time by going the modular route – roughly 13-17 months instead.

In part, this is because you avoid traditional construction delays due to bad weather, seasonal construction, and more.  

This is not to say modular deployment is better – it is simply faster. It could be argued that a traditional build will last longer and the overall construction will be of higher quality, but that is not always the case. Many modular data centers are created with a similar lifespan in mind and can last just as long as a traditional build.

So which one is better?

The bottom line on which is better – a modular or a traditional data center build – is that it depends. Ask these questions:

  • How urgent is the need for this data center? 
  • Where will the data center be located? What are the costs associated with a larger holding of real estate?
  • What is the purpose of this data center? Is the need for moves and changes anticipated?
  • What kind of security is needed, and what is possible in the data center location?
  • What is the long-term plan for this data center?

The answers in your situation may vary, but as much as the traditional data center is not dead, the modular data center is on the rise, and for many situations, it’s the best option. 

No matter whether you are creating a modular data center or doing a traditional build, your rack and cable management matter, as do your labeling system and your physical layer security. At AnD Cable Products, we can help with all of these things. Give us a call today, tell us about your situation, and we’d be happy to have a conversation about how we can help.

We’re here for all of your data center needs. 


Posted on 1 Comment

4 Steps to Prepare Your Data Center for Net Zero Carbon Emissions

4 Steps to Prepare Your Data Center for Net Zero Carbon Emissions - AnD Cable Management Blog

The race to net zero carbon emissions is on – our economy and our world depend on it. The data center industry, one that tends to gobble up lots of power, is at the forefront of a number of initiatives being implemented around the world. How will you prepare your data center for net zero carbon emissions? Here are 4 steps that will get you started.

Jump to Section:

  • The Situation Now
  • Step One – Make a Commitment
  • Step Two – Use Sustainable Energy Sources
  • Step Three – Operate at Peak Efficiency
  • Step Four – Build-in Resilience and Agility

The Situation Now

Data centers first became the focus of Greenpeace and other groups during the early-to-mid 2010s boom. The focus at that time was on enterprise-level data centers – the big guys, in other words. The fact that data centers used lots of power became evident, and so the move toward minimizing that impact grew in both urgency and popularity.

So much so that the position of Chief Sustainability Officer (CSO) grew to overtake the emerging position of Chief Security Officer. Those in charge of cybersecurity ended up changing their title to CISO (Chief Information Security Officer) because CSO had already become widely recognized.

Since then, smaller edge computing data centers have become the new focus. With COVID hastening the transition, today, cloud computing, AI, remote monitoring and other data center management trends have now established a level of control and sustainability not previously thought possible.

Thanks in part to these technological developments, net zero carbon emissions now feels more achievable and less like the plot-line in a futuristic sci-fi flick. So, in what areas of the data center can emissions be reduced?

Step One – Make a Commitment

There are several individual measures that promote sustainability. The key is to take all of those individual components and standards and work them into an ecosystem that supports your emissions goal.

Companies and countries alike are making a commitment to reducing carbon emissions as a part of their brand. However, words are not enough, and these companies – including the largest hyperscaling data center companies in the world – are taking action. Advances are happening quickly in the area of artificial intelligence (AI), remote management and data center design. Being on the leading edge of these developments shows that your data center is part of this commitment to a “greener cloud infrastructure.”

Like most strategies, it’s only once a firm commitment has been made at the top of the organization that the necessary actions can be taken – giving leaders the authority to make decisions that align with the goal, and ensuring that resources, responsibilities and accountabilities are allocated.

Step Two – Use Sustainable Energy Sources

Solar, wind and even hydroelectric power are all sustainable sources of power that can make dependence on coal and other carbon-intense fuels a thing of the past. Companies like Tesla and Microsoft are testing and deploying battery technology that can run data centers longer than ever before, even with no sun or wind available.

This means only using the local grid as your primary power source if sustainable energy is available 24/7. Otherwise, the data center will need to provide at least some sustainable sources of its own, like a solar or wind farm designed to directly support it.

Because this is expensive, only the largest, hyperscale companies with large data centers can be 100% self-sufficient. Hybrid solutions could help to bridge the gap, such as supplementing local power supplies with solar and wind on site. Selecting a site that’s close to a local, sustainable power grid should be a factor in choosing where to locate your data center and will support the goal of net zero carbon emissions.

Step Three – Operate at Peak Efficiency

For data centers, not only should your power be sourced responsibly, but your facility also needs to operate efficiently. One way to reduce carbon dependency is simply to use less power. Strategies can include initiatives such as proactive device monitoring to identify ‘zombie servers’ – machines that contribute little to performance but still consume significant power to keep running. A simple sketch of this kind of check follows below.
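
As an illustration, here is a minimal sketch of how such a zombie-server check might work. The fleet data, field names and thresholds are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    avg_cpu_pct: float   # average CPU utilization over the last 30 days
    avg_power_w: float   # average power draw over the same period

def find_zombies(servers, cpu_threshold=5.0, power_threshold=100.0):
    """Flag servers doing almost no work yet still drawing real power."""
    return [s.name for s in servers
            if s.avg_cpu_pct < cpu_threshold and s.avg_power_w > power_threshold]

fleet = [
    Server("web-01", avg_cpu_pct=42.0, avg_power_w=310.0),
    Server("legacy-07", avg_cpu_pct=1.2, avg_power_w=180.0),  # likely zombie
    Server("batch-03", avg_cpu_pct=18.5, avg_power_w=240.0),
]
print(find_zombies(fleet))  # -> ['legacy-07']
```

In practice the candidates flagged this way still need human review – some low-utilization machines exist for failover or compliance reasons – but the principle of comparing work done against power consumed is the heart of the technique.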

The efficient and responsible use of power is covered by United Nations Sustainable Development Goal 12: Responsible Consumption and Production. Some other standards and metrics include:

  • PUE (Power Usage Effectiveness) – Determined by dividing the total power coming into the data center by the power used by the IT infrastructure alone.
  • LEED (Leadership in Energy & Environmental Design) – A green building certification program that rates building design and construction.
  • PAR4 – A new form of power measurement that accurately measures IT equipment power usage to help optimally plan for capacity.
  • ASHRAE (The American Society of Heating, Refrigerating and Air-Conditioning Engineers) – Standards for the temperature and humidity operating ranges for IT equipment and data centers.
  • CCF (Cooling Capacity Factor) – A metric used to evaluate rated cooling capacity against how that capacity is actually used – see the sketch after this list.
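
To make the CCF idea concrete, here is a minimal sketch based on one commonly used formulation – rated cooling capacity divided by the IT load plus a small allowance for ancillary loads. The capacities, load figures and thresholds below are hypothetical:

```python
def cooling_capacity_factor(rated_cooling_kw: float, it_load_kw: float,
                            ancillary_factor: float = 1.1) -> float:
    """One common CCF formulation: total rated capacity of the running
    cooling units divided by the IT load plus ~10% for ancillary loads
    such as lighting. A CCF well above ~1.2 suggests stranded capacity."""
    return rated_cooling_kw / (it_load_kw * ancillary_factor)

# Hypothetical room: 800 kW of running rated cooling for a 400 kW IT load
print(f"CCF: {cooling_capacity_factor(800, 400):.2f}")  # -> CCF: 1.82
```

Broadly, a result close to 1.0-1.2 suggests cooling is well matched to the load, while a much higher figure points to cooling capacity that consumes power without doing useful work.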

These standards are just a few of those used to rate the efficiency of a data center and are designed to help data centers move toward net zero through more effective use of the power they have available.

Step Four – Build-in Resilience and Agility

The real job of a data center is uptime. Yes, customers want a green data center that is moving toward, if not already achieving, net zero emissions. However, at the same time, they expect that there will be no reduction in service. They expect full uptime, speed, and data protection.

This means that systems must not only be green, but must be reliable and include redundancies, power backups, and other protections, including cybersecurity and physical layer security to protect both customer assets and their data.

The good news is that not only is clean energy better for the environment, it is also, in many cases, more reliable – allowing data centers to keep uptime near the 99.999% standard (see the sketch below for what that figure allows). This is a balance that sustainable data centers must constantly monitor, adjust to and plan for.
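
For context on what “99.999%” actually permits, here is a minimal sketch converting availability percentages into a yearly downtime budget:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_budget_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year permitted at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for nines in (99.9, 99.99, 99.999):
    print(f"{nines}% uptime -> {downtime_budget_minutes(nines):.1f} min/year")
# 99.9%   -> ~525.6 min/year (about 8.8 hours)
# 99.99%  -> ~52.6 min/year
# 99.999% -> ~5.3 min/year
```

In other words, “five nines” leaves barely five minutes of slack per year – which is why redundancy and resilience have to be designed in from the start, not bolted on later.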

Net zero carbon emissions is the standard of the future, and your data center can prepare now. Use clean energy, and plan to scale with that energy use in mind. Use the energy you have efficiently, and plan for resiliency as part of your transition strategy. It’s what clients, customers and the world deserve.

Have questions about optimizing your physical layer, monitoring and remote control or ways to use your floor space efficiently? Contact Us at AnD Cable Products. We’re here to help.
