
Data Center Liquid Cooling – Is It Time for an Upgrade?


As the demand for cloud services, big data analytics and AI computations grows, data centers are housing increasingly dense and powerful computing equipment. This trend has led to higher heat loads, making efficient cooling not only desirable but necessary. In some situations, traditional air-cooled systems, once the backbone of data center cooling, are now being supplemented and even replaced by data center liquid cooling solutions.

In this article, we explore how far cooling innovations have come and uncover the reality of today’s liquid cooling landscape. We’ll break down the tech-news hype around liquid-cooled data centers: What are the options? What makes them special? Are they suitable for every data center? And is this technological shift inevitable? Let’s dive in.


Why is Liquid Cooling Superior?

Liquid cooling is superior in data centers due to its higher thermal conductivity – liquids conduct heat up to 1,000 times better than air – allowing it to efficiently remove heat directly from high-power computing components. 

This direct heat removal leads to significantly lower operational temperatures, enhancing the performance and longevity of sensitive electronic equipment. Additionally, liquid cooling systems are more energy-efficient than traditional air cooling, reducing operational costs and creating a smaller carbon footprint.

Energy Savings

Another core benefit that liquid-cooled data centers enjoy is energy savings. In quantitative research conducted by NVIDIA and Vertiv, data centers that adopted liquid cooling reduced their total data center power consumption by 10.2%, including an 18.1% reduction in facility power. From a financial perspective, for a power-hungry data center spending $7.4 million on power annually, that reduction works out to roughly $740,000 in yearly savings.
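As a rough sanity check on the figures quoted above (treating the $7.4 million annual bill as the baseline, an assumption on our part), the savings arithmetic is straightforward:

```python
# Back-of-the-envelope check of the NVIDIA/Vertiv figures quoted above.
# Assumption: the 10.2% total-power reduction applies to a baseline
# $7.4 million annual power bill.
annual_power_bill = 7_400_000    # USD per year, baseline (air-cooled)
total_power_reduction = 0.102    # 10.2% reduction in total power

annual_savings = annual_power_bill * total_power_reduction
print(f"Estimated annual savings: ${annual_savings:,.0f}")
# Prints roughly $754,800 – broadly consistent with the ~$740,000 figure.
```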

Types of Data Center Liquid Cooling Systems

There are many data center liquid cooling systems in place – some more complex than others. However, these three are the most dominant ones in use today:

Direct-to-Chip Liquid Cooling

Direct-to-chip (D2C) cooling involves circulating a coolant directly over the heat-generating components, such as CPUs and GPUs. This method significantly increases cooling efficiency by removing heat directly at the source. D2C systems can use a variety of coolants, including water, dielectric fluids, or refrigerants, depending on the application’s needs and the desired cooling capacity.

Immersion Cooling

Immersion cooling takes liquid cooling a step further by submerging the entire server, or parts of it, in a non-conductive liquid. This technique is highly efficient as it ensures even and thorough heat absorption from all components. Immersion cooling is particularly beneficial for high-performance computing (HPC) and can dramatically reduce the space and energy required for cooling.

Rear-Door Heat Exchangers

Rear-door heat exchanger units are a hybrid solution, combining air and liquid cooling. These units are attached to the back of server racks, using a liquid-cooled coil to remove heat from the air exiting the servers. This method is often used as an intermediate step for facilities transitioning from air cooling to full liquid cooling.


Data Center Liquid Cooling Cons

“If liquid cooling is so great, why haven’t we implemented it in every data center?” you may be asking yourself. The answer is simple: we haven’t perfected the technology. There are still a number of cons that make this solution more of an option for massive data centers that are willing to take the risk and can afford to.

Higher Initial Setup Cost

Implementing liquid cooling in data centers requires a substantial initial investment. This includes the cost of the cooling system itself, such as pumps, pipes, and liquid handling units, and potential modifications to the existing infrastructure to accommodate these new components.

Complex Maintenance Requirements

Liquid cooling systems are considerably more complex to maintain than traditional air cooling systems. They require regular monitoring for leaks, proper handling of the cooling liquids, and maintenance of additional components like pumps and liquid distribution systems, necessitating specialized skills and training (more initial expense). Moreover, the denser, heavier servers in modern deployments can require crane assistance for immersion cooling setups, which can be a massive infrastructure endeavor for data centers considering the shift.

Risk of Leaks and Liquid Damage

There is an inherent risk of leaks in any liquid cooling system, which can significantly damage expensive data center equipment. Ensuring leak-proof systems and having emergency response plans are essential, but they add to the operational complexity and costs.

Should Your Data Center Opt for Liquid Cooling Solutions?

Probably not. With current technology, upgrading to a fully liquid-cooled data center can be incredibly expensive, with many unknowns. Beyond the complexity and cost, there are currently no established standards for data centers to follow. However, we’re not saying that it’s a bad idea.

Liquid-cooled data centers have their place in the tech world, but mainly for operators ready to shell out billions of dollars – the ones eager to be at the forefront of the industry and pave the way for better big data analytics, AI computations, and cloud services.

For edge computing and businesses requiring a simpler, more reliable solution, Modular Data Centers and All-in-One Data Center Cabinets can provide the same benefit without the hefty price tag.

Are Liquid-Cooled Data Centers the Future?

Based on the current forecast, it looks like it. 

The global data center liquid cooling market is projected to grow from USD 2.6 billion in 2023 to USD 7.8 billion by 2028.

But is it for every data center operator? Not at the moment. 

In the future, as more innovations emerge, standards are created, and OEMs build more liquid-cooling-ready equipment, liquid cooling will become a more dominant cooling technology thanks to its efficiency and eco-friendliness. In the meantime, there are other ways you can increase airflow – contact us to find out more!

About the Author

Louis Chompff – Founder & Managing Director, AnD Cable Products
Louis established AnD Cable Products – Intelligently Designed Cable Management in 1989. Prior to this he enjoyed a 20+ year career with a leading global telecommunications company in a variety of senior data management positions. Louis is an enthusiastic inventor who designed, patented and brought to market his innovative Zero U cable management racks and Unitag cable labels, both of which have become industry-leading network cable management products. AnD Cable Products only offer products that are intelligently designed, increase efficiency, are durable and reliable, re-usable, easy to use or reduce equipment costs. He is the principal author of the Cable Management Blog, where you can find network cable management ideas, server rack cabling techniques and rack space saving tips, data center trends, latest innovations and more.
Visit https://andcable.com or shop online at https://andcable.com/shop/


Edge Computing – A Contrast to Colocation


Edge computing is an innovative strategy that moves data storage and processing closer to users and data sources. Colocation, in contrast, houses equipment in a third party’s centralized facility, where clients share space and resources. Although location may appear to be what differentiates the two, many other distinctions make them suitable for different uses and needs.

In 2025, the world’s data creation is forecast to hit a new record high of over 180 zettabytes. This will inevitably increase the demand for low-latency, high-bandwidth applications, and it has paved the way for new and improved data processing paradigms like edge colocation. As the name implies, edge colocation combines the best of edge computing and colocation to address the drawbacks of both and provide a better, more convenient solution to customer needs.


What Is Edge Computing

Edge computing is a method that places data processing and storage at the network’s “edge,” closer to both data sources and users. After years of relying on huge rooms serving as centralized data centers, edge computing decentralizes processing across multiple edge nodes or devices, creating local networks and servers. It has since become a vital modern technology, providing numerous functions and solutions to a wide range of users.

Because it reduces the distance between data sources and users, edge computing has a faster response time, less bandwidth consumption, better security, and many other benefits. Since it processes data on-site, it’s highly reliable, provides real-time data, works efficiently on on-demand applications, and more.

Read Specific Use Cases for Edge Computing

Advantages of Edge Computing

Edge computing offers several benefits that make it an attractive and valuable approach in today’s digital landscape:

Reduced Latency

Edge computing reduces data transmission latency to centralized data centers by processing data closer to the source or end-users. This reduction in latency is critical for applications that require real-time data processing, such as autonomous vehicles, industrial automation, and immersive virtual reality experiences.

Improved Performance

Because data processing occurs locally, performance and response times improve, enhancing the overall user experience and allowing time-sensitive applications to function smoothly.

Bandwidth Optimization

By processing and filtering data at the edge, edge computing helps optimize bandwidth usage. The central cloud or data center receives only relevant or summarized data, reducing network traffic. As a result, it saves bandwidth and minimizes the costs associated with data transmission.
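As a hypothetical illustration of that filtering idea (the function, threshold, and sample values below are our own example, not from the article), an edge node might summarize raw sensor readings locally and forward only a compact payload to the central cloud:

```python
from statistics import mean

def summarize_at_edge(readings, alarm_threshold):
    """Reduce a raw sample stream to a small summary payload.

    Only the summary (plus any anomalous samples) travels upstream,
    cutting bandwidth compared with shipping every reading to the cloud.
    """
    alarms = [r for r in readings if r > alarm_threshold]
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
        "alarms": alarms,
    }

raw = [21.0, 21.3, 21.1, 35.9, 21.2, 21.0]   # e.g. temperature samples
summary = summarize_at_edge(raw, alarm_threshold=30.0)
print(summary)   # six raw samples reduced to one small dict
```

Here six raw readings collapse into a single summary, so the central site receives only relevant or aggregated data, exactly the bandwidth-saving pattern described above.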

Enhanced Reliability

Edge computing improves reliability by reducing reliance on centralized data centers. This function guarantees the continuity of data processing during connection or network failures. Hence, edge computing is particularly essential for mission-critical applications that cannot afford any downtime.

Scalability and Flexibility

Edge computing makes it possible to scale applications efficiently as demand changes. This means that services and applications can be changed at the network edge without requiring significant infrastructure changes for the company.

These advantages of edge computing make it a compelling solution for various applications and industries. In today’s data-driven, interconnected world, edge computing can open new doors, boost efficiency, and improve user experiences.


Use Cases for Edge Computing

Edge computing has been beneficial to many use cases, such as the following:

  • Autonomous or Self-driving Cars

Edge computing allows real-time data processing from the vehicle’s sensors. As a result, it enables cars to process information quickly, allowing them to avoid obstacles, make decisions, and navigate autonomously. 

  • Healthcare

It allows accurate data collection and processing from medical devices in real time. Additionally, it’s essential for medical devices that must monitor a patient continuously and aren’t reliant on network connectivity. Furthermore, edge computing can improve healthcare services in rural and remote areas by allowing faster access to patient information, diagnosis, and treatment. 

  • Manufacturing and Industrial

Edge computing can also improve efficiency and productivity in factories and industrial settings. It monitors operations, controls equipment and machines, and performs other real-time tasks. It’s also useful for energy efficiency monitoring, predictive maintenance, and more. 

  • Retail

It is also helpful in processing retail sensors and other applications, allowing faster and more accurate inventory management, better customer service, and even loss or fraud detection. 

  • Human Resources (HR)

Edge computing offers numerous advantageous use cases for Human Resources (HR) departments across various industries. One prominent use case is the integration of edge devices and sensors in the workplace to gather real-time data on employee attendance, well-being, and safety.

Edge computing also makes security more robust for organizations, reducing the amount of data transmitted and processed in the cloud. That means sensitive data are less vulnerable to attacks. HR departments from established companies can deploy more secure tools exclusive to the organization for employee queries, performance, and requests. 

  • Universities

Edge computing enhances a university’s capabilities in processing and analyzing large volumes of data significantly faster. For academic researchers and doctoral students, this means more frequent breakthroughs and innovation.

Edge computing also enhances Internet of Things (IoT) capabilities. By installing servers closer to devices, they can perform better. End users will experience reduced latency, while universities will benefit from less bandwidth consumption.  

Edge computing has become a significant part of many businesses and industries by processing data from sensors, cameras, machines, smart devices, etc. 

What Underlying Concept Is Edge Computing Based On

Edge computing is based on the concept of distributed computing. The idea is that instead of a centralized data center or central cloud, it distributes data processing and storage across multiple devices. Edge computing processes data closer to the “edge,” where the users and sources are. Since it’s not reliant on a central cloud for data processing, it reduces the number of “hops” the data must travel. As a result, it saves on bandwidth, makes real-time responses, performs better, and can function independently even with a poor network connection.

What Is Colocation

Colocation is the method of renting a space from a third-party colocation data center facility. It gives you access to the facility’s resources, infrastructure, and services other renters share. Colocation can be a more cost-effective and secure option than building and maintaining your data center. 

What Is a Colocation Data Center

Colocation data centers are huge facilities that house servers and resources many users share. These centers offer physical security, hardware maintenance, storage, servers, and other efficiency resources. Typically, space is rented per rack, room, cabinet, or area unit. Many companies and businesses prefer colocation, particularly if they need space to house the equipment and wish to avoid the hassle of maintaining network servers and infrastructure. 

Advantages of Colocation

Colocation has several advantages that make it ideal for many companies, such as:

Space and Lower Expenditure Costs

Of course, the most appealing colocation assets are space and cost savings. Whether you’re a startup, a small business, or a large corporation, space is valuable. Colocation provides space and security, power systems, cooling, etc., so you can save on overhead expenses.

Scalability and Flexibility

Because you can easily rent more space and add more applications, scaling as your business expands is also convenient.

Skilled Staff and Maintenance

Experts and personnel in data centers can help monitor and maintain hardware, equipment, and other systems to ensure everything runs at peak performance. 

Better Security

Security personnel can ensure that no one comes into contact with any of the company’s sensitive information or data. Furthermore, experts in data centers can also help design applications and network security to help manage risks and other cyber threats. 

Colocation is becoming a more popular option for businesses of all sizes – not just giant organizations. It’s a great choice if you’re looking for a cost-effective, secure, and scalable way to host your data and applications. 

Use Cases for Colocation

Colocation is excellent for small businesses and large corporations requiring space and security for their tech infrastructure. Here are a few use cases that work well with colocation:

  • Financial institutions

Financial institutions, such as banks, that need an extra level of security benefit from colocation. Physical security and expert risk managers help protect clients’ personal information and the company’s assets.

  • E-commerce

Online businesses can thrive with strong connectivity without building additional infrastructure, cutting costs, and saving space. 

  •  Technology Companies

Many tech companies also use colocation to house high-powered hardware and other applications that require reliability and security. 

As the digital world expands and the need for connectivity of resources becomes more valuable, colocation will undoubtedly play a significant part in the future and evolution of data centers. 

Key Differences Between Edge Computing and Colocation

Edge computing and colocation have many key differences. Here are some of them:

Location and Proximity to End-Users
  • Edge computing: located closer to the end-users
  • Colocation: a separate, centralized facility at a distance from the end-user

Infrastructure and Hardware
  • Edge computing: smaller, more distributed data units or devices; hardware is smaller, more efficient, and can be deployed almost anywhere
  • Colocation: large, centralized data centers with more extensive and powerful hardware that can handle big operations shared by multiple users

Scalability and Flexibility
  • Edge computing: scalable, since you can add resources based on business needs; flexible, supporting a wide variety of applications
  • Colocation: also scalable, since you can simply rent more or less space, and flexible because you can customize it on demand; the biggest difference is that colocation data centers can handle massive upgrades

Cost and Maintenance
  • Edge computing: typically more expensive, since it requires specialized hardware and software to process at the “edge”; regular maintenance and updates can also be costly
  • Colocation: can be less expensive, as multiple users share maintenance costs; users pay only for the bandwidth and resources they need

Best For
  • Edge computing: applications that require real-time processing
  • Colocation: applications that require high availability and depend more on data storage than dynamic processing

What Is Edge Colocation

Simply put, edge colocation is edge computing implemented through colocation – a combination of strategically located data centers and high-performance systems. Edge data centers eliminate the need for businesses to construct new facilities for their edge computing needs; instead, a third-party organization offering colocation and edge computing services handles it. And because these data centers sit close to end users, data travels a shorter distance, so performance is better and more efficient.


What Is an Edge Data Center

Edge data centers are smaller “colocation” facilities located closer to the network’s edge. An edge colocation data center is a type of edge data center that provides faster content delivery with minimal latency because it is located close to the population it serves. 

When choosing a data center, location is not the only factor you should consider.


Save Thousands and Generate Millions in Revenue

For data centers, one way to ensure savings and smarter hardware expansion and footprint usage is to use optimization devices. One that allows your data center engineers to use all of your server rack units (RU) is a Zero U Cable Manager.

This server rack cabinet management tool replaces the traditional 1RU or 2RU cable managers that consume unnecessary space. For already established data centers, you can recover up to 30% of your rack units by installing a Horizontal Zero U Cable Management Shelf. That means you free up roughly one whole server rack cabinet for every three optimized cabinets, securing room for more storage, switches, and other devices without paying thousands of dollars.

For edge colocation data centers where floor space management is paramount, Zero U cable managers are no longer a “nice-to-have” upgrade but a necessity. 
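The one-cabinet-in-three claim above follows from simple arithmetic (the 45U cabinet size here is an assumed example, not a figure from the article):

```python
# Hypothetical illustration of the rack-space recovery described above.
# Assumes 45U cabinets where ~30% of the rack units were previously
# consumed by 1U/2U horizontal cable managers.
CABINET_RU = 45
RECOVERY_RATE = 0.30

def recovered_ru(cabinets):
    """Rack units freed by switching to Zero U cable management."""
    return round(cabinets * CABINET_RU * RECOVERY_RATE)

freed = recovered_ru(3)
print(f"{freed} RU freed across 3 cabinets ≈ {freed / CABINET_RU:.1f} cabinets")
# Recovering ~30% of three cabinets frees roughly one whole cabinet.
```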


Who Is Edge Colocation For

Edge colocation can be an exceptional option for companies that need high-performance applications or services for many users in a particular area or region. It can benefit organizations and industries looking to enhance their software and services’ efficiency, security, reliability, and cost-effectiveness.

Use Cases of Edge Colocation

Here are some use cases that benefit from edge colocation:

  • Telecom

As we move to 5G, there is a greater opportunity to place network function virtualization (NFV) nodes further from antennas while keeping base stations near their communities. Instead of building a bigger server in one location, they can cut costs by creating smaller servers and distributing them to different areas. 

  • Bare-metal Services 

Meta’s bare-metal offerings on edge colocation allow applications and services to run on physical servers at the network’s edge at a lower cost, because you can rent space or pay by the hour. Edge colocation can offer high performance, flexibility, and more control.

  • Virtual Machines (VMs) or Containers

Edge colocation’s reduced latency, better connectivity, improved security, rapid scaling, and portability can benefit high-powered VMs and containers. For example, a gaming company could use edge colocation to host its game servers closer to end users. Of course, it’s expected to result in better connectivity and performance.  

Edge colocation is expected to grow rapidly in the coming years due to the increased use of the IoT, 5G, and the demand for greater security. 

Data Center Companies

There are already a growing number of data center companies worldwide. Here are some of the leading names:

  • Digital Realty

Digital Realty is a leading data center and cloud solution provider, with a global footprint connecting 310+ data centers across 25+ countries.

  • Equinix

Equinix is another global leader in data center and colocation services for enterprise networks and cloud computing. It has 248 data centers in 27 countries on five continents. 

  • NTT Communications

NTT Communications is a global provider of cloud, managed data center services, and IT solutions. They have over 200 data centers in 70 markets across the Americas, Europe, and Asia.

These are just some of the many data center companies around. When selecting an edge data center provider, it is critical to consider your specific company’s needs and requirements.

Should Your Organization Use Edge Colocation Services

As the amount of data used and created at the edge grows, colocation at the edge is becoming increasingly important. If you think edge colocation will benefit your company, selecting the right data center is crucial. You also need the right equipment and configuration to maximize efficiency and space in the data center.

Data Center Cabinets

The 5G revolution, edge computing, and the demand for distributed data require data centers to grow in capacity and capability. This simultaneously increases the complexity and difficulty of managing data center infrastructure.

The number of data centers required to process the exponentially increasing amounts of data for streaming, AI, AR, and the Internet of Things (IoT) also puts a greater demand on capital expenditures. Companies must scale up quickly but efficiently, with an eye on both performance and economy. IT executives are given a seemingly impossible task: expand services, improve efficiencies, manage growth, and stay within an already stretched budget.


Modular Data Center Solutions

In addition to precisely prefabricated, modular structures and components, these high-quality Modular Data Centers efficiently utilize natural air and an evaporative cooling system to help maximize productivity from the IT infrastructure. Intelligent power distribution systems self-monitor and regulate all activities within the structure.

Find out more about Modular Data Centers

At AnD Cable Products, we understand these challenges. We offer everything your data center needs, from Zero U Rack Solutions to every type and style of cable you need. We can customize cables for your application and offer various other hardware solutions to help your business succeed and grow. When you are ready to upgrade your cables, make moves and changes, or even deploy a new colocation or edge colocation data center or edge computing center – contact us at (800) 394-3008 or click HERE for a FREE 30-day TRIAL of our Zero U Cable Managers.



Optimize Your Data Center for a Potential Downturn – Doing More With Less


With every recession, companies make valiant attempts to reduce their spending. One of the first things to go is marketing. Then the C-Suite starts to look for other potential savings, including in the area of servers and data management. But the need to process and analyze data, access the internet, and other tasks doesn’t go away. To be competitive, data centers also need to cut costs, and find ways to do more with less. 

Sometimes this involves moves and changes that, while they cost time and even money to implement in the short term, will result in later gains in the long term. Let’s look at some strategies that can help optimize your data center.


Optimize Server Configurations 

One of the best ways to cut costs is to optimize your server configurations. Servers will use less floorspace, giving you room to add new servers in the same number of square feet. 

How do you do this? Well, first you start by replacing your existing cable management racks with Horizontal Zero U Cable Management Racks. This unique design mounts Zero U Cable Managers in the same U space as the active component, replacing conventional 1U and 2U cable managers and recovering rack space that can now be used for active devices.


This alone can result in cost savings of between $4,000 and $9,000 for every four systems installed.

But that’s just the start of how you’ll save money. When you use AnD Cable Products’ slim 4” Vertical Cable Managers (VCM), you can save even more space, as they enable you to gain floor space by moving the racks closer together.

Below is a Whitepaper we’ve written that will take you through this step-by-step.

WHITEPAPER – Optimizing Server Cabinet Rack Space to Maximize Efficiency and Reduce Costs

Optimizing Server Cabinet Rack Space to Maximize Efficiency and Reduce Costs FREE Guide - AnD Cable Products

Smart optimization can help you increase rack space and realize significant equipment cost savings. Read our step-by-step guide that shows you how – and how much you could save.

  • How Much Rack Space You Could Save
  • How to Optimize for Maximum Efficiency
  • Savings for New and Retrofit Installations
  • Overall Cost and Space Savings Post-Optimization

Improve Airflow for Greater HVAC Efficiency

Not only do Zero U Cable Management Racks save space, they can also help you improve server airflow, whatever your HVAC configuration. “Spaghetti mess” wiring is not just ugly and potentially damaging; it also impedes airflow, which leads to higher operating temperatures, reduced efficiency, and potential damage to equipment.

As rack density increases, so do the challenges to HVAC systems. This is why hot and cold aisle containment, assisted by the right rack systems and better cabling solutions, is essential – for hyperscale data centers and edge data centers alike. Whether hot or cold aisle containment is right for you will depend on your situation.

But either way, optimizing your rack space is just the first step. Adding the right containment plan and HVAC solution can also save you a lot of money in the long run. 

Use Remote Monitoring

With modern technology, it is easy to monitor data centers remotely through physical-layer network security, monitoring, and control systems. With this solution, you can not only monitor your systems but often make control changes as well.

This eliminates temperature changes from workers entering and exiting the data center floor, saves time and money spent on on-site personnel, and can facilitate repairs by pinpointing problems and taking the guesswork out of repairs. 

A cloud server means monitoring can happen anywhere, managers and technicians can receive real time alerts, and solutions can be immediately deployed. It’s one of the best ways to do more with less. 

Go Green When Possible

Finally, there are countless ways to go green with your data center. Not only are renewable energy sources available, but there are many ways to conserve energy – several are listed above, like server optimization, hot and cold aisle containment, and remote monitoring. There are also many examples of how Amazon, Facebook, and other tech giants are “greening” their data centers.

With the greater demands of AI, remote work, and increased internet speeds and 5G demands, these steps are more important than ever. Green initiatives are vital to energy and cost savings over time. 

A global recession still looms, and while it may be short-lived, there are always ups and downs in any industry. Preparing for the next downturn is not just about taking advantage of savings now; it is a viable way to plan for a better, more sustainable future. That sustainability impacts not only your business, but the companies you serve and the planet we all live on.

Doing more with less isn’t just a short term solution. It’s a better way of doing business. Make a start by Contacting Us to discuss your needs.

About the Author

Louis Chompff – Founder & Managing Director, AnD Cable Products
Louis established AnD Cable Products – Intelligently Designed Cable Management in 1989. Prior to this he enjoyed a 20+ year career with a leading global telecommunications company in a variety of senior data management positions. Louis is an enthusiastic inventor who designed, patented and brought to market his innovative Zero U cable management racks and Unitag cable labels, both of which have become industry-leading network cable management products. AnD Cable Products only offer products that are intelligently designed, increase efficiency, are durable and reliable, re-usable, easy to use or reduce equipment costs. He is the principal author of the Cable Management Blog, where you can find network cable management ideas, server rack cabling techniques and rack space saving tips, data center trends, latest innovations and more.
Visit https://andcable.com or shop online at https://andcable.com/shop/


Optimizing Ethernet in Data Center Networks


Demand for faster data transfer, and more of it, has exploded over the last decade. Growth was already rapid before the pandemic, but with the work-from-anywhere trend and more people gaming and streaming from home, demand rose even further. 

With it came an explosion in innovation, and a necessary one. Data Center Interconnect (DCI) Ethernet cable speeds increased from 100 Gb applications to 400 Gb and beyond. Server speeds have gone from 10 Gb to 25 Gb and beyond, with 100 Gb speeds already in place in some data centers and on the horizon for many more. 

The result is that data centers are now frequently operating like edge computing networks. Here is how it works. 

Ethernet Data Center Networks - AnD Cable Management Blog
Ethernet cable speeds have increased from 100 Gb applications to 400 Gb and beyond

Optimizing Ethernet in Data Centers

There are four factors in optimizing data center Ethernet use: speed, power, reach, and latency. Speed is already being enhanced and optimized through better, more modern cable designs. But in the other areas, there is still work to be done. 

Power

When it comes to power, many data centers have gone green, with their own renewable energy sources. In most cases, they have access to all the power they need. The key is to use it in the most efficient way possible. With more power comes the issue of design, including hot and cold aisle design choices and more. 

Reach

Data center architecture must take a holistic approach, whether you are starting from scratch with a new facility or making moves and changes to update existing infrastructure. For everything from switches and routers to transceivers and overall physical design, reach must be weighed in terms of efficiency versus cost.

Latency

Finally, latency relates to the end-user experience. For gaming or video conferencing, low latency is the expectation; for internet searches it is less critical, though it can still be an issue for users. As speed increases and fast becomes the norm, latency expectations change with it. 

These four factors are critical to how Ethernet is used in data centers, but they are far from the only considerations. 

Definitive Guide to Understanding Ethernet Patch Cords in Modern Networks - AnD Cable Products Whitepaper
Ethernet cable differences, RJ45 connectors, and T568B vs T568A

Infrastructure Processing Units

How we manage this need for speed is changing on the hardware and software side of things as well. Infrastructure Processing Units (IPUs) run Software Defined Networking (SDN) programs away from the server core. This saves critical server bandwidth, but it comes with an additional load cost. 

As these advances develop, demand for new and better Ethernet cables arises. And as Ethernet cables advance, IPU hardware and software applications evolve as well. Each improves in sync with the other. It's a developing relationship, but one data center managers must take advantage of. 

Edge Computing Centers 

One solution to speed is to move the data center closer to the end user. This has been a developing trend: data centers are increasingly expanding to distributed models where the interconnections between resources drive both power and speed, reducing latency and creating a better overall experience for the end user. 

This comes with challenges. As edge computing rapidly becomes the norm, that latency KPI gets lower and lower. Low latency is key, and specifically, DCI applications are critical to meeting new standards. Ethernet connections are a vital part of this change and growth.

The Need for Speed

What's needed to make all of this work? The first piece is optical transceivers, which allow data centers to reduce power consumption while increasing bit rates. This speeds up leaf-spine connections, a critical component in any data center, and especially in those that are hyperscaling. 

This does not come without challenges, as not all Ethernet cables are created equal, and interoperability can become an issue. 

To help with this, high-speed breakout cables are often used. These cables have one end that supports the aggregate rate, while the other end splits into a series of disaggregated interfaces. With their speed come performance challenges, especially over distance, though there has been rapid development in this area. 
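The aggregate-to-breakout relationship can be pictured as a simple mapping. This is an illustrative sketch: the port labels below are assumptions, though 400G-to-4x100G and 100G-to-4x25G are common breakout configurations.

```python
# Sketch of the aggregate-to-breakout relationship described above. Port
# labels are illustrative; 400G-to-4x100G and 100G-to-4x25G are common
# breakout configurations.

BREAKOUTS = {
    "400G-QSFP-DD": ["100G"] * 4,  # aggregate end fans out to 4 x 100G
    "100G-QSFP28": ["25G"] * 4,    # aggregate end fans out to 4 x 25G
}

def breakout_legs(aggregate_port: str) -> list[str]:
    """Return the disaggregated interface rates for an aggregate port."""
    return BREAKOUTS[aggregate_port]

print(breakout_legs("100G-QSFP28"))  # ['25G', '25G', '25G', '25G']
```

The point of the mapping is that the sum of the disaggregated legs equals the aggregate rate, which is exactly what makes interoperability testing on both ends important.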

The New Normal

As 400 Gb speeds become the norm and data centers are increasingly on the edge, there are many advantages. Distributed networks mean easier disaster recovery and backup planning and create the ability to use shared resources to meet shifting demands. 

However, this creates some challenges with testing and maintaining KPIs. Interoperability remains a key component of successful deployments. 

At AnD Cable Products, we understand these challenges. We offer everything your data center needs, from Zero U rack solutions to every type and style of cable. We can customize cables for your application and offer a variety of other hardware solutions to meet your data center needs. When you are ready to upgrade your cables, make moves and changes, or even deploy a new data center or edge computing center, contact us. We'd love to be your partner in innovation.



Hot and Cold Aisle Containment in Data Centers


Data centers are often laid out in hot and cold aisles, and the hot / cold aisle design is far from new. In the traditional setup, however, warm exhaust air from one aisle flows into the air intake of the next, dragging down the overall efficiency of the data center. Preventing that is what hot and cold aisle containment is all about.

Hot and Cold Aisle Containment in Data Centers - AnD Cable Management Blog
Balancing hot and cold aisles is more important than ever to running an efficient data center

As rack density increases, especially in edge and hyperscale data centers, the need for efficiency increases. This is compounded by the growing number of green data centers, which may be generating their own energy from solar or other renewable resources. 

How does containment work and how does it impact your data center?

Remote Monitoring and Temperature Control

Of course, before we get to containment itself, it’s a good reminder to revisit physical layer monitoring. To know how effective any containment effort is, it’s necessary to monitor temperatures. This is most often done with temperature indicating panels, three per rack at the top, middle, and bottom, so that intake temperatures can be monitored regularly.

Of course, someone entering the area to manually check temperatures is yet another disruption to airflow, so remote monitoring as part of physical network security is essential. This allows managers not only to monitor these temperatures, but also to receive alerts and take action if something goes wrong. 

A150 Remote Physical Layer Network Security Monitoring Elements
The A150 Remote Physical Layer Remote Monitoring system tracks temperature among many other elements that reduce risk and increase efficiency in data centers

But the most important point for this discussion is knowing what the temperatures are, so that the efficiency and effectiveness of containment can be measured.
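A minimal sketch of how those top, middle, and bottom readings might be used: flag any rack whose intake temperatures spread too far apart, a common symptom of hot-air recirculation over or around containment. The rack names and the 3 °C spread limit are illustrative assumptions, not product behavior.

```python
# Sketch: flag racks whose top/middle/bottom intake temperatures diverge,
# a common symptom of hot-air recirculation. Rack names and the 3 C
# spread limit are illustrative assumptions, not product behavior.

rack_intake_temps_c = {
    "rack-01": (24.1, 22.8, 22.5),  # (top, middle, bottom)
    "rack-02": (28.9, 24.0, 22.7),  # hot top sensor suggests recirculation
}

def recirculation_suspects(temps: dict, max_spread_c: float = 3.0) -> list[str]:
    """Return racks whose intake temperature spread exceeds the limit."""
    return [rack for rack, t in temps.items() if max(t) - min(t) > max_spread_c]

print(recirculation_suspects(rack_intake_temps_c))  # ['rack-02']
```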

What is Aisle Containment?

Aisle containment means isolating aisles by relative temperature. In practice, that means placing doors at the end of each aisle, then adding panels, or barriers, from the top of the cabinets upward. 

The more airtight this containment is, the more efficient cooling can be, and the easier it is to manage airflow. It’s pretty simple, but there are a couple of different approaches, each with its own pros and cons.

Hot vs. Cold Aisle Containment

There are two ways to manage aisle containment: hot and cold aisle containment. And they work exactly the way they sound.

  • Hot Aisle Containment: Hot aisles are contained, leaving the rest of the room at a more comfortable cool aisle temperature. It’s also easier to manage in many cases.
  • Cold Aisle Containment: Cold aisles are isolated or contained, which means the rest of the room stays at the warmer hot aisle temperature. This can make getting the right amount of airflow tricky due to pressure changes, but managed properly it can deliver the most uniform temperature air to servers. 

Choosing the right type of aisle containment for your data center depends on your situation, but there are some differences between new data center construction and retrofitting an existing data center.

Retrofitting vs. New Data Center Construction

In the case of a new data center, most of the time hot aisle containment is the method of choice. This is easier to set up in a new data center, as that allows you to start with the type of containment you need, and to set up HVAC systems and sensors to accommodate that. 

This creates an easier environment for technicians to work in when necessary, and is overall the more efficient choice. However, things are different when it comes to existing data centers.

Existing data centers are easier to retrofit with cold aisle containment. While it requires some additional monitoring, the way cooling systems work simply makes this process easier in an operating facility, without creating expensive downtime for moves, changes, and containment installation. 

That doesn't mean no new data center will be built with cold aisle containment. It simply means that hot aisle containment is the more frequent choice. 

Partial Containment Solutions

When it comes to retrofitting, sometimes full aisle containment in either format is not possible. In those cases, partial containment is a solution. How is this achieved?

Often plastic strips can be used, similar to those you would go through walking into an industrial freezer or even certain restaurant kitchens. These can be hung at the end of aisles and from the tops of servers to the ceiling, just like other containment methods.

While not as effective, partial containment is easy to retrofit and implement, and in some cases is about 75% as effective as full containment. For existing data centers looking for a quick, inexpensive efficiency improvement, it is a viable option. 
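To see what that 75% figure can mean in practice, here is a back-of-envelope sketch. The annual cooling load and the full-containment savings fraction are hypothetical inputs, not measured data.

```python
# Back-of-envelope estimate using the ~75% relative effectiveness figure
# from the text. The annual cooling load and the full-containment savings
# fraction are hypothetical inputs, not measured data.

def partial_containment_savings(annual_cooling_kwh: float,
                                full_containment_savings_pct: float,
                                relative_effectiveness: float = 0.75) -> float:
    """Estimate kWh saved by partial containment vs. no containment."""
    return annual_cooling_kwh * full_containment_savings_pct * relative_effectiveness

# Example: 1,000,000 kWh/yr of cooling, full containment would save 25%
print(partial_containment_savings(1_000_000, 0.25))  # 187500.0
```

Even with hypothetical numbers, the shape of the result is the point: three quarters of the full-containment benefit for a fraction of the installation cost and downtime.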

But containment is just a part of rack cooling solutions, and there are some new and exciting ones. 


WHITEPAPER – Optimizing Server Cabinet Rack Space to Maximize Efficiency and Reduce Costs

Optimizing Server Cabinet Rack Space to Maximize Efficiency and Reduce Costs FREE Guide - AnD Cable Products

Smart optimization can help you increase rack space and realize significant equipment cost savings. Read our step-by-step guide that shows you how – and how much you could save.

  • How Much Rack Space You Could Save
  • How to Optimize for Maximum Efficiency
  • Savings for New and Retrofit Installations
  • Overall Cost and Space Savings Post-Optimization

The Addition of Liquid Cooling

Data center cooling has evolved from older, inefficient systems to more contemporary ones in a relatively short period of time. However, one thing that has been around for a while but is experiencing a boom in denser, modern data centers is liquid cooling. 

Why? In most cases, liquid cooling is more efficient than air cooling in data centers, and the best results are generally achieved when the two are used in conjunction. The larger data centers get and the more power they consume, the greater the push toward a blended approach to cooling: one that saves power, is better for the environment, prolongs the life of equipment, and saves space as well. 

But even with the addition of liquid cooling, it’s all about efficient use of rack space and the airflow around them. 

It’s All About Airflow

No matter what kind of aisle containment is used, and no matter how efficient the cooling system, maximizing rack space and the airflow around it, while keeping things organized, is vital.

That's why data centers choose Zero U racks and cable management systems. They not only help avoid the spaghetti mess and all the cable issues that come with it, but also help maximize airflow and save significant rack space in any system.

Whether you are retrofitting a data center or engaged in new construction, we have the rack system that’s right for you. 

Contact AnD Cable Products today for all of your cable, rack, and physical network security needs. We’d love to start a conversation about the right solution for you. 



The Data Link Layer – How DAC and AOC Cables Can Work For You


As the need for data storage and speed increases, so has the need for hyperscale data centers, and for edge data centers as well. While large-scale centers serve companies like Amazon, Microsoft, and Google, other organizations are looking at smaller data centers closer to the end user. In both cases, the data link layer of the data center is critical. Enter Direct Attach Copper (DAC) cables and Active Optical Cables (AOCs).

The Data Link Layer - How DAC and AOC Cables Can Work For You - Cable Management Blog
The data link layer of the data center is critical to ensuring your resources are used to their full potential

What is that data link layer? In practical terms, it's the physical connection between servers that ensures all the computing resources are used to their full potential. The speed and integrity of these connections can make a huge difference. 

It includes Direct Attach Copper (DAC) cables, Active Optical Cables (AOCs), and fiber optic cable assemblies connected to transceivers throughout the data center. How does each one work, and why are they so critical to installation, maintenance, and deployment?

The Need for Speed

There are two aspects to the need for speed: the need for speed in shorter cables between servers, and the need for speed over longer distances. Different kinds of cables work differently in each instance. 

For example, DACs are most often used over short distances, connecting units in the same server rack. They can be active or passive: active cables include signal processing circuitry, while passive cables simply carry the signal. In the case of a DAC, the conductor is copper rather than fiber. 


WHITEPAPER – Understanding Stranded and Solid Conductor Wiring in Modern Networks

Understanding Stranded and Solid Conductor Wiring in Modern Networks - AnD Cable Products Whitepaper

An overview of the differences between stranded and solid conductor wiring, the properties of each and the best cable type to use in a variety of typical settings.

  • Types of Stranded and Solid Conductor Wiring
  • Comparison of Electrical Properties
  • Factors Impacting Attenuation / Insertion Loss
  • Choosing the Right Cable


AOCs usually connect devices within the same row, but they cover longer distances than their copper cousins. However, they do not work in End of Row (EOR) or Middle of Row (MOR) configurations where certain types of patch panels are used. They are usually provided in fixed lengths from a few meters long to more than 100 meters. AOCs are active and include transceivers, control chips, and modules.

Both are fast, with speeds similar to fiber optic cables, but that speed can be compromised by cable damage or, in the case of DACs, by electromagnetic interference. Both must be tested with a tool that can accept dual SFP/QSFP transceivers and generate and analyze traffic.

So how do you test them? Well, there are methods that include automation, but there are other factors to consider. 

Automation Matters

Speed drives us to DACs and AOCs in some cases, but they can become damaged in a variety of ways. Often this doesn't happen during installation, but in the shipping and handling before they even arrive at the data center. Sometimes it happens when they are stored and moved frequently. 

So the first place to test them is before installation. This ensures they are working before they are put into service. Testing all cables at installation can be costly and time-consuming, but not testing early can be costlier later on. 

The solution is rapid, automated testing: run a test pattern and compare the results to a Bit Error Rate (BER) threshold. DAC and AOC cables, including breakouts, usually have a BER rating on their datasheets, especially when they are meant to be used with devices implementing the RS-FEC algorithm.

The tests take only a minute per cable and produce reports that include a cable identifier, such as the serial number, clearly flagging any faulty equipment. 
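The pass/fail comparison described above can be sketched simply: divide observed bit errors by bits transmitted and compare against the datasheet threshold. The 1e-12 threshold below is a common figure but is an assumption here; always use the rating from the cable's own datasheet.

```python
# Sketch of the pass/fail check a cable tester performs: measured BER
# (errors / bits sent) compared against the datasheet threshold. The
# 1e-12 threshold is a common figure but an assumption here; use the
# rating from the cable's own datasheet.

def cable_passes(bit_errors: int, bits_transmitted: int,
                 ber_threshold: float = 1e-12) -> bool:
    """True if the measured bit error rate is within the threshold."""
    return bit_errors / bits_transmitted <= ber_threshold

# One minute of a 25 Gb/s test pattern is 1.5e12 bits
print(cable_passes(0, int(1.5e12)))  # True
print(cable_passes(2, int(1.5e12)))  # False: BER ~1.3e-12 exceeds 1e-12
```

This also shows why the test pattern has to run long enough: at a 1e-12 threshold, a meaningful verdict requires on the order of a trillion bits, which is exactly why line-rate automated testing makes the one-minute-per-cable figure achievable.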

Proper Power Planning

What's the other advantage of DACs and AOCs? Energy savings. Point-to-point high-speed cables draw less power and can save money, especially at scale. While DACs offer more dramatic numbers per cable, AOCs offer savings as well when multiple transceivers are replaced by cables. 

They’re not ideal for every case in every data center, but where they can be used as a key part of deployment, they can provide significant energy savings.
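As a rough sketch of what savings at scale can look like: the per-link wattages below are assumed, illustrative figures (passive DACs draw close to zero; optical transceivers typically a few watts each), not vendor specifications.

```python
# Illustrative arithmetic only: the per-link wattages are assumed, rough
# figures (passive DACs draw close to zero; optical transceivers a few
# watts each), not vendor specifications.

def annual_savings_usd(links: int, transceivers_w_per_link: float,
                       dac_w: float, usd_per_kwh: float = 0.10) -> float:
    """Estimated yearly cost saved by replacing transceivers with DACs."""
    watts_saved = links * (transceivers_w_per_link - dac_w)
    kwh_per_year = watts_saved * 24 * 365 / 1000
    return kwh_per_year * usd_per_kwh

# 1,000 links, two ~3.5 W transceivers per link replaced by a passive DAC
print(round(annual_savings_usd(1000, 7.0, 0.1)))  # 6044
```

A few watts per link looks trivial until it is multiplied by thousands of links running around the clock; that multiplication is the whole argument.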

Living on the Edge Deployment

The other argument for DAC and AOC deployment, and for testing at installation, exists on the edge. More edge deployments force centers to increase speed, security, and efficiency while minimizing latency.

Opting to wait and address connectivity issues during troubleshooting leads to costly mistakes, and to skipping troubleshooting steps in favor of speedy repairs, some of which are not even necessary. Not only is this expensive (cables can range from tens of dollars to thousands), it can also lead to confusing labels and an increased probability of unplugging a live cable.

The fact that DACs and AOCs can be tested so quickly and easily at the time of installation is another great argument for their use in the data link layer. But no matter what cable configuration your data center uses, from point to point high-speed cables to other fiber and optical options, the management of that data link layer is critical to smooth data center operations.

Looking for High Speed Cables?

WD 25G SFP28 SFP+ DAC Cable - 25GBASE-CR, SFP28 to SFP28 Passive Direct Attach Copper, Twinax Cable

Ready to start optimizing your data link layer? Have questions about what cables might be right for you and your application? Whether you are deploying a brand new data center or making moves and changes, we’re here to help. Contact AnD Cable Products today for more information. We’re here to help every step of the way. 



Faster Polymer Plastic Cables? Not So Fast!


Just about a year ago a group from MIT demonstrated a polymer plastic cable the size of a human hair that could transmit data faster than copper – much faster. 

How fast? Well, they recorded speeds of more than 100 gigabits per second! So where is this new technology and where is it headed? Well, here are some answers for you.

Faster Polymer Plastic Cables? Not So Fast - AnD Cable Management Blog
MIT demonstrated a plastic polymer cable the size of a human hair. Photo: MIT, https://news.mit.edu/2021/data-transfer-system-silicon-0224

The Need for Speed

First, perhaps we need to qualify what this speed is, and why computers and data centers need it. 

The first big deal is that these cables act like copper – they can directly connect devices without the need to reformat data. While standard fiber cables are faster, they require a converter to change light signals to electrical signals at each end of the connection. 

Of course, there are a lot of immediate uses for faster cables like these, including in data centers: artificial intelligence applications like self-driving cars, manufacturing, and countless other applications where data delivered as close to real time as possible makes a huge difference. 

But of course, as with all such applications, speed is not the only factor.

Distance

At the moment in a laboratory setting, these cables are only good for short distances, not long ones. That doesn’t mean researchers are not confident in the impact these cables can have. 

Think of a polymer plastic cable that is both durable and lightweight and can transmit terabits of data over a meter or more. Theoretically, that is the possibility, with the idea that such cables could replace USB and even the faster USB-C cables. 

Even at shorter lengths, such cables could be exceptionally useful for transferring data between more than one chip inside a device. The thinner fibers could be used to revolutionize these applications as well, making even smaller and more efficient devices possible. 

We Have the Power

The problem as it currently exists is that transferring data through copper cables consumes more and more power, to the point of diminishing returns, and generates a lot of heat that must be dissipated and can actually damage the cables. 

The fiber optic alternative is not always compatible with silicon chips without the light to electronic transfer mentioned above. The idea behind polymer plastic is to save energy, generate less heat, and still allow for compact connections. 

If this is such a great idea, why is it not on the market yet?

From Laboratory To Market

To transfer such technology from the lab to the market takes a lot of work and requires some potential changes. First, the technology needs to be tested and perfected at a higher level. Since the concept has been established, other labs are now working on it as well, and this could be the fastest part of the process. 

But there is more:

  • New IEEE standards would have to be developed, established, and agreed upon
  • New connectors would potentially need to be created for these cables to interface with chips and other devices
  • Manufacturing of the new cables would need to be established at scale before they could become commonly used
  • A supply chain, whether new or existing, would have to be established to get cables from the plant to the end user

Does this sound like a lot? It is, but it has been done before. The question is, what do those who are building data centers – and would use these cables on a regular basis – think?

The Future is Now

“The need for speed has never been so great,” Bill Lambert, a data center engineer told us. “Ten years ago, no one would even have been talking about devices that would need this kind of speed. We would have told you we would never need that capacity.”

And he's right. Many of the devices we now use every day, and the speeds they achieve, would have been unimaginable before, let alone the amount of data we use. But the more we look at the uses for real-time data, the faster we need to get that information from one place to another. 

“It’s like the work from anywhere revolution,” he told us. “The last two years have totally changed what data transfer and speed look like, inside and outside of data centers. It’s a sure bet that the next few will revolutionize these ideas again.”

In an ever-changing field where speed and data matter more than ever, science has just begun to catch up with what we need. And we’re lucky enough to be a part of it. 

Have a question about updating the infrastructure in your current data center or want to learn more about building the infrastructure in a new one? Contact us here at AnD Cable Products. We have everything from the cable management you need to remote monitoring and more. 

We’re glad to be your partners going forward to tomorrow and beyond. 

Physical Layer Environment Network Security Monitoring and Control

A150 Physical Layer Environment Network Security Monitoring and Control System Brochure

Full visibility, network security and control of your physical layer environment. Monitor your entire hybrid cloud and IT infrastructure from a cloud-based, integrated dashboard:

  • Introducing the A150 System
  • A150 System Architecture – High-Level Overview
  • A150 System Features
  • System Controller Hardware and Specifications
  • Monitoring Controllers, Probes and Sensors



How a Fire-Rated Power Distribution System Reduces Risk


Fires are not especially common in data centers, but they do happen, and when they do, they most often go unreported (at least in the news). Much of the reason for this is that fires are usually small and quickly contained. It is unusual for a data center to become fully engulfed. 

Even when such fires are reported, details can be sketchy, with causes and investigations hidden behind NDAs, making them difficult to learn from. While companies want to retain control over the narrative and how it impacts their reputation, information about fires can and should be shared within the industry to prevent similar events. And there are some things you can do now, such as remote monitoring, to keep your staff and facilities safe. 

Remote monitoring can help your data center keep staff and equipment safe from fire damage

The OVHCloud Incident

On 10 March 2021, near midnight local time, a fire started in the OVHCloud SBG2 data center, quickly got out of control, and damaged two other nearby data centers. The fire started near two UPS units, one of which had been worked on that same day. 

The company is considered a European alternative to the giant US cloud operators and is a key participant in the European Union's GaiaX cloud project. Its data centers serve key functions for the French government, the UK vehicle licensing department, and others. Operations were directly impacted by the fire, although the company did have backup data centers and quickly restored service to most customers. 

But poor design and operational practices that seemed to sacrifice dependability for innovation had already caused issues for OVHCloud, including major outages. The fire punctuated an ongoing problem, but it also caused many data center operators and customers to pause and think about something probably not mentioned often enough: the risk of fire in data centers. 

What are the Fire Risks?

When broken down, there are a few key fire risks common to all data centers, and most of the time they are relatively easy to mitigate.

  • Electrical Equipment – temperature changes can increase this risk, and backup power equipment is itself a source of risk. Generator rooms that contain gasoline or diesel fumes can quickly create intense fires that would be hard to fight.  
  • Cables – data center power cables are usually not enough to start a fire by themselves, but a damaged cable can release sparks or overheat and cause a small fire or thermal incident that can then spread. Proper cable management and monitoring of underfloor and overhead cabling can help prevent these events. 
  • HVAC Infrastructure – heating and cooling units present some fire danger to data centers and should be inspected often and monitored carefully. Their operation is also critical to maintaining optimal temperatures in the data center and preventing other thermal events. 
  • External Fire Sources – the California wildfires, the recent blaze in Boulder, and last year’s Texas fires are all examples of external fire risk to data centers, especially Edge data centers in less populated areas. 

Most of these can be controlled by properly managing the data center, but there are some events that can only be prepared for. Having fire suppression systems and plans in place is critical regardless of the likelihood of the danger. 

Fire Prevention Systems

Of course, the best prescription for dealing with fire is prevention. The key to this in the modern data center environment is a complete remote monitoring system. The A150 Network Monitoring System is designed specifically for data centers, IT rooms, and confidential labs, with virtual graphics showing temperature, rack power consumption, and humidity. 

But most importantly for this topic, the system provides alerts for mission-critical events like the sudden temperature changes associated with fires, smoke alarms, and sprinkler activation. You can also be alerted to power spikes, rising server temperatures, or UPS unit failures, so you can make emergency repairs and mitigate fire risk before a fire starts. 
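The kind of alerting logic described above can be sketched in a few lines. This is purely illustrative – the function name and thresholds below are hypothetical, not the A150’s actual interface – but it shows the two checks that matter most for fire risk: an absolute temperature ceiling and a sudden rise between consecutive readings.

```python
def fire_risk_alerts(samples, hard_limit=45.0, max_rise=2.0):
    """Scan a list of rack temperatures (deg C, sampled at one-minute
    intervals) and return (index, reason) pairs for readings that
    warrant an alert. Thresholds are illustrative, not vendor values."""
    alerts = []
    for i, temp in enumerate(samples):
        if temp >= hard_limit:
            # Absolute ceiling breached - possible active thermal event.
            alerts.append((i, "over-limit"))
        elif i > 0 and temp - samples[i - 1] >= max_rise:
            # Sharp rise between consecutive readings - early warning.
            alerts.append((i, "sudden-rise"))
    return alerts

# A steady room, then a sharp climb, then a breach:
print(fire_risk_alerts([24.0, 24.5, 27.5, 46.0]))
# -> [(2, 'sudden-rise'), (3, 'over-limit')]
```

In a real deployment the rate-of-change check is the more valuable of the two, since it can fire minutes before an absolute limit is reached.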

The reality is that anything you can do to prevent a fire before it happens is preferable to anything you can do to suppress and extinguish an active blaze. But those are still contingencies you need to prepare for. 

Fire Rated Power Distribution Systems

There are two primary principles in any fire safety plan, anywhere – the two P’s: prevent (which we discussed above) and protect. Central to both is the vital role of uninterrupted power. Enter the fire-rated busbar trunking system.

These systems can remain operational for up to two or even three hours, depending on their ratings. They are also encased in a fire-retardant, self-extinguishing resin that protects the power supply itself. The idea is that this gives first responders time to extinguish the fire before it can spread.

How do you choose the right one for your data center? There are established standards that indicate the type of fire a system was tested against, the duration of the test, how it endured water spray (such as that from sprinkler systems), and whether power supply integrity was maintained in a fire situation.

Technically, they look like this: 

  • BS IEC 60331-1: 2019 – Tests for electric cables under fire conditions; circuit integrity
  • BS 8602:2013 – Method for assessment of fire integrity of cast resin busbar trunking systems for the safety-critical power distribution to life safety and firefighting systems
  • BS 6387:2013 (CWZ Protocol) – Test method for resistance to fire of cables required to maintain circuit integrity under fire conditions. Fire-resistant cables are classified by a sequence of symbols (for example, CWZ) in accordance with the fire resistance criteria they meet, the selected test temperature, and the length of the fire resistance test per BS 6387
  • NFPA 75 – Standard for the fire protection of IT equipment
  • ISO 834 – Fire-resistance tests – Elements of building construction
  • ATEX & IECEx – ATEX certification is given to equipment that has gone through rigorous testing outlined by European Union directives and proved safe to use in specific environments with explosive atmospheres, according to the zone/s they are certified to be used in.
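One way to keep these requirements straight during procurement is to treat them as a checklist. The snippet below is a simplified sketch – the standard scopes are paraphrased from the list above, and the helper function is hypothetical – of how you might record which certifications a candidate busbar system holds against those your plan requires.

```python
# Simplified map of the standards above to what each covers (paraphrased).
FIRE_STANDARDS = {
    "BS IEC 60331-1": "circuit integrity of cables under fire conditions",
    "BS 8602": "fire integrity of cast resin busbar trunking systems",
    "BS 6387 (CWZ)": "circuit integrity under fire, water spray, and shock",
    "NFPA 75": "fire protection of IT equipment",
    "ISO 834": "fire resistance of building construction elements",
    "ATEX/IECEx": "equipment safety in explosive atmospheres",
}

def missing_certifications(required, held):
    """Return, sorted, the required standards a candidate is not certified for."""
    return sorted(set(required) - set(held))

print(missing_certifications(
    required=["BS 8602", "BS 6387 (CWZ)", "NFPA 75"],
    held=["BS 8602", "NFPA 75"],
))
# -> ['BS 6387 (CWZ)']
```

The point is less the code than the discipline: write down the required ratings before talking to vendors, so gaps surface as a list rather than a surprise.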

The most important part of this discussion is the planning stage. It’s vital to have a disaster plan in place that addresses both prevention – keeping a fire from happening in the first place – and protection, to shield the data center and minimize a fire’s impact. 

The more we learn from data center fires, the more likely we are to be able to prevent them going forward, and mitigate the damage in the rare event they do occur. 

Need some advice on cable management, remote monitoring, or other aspects of data center planning? Contact us – we’d love to start a conversation about how we can help you with your data center management plan. 

Physical Layer Environment Network Security Monitoring and Control

A150 Physical Layer Environment Network Security Monitoring and Control System Brochure

Full visibility, network security and control of your physical layer environment. Monitor your entire hybrid cloud and IT infrastructure from a cloud-based, integrated dashboard:

  • Introducing the A150 System
  • A150 System Architecture – High-Level Overview
  • A150 System Features
  • System Controller Hardware and Specifications
  • Monitoring Controllers, Probes and Sensors

About the Author

Louis Chompff – Founder & Managing Director, AnD Cable Products
Louis established AnD Cable Products – Intelligently Designed Cable Management in 1989. Prior to this he enjoyed a 20+ year career with a leading global telecommunications company in a variety of senior data management positions. Louis is an enthusiastic inventor who designed, patented and brought to market his innovative Zero U cable management racks and Unitag cable labels, both of which have become industry-leading network cable management products. AnD Cable Products only offer products that are intelligently designed, increase efficiency, are durable and reliable, re-usable, easy to use or reduce equipment costs. He is the principal author of the Cable Management Blog, where you can find network cable management ideas, server rack cabling techniques and rack space saving tips, data center trends, latest innovations and more.
Visit https://andcable.com or shop online at https://andcable.com/shop/

Posted on 1 Comment

Extreme Ultraviolet (EUV) Lithography – Keeping Moore’s Law Alive


In 1975, looking ahead to the next decade, Intel co-founder Gordon Moore revised his earlier forecast that the number of transistors in an integrated circuit would double every year, predicting instead that it would double every two years. Moore was not a prophet, nor issuing the result of some brilliant data analysis, but as his prediction held true, it later became known as a law. 
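The arithmetic behind the law is simple compound doubling. As a quick illustration (the function and starting figures here are generic, not Moore’s own numbers):

```python
def transistors_after(years, start_count, doubling_period=2.0):
    """Project transistor count under Moore's law: the count
    doubles once every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# Doubling every two years turns 10,000 transistors into 320,000 in a decade:
print(int(transistors_after(10, 10_000)))  # -> 320000
```

Five doublings in ten years is a 32x increase, which is why even a modest-sounding two-year period compounds so dramatically over decades.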

The law has become more of a guide, influencing the research and development policies of the largest companies and chip manufacturers in the world. And it, together with a new machine helping to keep Moore’s law alive, is what your iPhone and those Boston Dynamics robots with the best dance moves have in common.


Let There Be Light

First, we must understand lithography, the printing technique on which circuit fabrication is based. Technically defined, lithography is printing on a plane surface treated to repel the material being printed except where it is intended (or, in the case of circuits, needed) to stick. 

The use of light for this treating and etching process is common, but one machine – built by ASML, a Dutch company that has cornered the market for etching the tiniest nanoscopic features into microchips with light – is playing a huge role in keeping Moore’s law viable. 

ASML introduced the first extreme ultraviolet (EUV) lithography machines for mass production in 2017, after decades spent mastering the technique, and the machine needed for the process is, to put it mildly, massive and mind-blowing. It’s expensive too, with a sticker price of around $150 million. TSMC, Samsung, and Intel are initial customers. 

Amazon Prime won’t be enough to get the massive machine delivered – unless you have 40 freight containers, three cargo planes, and 20 trucks on standby. So what’s the big deal with this machine, and why does it (and its future children) matter?

How it Works

Think of a machine the size of a bus, with two kilometers of cabling and over 100,000 parts. Inside, a series of nano-mirrors – polished so smooth that their imperfections are measured in atoms – projects extremely focused ultraviolet light onto future chips, etching features just a few nanometers wide. That’s right, nanometers: billionths of a meter. 

This means chips with components smaller (and more durable in many ways) than they have ever been. Smaller chips that are just as powerful, nano-sensors that are just as sensitive or accurate in a fraction of the space they take up now, and more will enable chips to get tinier, lighter, and more powerful than ever before. 

The Moore’s Law Limit

How small can chips get? Some think that Moore’s law is reaching the point where it is no longer viable, for three key reasons:

  • Electrical leakage – As transistors get smaller, they at first become more efficient, but at nano-scale a transistor often can’t fully contain the electricity flowing through it. Current leaks, which means heat, and heat means potential damage to the transistor and perhaps even the entire chip. Therefore, we can only decrease the size of a chip as we increase cooling power.
  • Heat – The electrical leakage and resulting heat mean that one of two things must be limited: the voltage or the number of transistors in a given chip, thus limiting the power. Extreme Ultraviolet Lithography may offer some help in this area, but that remains to be seen.
  • Economics – The price of this machine is just one factor. As chips get hotter and need more cooling, the cost of keeping a data center at a viable temperature goes up, and that cost must be passed on to someone – generally the consumer. Businesses also want to extend the life of their equipment, ensuring it lasts as long as possible. Faster equipment with a shorter lifespan may not be as appealing to the average buyer or data center manager.
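The voltage–heat relationship behind the first two points comes from the standard dynamic power equation for CMOS logic, P = α·C·V²·f: switching power grows with the square of supply voltage, which is why even small voltage reductions matter so much for heat. A rough sketch, with arbitrary illustrative component values:

```python
def dynamic_power(capacitance_f, voltage_v, frequency_hz, activity=1.0):
    """Dynamic switching power of CMOS logic: P = activity * C * V^2 * f.
    Units: farads, volts, hertz -> watts."""
    return activity * capacitance_f * voltage_v ** 2 * frequency_hz

# Halving supply voltage cuts switching power to roughly a quarter:
p_full = dynamic_power(1e-9, 1.0, 1e9)   # ~1.0 W
p_half = dynamic_power(1e-9, 0.5, 1e9)   # ~0.25 W
print(p_full, p_half)
```

The squared voltage term is also why leakage is so punishing: once transistors are too small to tolerate further voltage reduction, the easiest lever for cutting heat disappears.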

What does all this mean when we break it down?

Well, the data center of tomorrow may be a fraction of the size of those we have today. Or it may be equally as large, but able to store and deliver data at rates we can’t even imagine. Equipment, servers, remote sensors, everything may keep shrinking, to a point. But there will be a point when Moore’s law will no longer be valid or achievable, and that day may come sooner rather than later.

Are you running the data center of today, but looking forward to the data center of tomorrow? Are you interested in the latest remote monitoring and cabling solutions? Contact us at AnD Cable Products. We’d love to talk about what tomorrow looks like, and how we can help you head in the right direction today. 

WHITEPAPER – Optimizing Server Cabinet Rack Space to Maximize Efficiency and Reduce Costs

Optimizing Server Cabinet Rack Space to Maximize Efficiency and Reduce Costs FREE Guide - AnD Cable Products

Smart optimization can help you increase rack space and realize significant equipment cost savings. Read our step-by-step guide that shows you how – and how much you could save.

  • How Much Rack Space You Could Save
  • How to Optimize for Maximum Efficiency
  • Savings for New and Retrofit Installations
  • Overall Cost and Space Savings Post-Optimization


Posted on 1 Comment

Edge Data Centers – Space and the Final Frontier


Computing on the edge: it seems that everyone is doing it, from big industry to manufacturers, from ISPs to cloud computing centers. The closer you can locate computing and analytics power to machines connected via the IoT and other data sources, the faster you can gather data, and the more data you can store and analyze in a timely manner. For some, edge data centers seem like the final frontier for data.


This has resulted in data centers that vary in size, from a very large cabinet up to a small shipping container. But like any journey to the edge, there are challenges and risks. There are two primary ones we will address here:

  • Temperature – Because of the small spaces edge data centers often occupy, airflow and temperature control can be tricky.
  • Space – The smaller footprint means that saving space is critical; on the flip side, space savings can also enable more airflow and indirect cooling in a confined area.

In this way, the two primary challenges are related, and often a solution that mitigates one will also help mitigate the other. Let’s take a quick look at each of these.

Controlling the Environment on the Edge

The temperatures that edge data centers operate at are critical. And there is a huge difference between the cooling we need for a building designed to keep people comfortable, and a building designed to serve machines. Think of it this way: if someone opens the door to your office, you may feel a blast of warm or cold air, depending on the time of year. Your discomfort disappears quickly when the door closes, as the HVAC system takes over, and brings air back into the broad temperature tolerances humans can endure.

However, what happens when you go to an edge data center and open the door? The answer is, it depends on where it is. Large, brick and mortar data centers can be located in areas with minimal environmental challenges and low risk of natural disasters. But edge data centers must be located, well, where they are needed. That means in dusty and dirty environments, areas with extreme temperature fluctuations, and more.

There are really only two choices:

  • Develop and deploy equipment designed to withstand extremes, at a higher price point. A good example is cellular equipment like that developed by AT&T. However, the cost of this equipment is too high for standard edge data center deployment at scale.
  • Work with existing, readily available equipment and use unique strategies to combat environmental changes at a small scale, including using tents or shrouds for entry and exit, using handheld temperature and humidity monitors to evaluate current conditions, and developing strategic plans for unexpected events.

Another part of the solution is to use remote monitoring, AI and the IoT in edge data centers to mitigate the need for human intervention. Monitoring the health of equipment and preventing disaster in the first place is one of the keys to efficient management of edge data centers.
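In practice, the first line of automated defence is a simple environmental envelope check: is the enclosure within the temperature and humidity range the equipment is rated for? The function and bounds below are illustrative placeholders, not vendor specifications:

```python
def within_envelope(temp_c, humidity_pct,
                    temp_range=(10.0, 35.0), humidity_range=(20.0, 80.0)):
    """Return True if an edge enclosure reading sits inside the operating
    envelope. Default ranges are illustrative, not equipment ratings."""
    lo_t, hi_t = temp_range
    lo_h, hi_h = humidity_range
    return lo_t <= temp_c <= hi_t and lo_h <= humidity_pct <= hi_h

print(within_envelope(25.0, 50.0))  # -> True
print(within_envelope(42.0, 50.0))  # -> False (overheating)
```

A remote monitoring system runs a check like this on every sample and escalates only when readings drift out of bounds, which is exactly what removes the need for a human on site.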

This is but one of the challenges data center managers face. The second is the efficient use of available space.

Saving Space

While cooling and environmental control are critical, so is the efficient use of space. This can result in increased airflow and easier HVAC solutions while also enabling more servers to be installed in the same amount of space.

This involves a few key steps:

  • Rack Selection – Whether a data center uses 23” or 19” racks, there are rack solutions that take up less space, and are also able to use better rack management options.
  • Cable Management – ZeroU horizontal cable managers make more room for servers in a single rack, and they prevent the “spaghetti mess” that can happen in server racks and be especially problematic in the more compact edge data centers. 
  • Compact Vertical Cable Management – 11U cable managers also save space and keep cables organized and easy to access should moves, changes, or repairs be needed.

Anything that can be done to save space in an edge data center makes the environmental-control challenges easier to face, but it also has an economic impact: the less space you need for the computing power you require, the more compact your data center can be. Alternatively, the saved space gives you room to scale as needed without creating yet another data center.

At the edge, there are always challenges, but there are also solutions. From controlling the environment in and around the data center to using the space in the most efficient way possible, with the right equipment, these obstacles can be transformed into opportunities to change not only how much data is collected and how quickly it can be acted upon, but where it happens as well.

Do you have questions about saving space in your edge data center? Are you looking for remote monitoring solutions? Then contact us here at AnD Cable. We’d love to start a conversation about how we can help you.
