
Edge Computing – A Contrast to Colocation


Edge computing is an innovative strategy that moves data storage and processing closer to users and data sources. In contrast, colocation uses a third party’s centralized facility, where multiple clients share space, infrastructure, and resources. Although it may appear that location is what differentiates the two, there are many other distinctions that make them suitable for different uses and needs.

In 2025, the world’s data creation is forecast to hit a new record high of over 180 zettabytes. This growth will inevitably increase the demand for low-latency, high-bandwidth applications, and it has paved the way for new and improved data processing paradigms like edge colocation. As the name implies, edge colocation combines the best of edge computing and colocation to address the drawbacks of both and provide a better, more convenient solution to customer needs.


What Is Edge Computing

Edge computing is a method that places data processing and storage at the network’s “edge,” closer to both data sources and users. After years of relying on huge rooms of centralized equipment, organizations can now decentralize processing across multiple edge nodes or devices to create local networks and servers. Edge computing has since become a vital piece of modern technology, providing numerous functions and solutions to a wide range of users.

Because it reduces the distance between data sources and users, edge computing has a faster response time, less bandwidth consumption, better security, and many other benefits. Since it processes data on-site, it’s highly reliable, provides real-time data, works efficiently on on-demand applications, and more.

Read Specific Use Cases for Edge Computing

Advantages of Edge Computing

Edge computing offers several benefits that make it an attractive and valuable approach in today’s digital landscape:

Reduced Latency

Edge computing reduces data transmission latency to centralized data centers by processing data closer to the source or end-users. This reduction in latency is critical for applications that require real-time data processing, such as autonomous vehicles, industrial automation, and immersive virtual reality experiences.
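As a rough, illustrative sketch of why that proximity matters (the distances and the roughly 200,000 km/s signal speed in fiber are assumptions for illustration, not figures from this article), propagation delay alone scales with distance:

\[
t_{\text{round trip}} \approx \frac{2d}{v}, \qquad v \approx 2 \times 10^{5}\ \text{km/s}
\]
\[
d = 1{,}500\ \text{km (distant cloud region)} \;\Rightarrow\; t \approx 15\ \text{ms}, \qquad d = 50\ \text{km (edge site)} \;\Rightarrow\; t \approx 0.5\ \text{ms}
\]

Real-world latency adds routing, queuing, and processing time on top of this, so actual gains vary, but the distance term alone can be the difference between usable and unusable for real-time workloads.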

Improved Performance

Because data processing occurs locally, performance and response times improve, enhancing the overall user experience and allowing time-sensitive applications to function smoothly.

Bandwidth Optimization

By processing and filtering data at the edge, edge computing helps optimize bandwidth usage. The central cloud or data center receives only relevant or summarized data, reducing network traffic. As a result, it saves bandwidth and minimizes the costs associated with data transmission.
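As a minimal sketch of this idea in Python (the sensor values, the 60-reading window, and the threshold are hypothetical, not taken from any particular edge platform), an edge node might summarize raw readings locally and forward only the aggregate plus any anomalies:

```python
from statistics import mean

ANOMALY_THRESHOLD_C = 75.0  # hypothetical alert threshold for a temperature sensor


def summarize_window(readings_c):
    """Reduce one window of raw temperature readings to a compact summary.

    Only this summary (and any anomalous readings) is sent to the central
    cloud, instead of every raw data point.
    """
    anomalies = [r for r in readings_c if r > ANOMALY_THRESHOLD_C]
    return {
        "count": len(readings_c),
        "avg_c": round(mean(readings_c), 2),
        "max_c": max(readings_c),
        "anomalies": anomalies,
    }


# Example: 60 one-second readings collapse into one small upstream payload
raw = [68.0 + (i % 5) * 0.5 for i in range(60)]
print(summarize_window(raw))  # {'count': 60, 'avg_c': 69.0, 'max_c': 70.0, 'anomalies': []}
```

Sending the summary instead of all 60 raw points is what cuts traffic back to the central site; only unusual readings need to travel the full distance.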

Enhanced Reliability

Edge computing improves reliability by reducing reliance on centralized data centers. Local processing helps ensure the continuity of data processing during connection or network failures. Hence, edge computing is particularly essential for mission-critical applications that cannot afford any downtime.

Scalability and Flexibility

Edge computing makes it possible to scale applications efficiently as demand changes. Services and applications can be deployed or updated at the network edge without requiring significant infrastructure changes from the company.

These advantages of edge computing make it a compelling solution for various applications and industries. In today’s data-driven, interconnected world, edge computing can open new doors, boost efficiency, and improve user experiences.


Use Cases for Edge Computing

Edge computing has been beneficial to many use cases, such as the following:

  • Autonomous or Self-driving Cars

Edge computing allows real-time data processing from the vehicle’s sensors. As a result, it enables cars to process information quickly, allowing them to avoid obstacles, make decisions, and navigate autonomously. 

  • Healthcare

It allows accurate data collection and processing from medical devices in real time. Additionally, it’s essential for medical devices that must monitor a patient continuously and aren’t reliant on network connectivity. Furthermore, edge computing can improve healthcare services in rural and remote areas by allowing faster access to patient information, diagnosis, and treatment. 

  • Manufacturing and Industrial

Edge computing can also improve efficiency and productivity in factories and industrial settings. It monitors operations, controls equipment and machines, and performs other real-time tasks. It’s also useful for energy efficiency monitoring, predictive maintenance, and more. 

  • Retail

It is also helpful in processing data from retail sensors and other in-store applications, allowing faster and more accurate inventory management, better customer service, and even loss or fraud detection.

  • Human Resources (HR)

Edge computing offers numerous advantageous use cases for Human Resources (HR) departments across various industries. One prominent use case is the integration of edge devices and sensors in the workplace to gather real-time data on employee attendance, well-being, and safety.

Edge computing also makes security more robust for organizations, reducing the amount of data transmitted and processed in the cloud. That means sensitive data are less vulnerable to attacks. HR departments from established companies can deploy more secure tools exclusive to the organization for employee queries, performance, and requests. 

  • Universities

Edge computing enables universities to process and analyze large volumes of data significantly faster. For academic researchers and doctoral students, this means more frequent breakthroughs and innovation.

Edge computing also enhances Internet of Things (IoT) capabilities. By installing servers closer to devices, they can perform better. End users will experience reduced latency, while universities will benefit from less bandwidth consumption.  

Edge computing has become a significant part of many businesses and industries by processing data from sensors, cameras, machines, smart devices, etc. 

What Underlying Concept Is Edge Computing Based On

Edge computing is based on the concept of distributed computing. The idea is that instead of relying on a centralized data center or central cloud, it distributes data processing and storage across multiple devices. Edge computing processes data closer to the “edge,” where the users and sources are. Since it’s not reliant on a central cloud for data processing, it reduces the number of “hops” the data must travel. As a result, it saves bandwidth, enables real-time responses, performs better, and can function independently even with a poor network connection.

What Is Colocation

Colocation is the practice of renting space in a third-party colocation data center facility. It gives you access to the facility’s resources, infrastructure, and services, which are shared with other tenants. Colocation can be a more cost-effective and secure option than building and maintaining your own data center.

What Is a Colocation Data Center

Colocation data centers are huge facilities that house servers and resources shared by many users. These centers offer physical security, hardware maintenance, storage, servers, and other resources that improve efficiency. Typically, space is rented per rack, cabinet, room, or area unit. Many companies and businesses prefer colocation, particularly if they need space to house their equipment and wish to avoid the hassle of maintaining network servers and infrastructure.

Advantages of Colocation

Colocation has several advantages that make it ideal for many companies, such as:

Space and Lower Expenditure Costs

Of course, the most appealing benefits of colocation are space and cost savings. Whether you’re a startup, a small business, or a large corporation, space is valuable. Colocation provides space along with security, power systems, cooling, and more, so you can save on overhead expenses.

Scalability and Flexibility

Because you can easily rent more space and add more applications, scaling as your business expands is also convenient.

Skilled Staff and Maintenance

Experts and personnel in data centers can help monitor and maintain hardware, equipment, and other systems to ensure everything runs at peak performance. 

Better Security

Security personnel can ensure that no one comes into contact with any of the company’s sensitive information or data. Furthermore, experts in data centers can also help design applications and network security to help manage risks and other cyber threats. 

Colocation is becoming a more popular option for businesses of all sizes – not just giant organizations. It’s a great choice if you’re looking for a cost-effective, secure, and scalable way to host your data and applications. 

Use Cases for Colocation

Colocation is excellent for small businesses and large corporations requiring space and security for their tech infrastructure. Here are a few use cases that work well with colocation:

  • Financial institutions

Financial institutions, such as banks, that need an extra level of security benefit from colocation. Physical security and expert risk managers help protect clients’ personal information and the company’s assets.

  • E-commerce

Online businesses can gain strong connectivity without building additional infrastructure, cutting costs and saving space.

  • Technology Companies

Many tech companies also use colocation to house high-powered hardware and other applications that require reliability and security. 

As the digital world expands and the need for connectivity of resources becomes more valuable, colocation will undoubtedly play a significant part in the future and evolution of data centers. 

Key Differences Between Edge Computing and Colocation

Edge computing and colocation have many key differences. Here are some of them:

Location and Proximity to End-Users
  • Edge computing: Closer to the end users
  • Colocation: A separate, often distant facility away from the end user

Infrastructure and Hardware
  • Edge computing: Smaller, more distributed data units or devices; hardware is smaller, more efficient, and can be moved almost anywhere
  • Colocation: Large, centralized data centers with more extensive and powerful hardware that can handle big operations shared by multiple users

Scalability and Flexibility
  • Edge computing: Scalable, since you can add resources based on business needs, and flexible because it can support a wide variety of applications
  • Colocation: Also scalable, since you can simply rent more or less space, and also flexible because you can customize it on demand; the biggest difference is that colocation data centers can handle massive upgrades

Cost and Maintenance
  • Edge computing: Typically more expensive, since processing at the “edge” requires specialized hardware and software; regular maintenance and updates can also be costly
  • Colocation: Can be less expensive, as multiple users share maintenance costs and pay only for the bandwidth and resources they need

Best For
  • Edge computing: Applications that require real-time processing
  • Colocation: Applications that require high availability and depend more on data storage than on dynamic processing

What Is Edge Colocation

Simply put, edge colocation is edge computing implemented through colocation. It’s a combination of strategically located data centers and high-performance systems. Edge colocation data centers eliminate the need for businesses to construct new facilities for their edge computing needs; instead, a third-party organization that offers colocation and edge computing services handles it for them. Additionally, because these data centers are located close to the end user, data travels a shorter distance, so performance is better and more efficient.


What Is an Edge Data Center

Edge data centers are smaller “colocation” facilities located closer to the network’s edge. An edge colocation data center is a type of edge data center that provides faster content delivery with minimal latency because it is located close to the population it serves. 

When choosing a data center, there are several factors you should consider aside from location.


Save Thousands and Generate Millions in Revenue

For data centers, one way to ensure savings and smarter hardware expansion and footprint usage is to use optimization devices. One tool that allows your data center engineers to use all of your server rack units (RU) is a Zero U Cable Manager.

This server rack cabinet management tool allows you to replace the traditional 1RU or 2RU cable managers that use up unnecessary space. For already established data centers, you can recover up to 30% of your rack units by installing a Horizontal Zero U Cable Management Shelf. That means you can free up roughly one whole server rack cabinet for every three optimized cabinets, giving you room for more storage, switches, and other devices without spending thousands of dollars.
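To see how the “one free cabinet for every three optimized cabinets” figure works out, here is the arithmetic, assuming standard 42U cabinets (the 42U size is an assumption for illustration; your cabinets may differ):

\[
3 \times 42\,\text{U} = 126\,\text{U}, \qquad 30\% \times 126\,\text{U} \approx 38\,\text{U}
\]

Recovering roughly 38U across three cabinets is close to the usable capacity of a fourth 42U cabinet, which is where the savings estimate comes from.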

For edge colocation data centers where floor space management is paramount, Zero U cable managers are no longer a “nice-to-have” upgrade but a necessity. 

Side-by-side comparison of 1U and Zero U cable manager

Who Is Edge Colocation For

Edge colocation can be an exceptional option for companies that need high-performance applications or services for many users in a particular area or region. It can benefit organizations and industries looking to enhance their software and services’ efficiency, security, reliability, and cost-effectiveness.

Use Cases of Edge Colocation

Here are some use cases that benefit from edge colocation:

  • Telecom

As we move to 5G, there is a greater opportunity to place network function virtualization (NFV) nodes further from antennas while keeping base stations near their communities. Instead of building one bigger server facility in a single location, telecom operators can cut costs by deploying smaller servers distributed across different areas.

  • Bare-metal Services 

Bare-metal offerings at edge colocation facilities allow applications and services to run on physical servers at the network’s edge at a lower cost, because you can rent space or pay by the hour. Edge colocation can offer high performance, flexibility, and more control.

  • Virtual Machines (VMs) or Containers

Edge colocation’s reduced latency, better connectivity, improved security, rapid scaling, and portability can benefit high-powered VMs and containers. For example, a gaming company could use edge colocation to host its game servers closer to end users. Of course, it’s expected to result in better connectivity and performance.  

Edge colocation is expected to grow rapidly in the coming years due to the increased use of the IoT, 5G, and the demand for greater security. 

Data Center Companies

There are already a growing number of data center companies worldwide. Here are some of the leading names:

  • Digital Realty

Digital Realty is a leading data center and cloud solution provider with a global footprint that connects more than 310 data centers across 25+ countries.

  • Equinix

Equinix is another global leader in data center and colocation services for enterprise networks and cloud computing. It has 248 data centers in 27 countries on five continents. 

  • NTT Communications

NTT Communications is a global provider of cloud, managed data center services, and IT solutions. They have over 200 data centers in 70 markets across the Americas, Europe, and Asia.

These are just some of the many data center companies around. When selecting an edge data center provider, it is critical to consider your specific company’s needs and requirements.

Should Your Organization Use Edge Colocation Services

As the amount of data used and created at the edge grows, colocation at the edge is becoming increasingly important. Selecting the right data center is crucial if you think edge colocation will benefit your company. You also need the right equipment and configuration to maximize efficiency and space in the data center.

Data Center Cabinets

The 5G revolution, edge computing, and the demand for distributed data require data centers to grow in both capacity and capability. This simultaneously increases the complexity and difficulty of managing data center infrastructure.

The number of data centers required to process the exponentially increasing amounts of data for streaming, AI, AR, and the Internet of Things (IoT) also puts greater demand on capital expenditures. Companies must scale up quickly but efficiently, with an eye on both performance and economy. IT executives are given a seemingly impossible task: expand services, improve efficiencies, manage the growth, and stay within an already stretched budget.

All-in-One IT Cabinet by Rakworx

Modular Data Center Solutions

In addition to precisely prefabricated, modular structures and components, these high-quality Modular Data Centers efficiently utilize natural air and an evaporative cooling system to help maximize productivity from the IT infrastructure. Intelligent power distribution systems help self-monitor and regulate all activities within the structure.

Find out more about Modular Data Centers

At AnD Cable Products, we understand these challenges. We offer everything your data center needs, from Zero U Rack Solutions to every type and style of cable you need. We can customize cables for your application and offer various other hardware solutions to help your business succeed and grow. When you are ready to upgrade your cables, make moves and changes, or even deploy a new colocation or edge colocation data center or edge computing center – contact us at (800) 394-3008 or click HERE for a FREE 30-day TRIAL of our Zero U Cable Managers.

About the Author

Louis Chompff – Founder & Managing Director, AnD Cable Products
Louis established AnD Cable Products – Intelligently Designed Cable Management in 1989. Prior to this he enjoyed a 20+ year career with a leading global telecommunications company in a variety of senior data management positions. Louis is an enthusiastic inventor who designed, patented and brought to market his innovative Zero U cable management racks and Unitag cable labels, both of which have become industry-leading network cable management products. AnD Cable Products only offer products that are intelligently designed, increase efficiency, are durable and reliable, re-usable, easy to use or reduce equipment costs. He is the principal author of the Cable Management Blog, where you can find network cable management ideas, server rack cabling techniques and rack space saving tips, data center trends, latest innovations and more.
Visit https://andcable.com or shop online at https://andcable.com/shop/


Faster Polymer Plastic Cables? Not So Fast!


Just about a year ago a group from MIT demonstrated a polymer plastic cable the size of a human hair that could transmit data faster than copper – much faster. 

How fast? Well, they recorded speeds of more than 100 gigabits per second! So where is this new technology and where is it headed? Well, here are some answers for you.
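To put 100 gigabits per second in perspective, here is a back-of-the-envelope transfer time for a 1 terabyte dataset, ignoring protocol overhead (the 1 TB figure is simply an illustrative assumption):

\[
\frac{1\ \text{TB} \times 8\ \text{bits/byte}}{100\ \text{Gb/s}} = \frac{8{,}000\ \text{Gb}}{100\ \text{Gb/s}} = 80\ \text{seconds}
\]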

MIT demonstrated a plastic polymer cable the size of a human hair. Photo: MIT, https://news.mit.edu/2021/data-transfer-system-silicon-0224

The Need for Speed

First, perhaps we need to qualify what this speed is, and why computers and data centers need it. 

The first big deal is that these cables act like copper – they can directly connect devices without the need to reformat data. While standard fiber cables are faster, they require a converter to change light signals to electrical signals at each end of the connection. 

Of course, there are a lot of immediate uses for faster cables like these, including in data centers: artificial intelligence applications like self-driving cars, manufacturing, and countless other applications where data delivered as close to “real time” as possible makes a huge difference.

But of course, as with all such applications, speed is not the only factor.

Distance

At the moment in a laboratory setting, these cables are only good for short distances, not long ones. That doesn’t mean researchers are not confident in the impact these cables can have. 

Think of a polymer plastic cable that is both durable and lightweight and can transmit terabits of data over a meter or more. Theoretically, this is possible, and the idea is that such cables could replace USB and even the faster USB-C cables.

Even at shorter lengths, such cables could be exceptionally useful for transferring data between more than one chip inside a device. The thinner fibers could be used to revolutionize these applications as well, making even smaller and more efficient devices possible. 

We Have the Power

The problem as it currently exists is that transferring data through copper cables consumes more and more power, to the point of diminishing returns, and such transfer generates heat – a lot of heat that must be dissipated and can actually cause damage to cables. 

The fiber optic alternative is not always compatible with silicon chips without the light to electronic transfer mentioned above. The idea behind polymer plastic is to save energy, generate less heat, and still allow for compact connections. 

If this is such a great idea, why is it not on the market yet?

From Laboratory To Market

To transfer such technology from the lab to the market takes a lot of work and requires some potential changes. First, the technology needs to be tested and perfected at a higher level. Since the concept has been established, other labs are now working on it as well, and this could be the fastest part of the process. 

But there is more:

  • New IEEE standards would have to be developed, established, and agreed upon
  • Potentially, new connectors would need to be created for these cables to interface with other chips and other devices
  • The manufacture of new cables needs to be established at scale before they can become commonly used in any application.
  • A supply chain or the use of existing ones must be established to get cables from the plant to the end-user.

Does this sound like a lot? It is, but it has been done before. The question is, what do those who are building data centers – and would use these cables on a regular basis – think?

The Future is Now

“The need for speed has never been so great,” Bill Lambert, a data center engineer told us. “Ten years ago, no one would even have been talking about devices that would need this kind of speed. We would have told you we would never need that capacity.”

And he’s right. Many of the devices we now use every day, and the speeds they run at, would have been unimaginable before, let alone the amount of data we use. But the more we look at the uses for real-time data, the faster we need to get that information from one place to another.

“It’s like the work from anywhere revolution,” he told us. “The last two years have totally changed what data transfer and speed look like, inside and outside of data centers. It’s a sure bet that the next few will revolutionize these ideas again.”

In an ever-changing field where speed and data matter more than ever, science has just begun to catch up with what we need. And we’re lucky enough to be a part of it. 

Have a question about updating the infrastructure in your current data center or want to learn more about building the infrastructure in a new one? Contact us here at AnD Cable Products. We have everything from the cable management you need to remote monitoring and more. 

We’re glad to be your partners going forward to tomorrow and beyond. 

Physical Layer Environment Network Security Monitoring and Control

A150 Physical Layer Environment Network Security Monitoring and Control System Brochure

Full visibility, network security and control of your physical layer environment. Monitor your entire hybrid cloud and IT infrastructure from a cloud-based, integrated dashboard:

  • Introducing the A150 System
  • A150 System Architecture – High-Level Overview
  • A150 System Features
  • System Controller Hardware and Specifications
  • Monitoring Controllers, Probes and Sensors



How a Fire-Rated Power Distribution System Reduces Risk


Fires are not super common in data centers, but they do happen, and most often when they do, they are not reported (at least not in the news). Much of the reason for this is that fires are usually small and quickly contained. It is unusual for a data center to become fully engulfed. 

Even when such fires are reported on, details can be sketchy, with causes and investigations hidden behind NDAs, making them difficult to learn from. While companies want to retain control over the narrative and how it impacts their reputation, the information around fires can and should be shared within the industry to prevent further similar events. And there are some things you can do now – such as remote monitoring – to keep your staff and facilities safe.

Remote monitoring can help your data center keep staff and equipment safe from fire damage

The OVHCloud Incident

On 10 March 2021, near midnight local time, a fire started in the OVHCloud SBG2 data center, quickly got out of control, and even damaged two other nearby data centers. The fire started near two UPS units, one of which had been worked on earlier that same day.

The company is considered a European alternative to the giant US cloud operators and is a key participant in the European Union’s Gaia-X cloud project. Its data centers serve key functions for the French government, the UK vehicle licensing agency, and others. Operations were directly impacted by the fire, although the company did have backup data centers and quickly restored service to most customers.

But poor design and operational practices that seem to sacrifice dependability for innovation have caused some issues, including major outages, for OVHCloud. The fire just punctuated an ongoing issue but also caused many data center operators and customers to pause and think about something probably not mentioned often enough: the risk of fire in data centers. 

What are the Fire Risks?

When broken down, there are a few key fire risks common to all data centers, and most of the time they are relatively easy to mitigate.

  • Electrical Equipment – temperature changes can increase this risk, and of course, a source of risk is also backup power equipment. Generator rooms that contain gas or diesel fumes can create intense fires quickly that would be hard to fight.  
  • Cables – data center power cables are usually not enough to start a fire by themselves, but a damaged cable can release sparks or overheat and cause a small fire or thermal incident that can then spread. Proper cable management and monitoring of underfloor and overhead cabling can help prevent these events. 
  • HVAC Infrastructure – heating and cooling units present some fire danger to data centers and should be inspected often and monitored carefully. Its operation is also critical to maintaining optimal temperatures in the data center to prevent other thermal events. 
  • External Fire Sources – California wildfires. The recent blaze in Boulder. The Texas fires last year. All are examples of external fire risk to data centers, specifically those Edge data centers in less populated areas. 

Most of these can be controlled by properly managing the data center, but there are some events that can only be prepared for. Having fire suppression systems and plans in place is critical regardless of the likelihood of the danger. 

Fire Prevention Systems

Of course, the best prescription for dealing with fire is prevention. The key to this in the modern data center environment is a complete remote monitoring system. The A150 Network Monitoring System is designed specifically for data centers, IT rooms, and confidential lab operators with virtual graphics showing temperature, rack power consumption, and humidity. 

But most importantly for this topic, the system provides alerts for mission critical events like the sudden temperature changes associated with fires, smoke alarms, and sprinkler activation alerts. You can also be alerted to things like power spikes, a rise in server temperatures, or even UPS unit failures so you can make emergency repairs and mitigate fire risk before one starts. 
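As a simplified, hypothetical sketch of how this kind of threshold-based alerting works in general (this is not the A150’s actual API; the rack ID, readings, and limits below are stand-ins), the core logic looks something like this:

```python
from dataclasses import dataclass


@dataclass
class Thresholds:
    max_inlet_temp_c: float = 32.0    # hypothetical rack inlet temperature limit
    max_rise_c_per_min: float = 5.0   # a sudden rise often precedes a thermal event


def check_rack(rack_id, current_temp_c, previous_temp_c, limits=Thresholds()):
    """Return alert messages for one rack based on simple temperature thresholds."""
    alerts = []
    if current_temp_c > limits.max_inlet_temp_c:
        alerts.append(f"{rack_id}: inlet temperature {current_temp_c:.1f} C exceeds limit")
    rise = current_temp_c - previous_temp_c
    if rise > limits.max_rise_c_per_min:
        alerts.append(f"{rack_id}: temperature rose {rise:.1f} C in one minute")
    return alerts


# Example: a sudden jump trips the rate-of-rise alert even below the absolute limit
print(check_rack("R12", current_temp_c=31.0, previous_temp_c=24.0))
```

Real systems layer on smoke, humidity, and power sensors plus notification channels, but the principle is the same: compare live readings against limits and escalate before a small event becomes a fire.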

The reality is that anything you can do to prevent a fire before it happens is preferable to anything you can do to suppress and extinguish an active blaze. However, those are contingencies you need to prepare for.

Fire Rated Power Distribution Systems

There are two primary principles when it comes to any fire safety plan, anywhere. They are the two P’s: prevent (which we discussed above) and protect. Part of both of these is the vital role of uninterrupted power. Enter the role of a fire-rated busbar trunking system.

These systems can be operational for a period of up to two or even three hours depending on their ratings. They’re also cased in a fire-retardant self-extinguishing resin that essentially protects the power supply itself. The idea is that this will give first responders time to extinguish the fire before it can spread.

How do you choose the right one for your data center? Well, there are established guidelines that indicate the type of fire, the duration they were tested for, how they endured water spray, such as that from sprinkler systems, and the power supply integrity in a fire situation.

Technically, they look like this: 

  • BS IEC 60331-1: 2019 – Tests for electric cables under fire conditions; circuit integrity
  • BS 8602:2013 – Method for assessment of fire integrity of cast resin busbar trunking systems for the safety-critical power distribution to life safety and firefighting systems
  • BS 6387:2013 (CWZ Protocol) – Test method for resistance to fire of cables required to maintain circuit integrity under fire conditions. Fire-resistant cables are classified by a sequence of symbols (for example, CWZ) in accordance with the fire resistance criteria they meet, the selected test temperature, and the length of the fire resistance test per BS 6387
  • NFPA 75 – Standard for the fire protection of IT equipment
  • ISO 834 – Fire resistance tests – elements of building construction
  • ATEX & IECEx – ATEX certification is given to equipment that has gone through rigorous testing outlined by European Union directives and proved safe to use in specific environments with explosive atmospheres, according to the zone/s they are certified to be used in.

The most important part of this discussion is the planning stage. It’s vital to have a disaster plan in place that addresses both prevention – keeping a fire from happening in the first place – and protection, so you can shield the data center and minimize a fire’s impact.

The more we learn from data center fires, the more likely we are to be able to prevent them going forward, and mitigate the damage in the rare event they do occur. 

Need some advice on cable management, remote monitoring, or other aspects of data center planning? Contact us – we’d love to start a conversation about how we can help you with your data center management plan. 




Extreme Ultraviolet (EUV) Lithography – Keeping Moore’s Law Alive


In 1975, looking at the next decade, a guy named Gordon Moore revised his previous forecast that the number of transistors on an integrated circuit would double every year, predicting instead that it would double every two years. Moore was not a prophet, nor a brilliant data analyst, but as his prediction held true, it later became known as a law.
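In math terms, the revised prediction says transistor counts grow geometrically. A quick worked example (the starting count \(N_0\) is arbitrary) shows how fast that compounds:

\[
N(t) \approx N_0 \cdot 2^{t/2} \quad\Rightarrow\quad N(10\ \text{years}) \approx N_0 \cdot 2^{5} = 32\,N_0
\]

Doubling every two years multiplies transistor counts roughly 32-fold over a decade, which is why the prediction ended up steering long-term R&D roadmaps.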

The law has become more of a guide, influencing the policies for research and development of the largest companies and chip manufacturers in the world. And it, and a new machine helping to keep Moore’s law alive, are what your iPhone and those robots from Boston Dynamics with the best dance moves have in common.


Let There Be Light

First, we must understand lithography, a printing-like method used to make circuits. Technically defined, lithography is printing on a plane surface treated to repel the material being printed except where it is intended (or, in the case of circuits, needed) to stick.

The use of light for this treating and etching process is common, but one machine, built by ASML, a Dutch company that has cornered the market for etching the tiniest nanoscopic features into microchips with light, is playing a huge role in keeping Moore’s law viable. 

ASML introduced the first extreme ultraviolet (EUV) lithography machines for mass production in 2017, after decades spent mastering the technique, and the machine needed for the process is, to put it mildly, massive and mind-blowing. It’s expensive too, with a sticker price of around $150 million. TSMC, Samsung, and Intel are initial customers.

Amazon Prime won’t be enough to get the massive machine delivered, unless you have 40 freight containers, three cargo planes, and 20 trucks on standby. What’s the big deal with this machine, and why does it (and its future children) matter?

How it Works

Think of a machine the size of a bus with 2 kilometers of cabling and over 100,000 parts. Inside are a series of nano-mirrors polished to precision that literally project extremely focused ultraviolet light into future chips to etch features that are often just a few atoms wide. That’s right, atoms. 

This means chips with components smaller (and more durable in many ways) than they have ever been. Smaller chips that are just as powerful, nano-sensors that are just as sensitive or accurate in a fraction of the space they take up now, and more will enable chips to get tinier, lighter, and more powerful than ever before. 

The Moore’s Law Limit

How small can chips get? Some think that Moore’s law is reaching the point where it is no longer viable, for three key reasons:

  • Electrical leakage – As transistors get smaller, they at first become more efficient, but as they have reached nano-size, the transistor often can’t handle all of the electricity, and that means heat, and heat means potential damage to the transistor and maybe even the entire chip in some circumstances. Therefore, we can only decrease the size of a chip as we increase cooling power.
  • Heat – The electrical leakage and resulting heat means that one of two things must be limited: the amount of voltage or the number of transistors in a given chip, thus limiting the power (see the power model sketched after this list). The technology of extreme ultraviolet lithography may offer some help in this area, but that remains unknown.
  • Economics – The price of this machine is just one factor. As chips get hotter and need more cooling the cost of keeping a data center at a viable temperature goes up, and that cost must be passed on to someone, generally the consumer. And businesses also want to extend the life of their equipment, ensuring it lasts as long as possible. Faster equipment with a shorter lifespan may not be as appealing to the average buyer or data center manager.
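The voltage-versus-transistor trade-off in the Heat point above follows from the standard first-order model for dynamic (switching) power in CMOS chips; the model is general knowledge rather than something from this article, but it helps explain the limit:

\[
P_{\text{dyn}} \approx \alpha \, C \, V^{2} f
\]

Here \(\alpha\) is the fraction of transistors switching, \(C\) is the switched capacitance, \(V\) is the supply voltage, and \(f\) is the clock frequency. Because power scales with the square of the voltage and grows with switching activity, packing in more transistors without lowering voltage or frequency drives heat up quickly.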

What does all this mean when we break it down?

Well, the data center of tomorrow may be a fraction of the size of those we have today. Or it may be equally as large, but able to store and deliver data at rates we can’t even imagine. Equipment, servers, remote sensors, everything may keep shrinking, to a point. But there will be a point when Moore’s law will no longer be valid or achievable, and that day may come sooner rather than later.

Are you running the data center of today, but looking forward to the data center of tomorrow? Are you interested in the latest remote monitoring and cabling solutions? Contact us at AnD Cable Products. We’d love to talk about what tomorrow looks like, and how we can help you head in the right direction today.

WHITEPAPER – Optimizing Server Cabinet Rack Space to Maximize Efficiency and Reduce Costs


Smart optimization can help you increase rack space and realize significant equipment cost savings. Read our step-by-step guide that shows you how – and how much you could save.

  • How Much Rack Space You Could Save
  • How to Optimize for Maximum Efficiency
  • Savings for New and Retrofit Installations
  • Overall Cost and Space Savings Post-Optimization



Edge Data Centers – Space and the Final Frontier


Computing on the edge: it seems that everyone is doing it, from big industry to manufacturers, from ISPs to cloud computing centers. The closer you can locate computing and analytics power to machines connected via the IoT and other data sources, the faster you can gather data, and the more data you can store and analyze in a timely manner. For some, edge data centers seem like the final frontier for data.


This has resulted in data centers that vary in size, from the size of a very large cabinet to those contained in the space of a small shipping container. But like any journey to the edge, there are challenges and risks. There are two primary ones we will address here:

  • Temperature – Because of the small spaces edge data centers often occupy, airflow and temperature control can be tricky.
  • Space – Smaller size also means that saving space is critical, and on the flip side, can also enable more airflow and indirect cooling in a confined area.

In this way, the two primary challenges are related, and often a solution that mitigates one will also help mitigate the other. Let’s take a quick look at each of these.

Controlling the Environment on the Edge

The temperatures that edge data centers operate at are critical. And there is a huge difference between the cooling we need for a building designed to keep people comfortable, and a building designed to serve machines. Think of it this way: if someone opens the door to your office, you may feel a blast of warm or cold air, depending on the time of year. Your discomfort disappears quickly when the door closes, as the HVAC system takes over, and brings air back into the broad temperature tolerances humans can endure.

However, what happens when you go to an edge data center and open the door? The answer is, it depends on where it is. Large, brick and mortar data centers can be located in areas with minimal environmental challenges and low risk of natural disasters. But edge data centers must be located, well, where they are needed. That means in dusty and dirty environments, areas with extreme temperature fluctuations, and more.

There are really only two choices:

  • Develop and deploy equipment designed to withstand extremes, at a higher price point. A good example is cellular equipment like that developed by AT&T. However, the cost of this equipment is too high for standard edge data center deployment at scale.
  • Work with existing, readily available equipment and use unique strategies to combat environmental changes at a small scale, including using tents or shrouds for entry and exit, using handheld temperature and humidity monitors to evaluate current conditions, and developing strategic plans for unexpected events.

Another part of the solution is to use remote monitoring, AI and the IoT in edge data centers to mitigate the need for human intervention. Monitoring the health of equipment and preventing disaster in the first place is one of the keys to efficient management of edge data centers.

This is but one of the challenges data center managers face. The second is the efficient use of available space.

Saving Space

While cooling and environmental control are critical, so is the efficient use of space. This can result in increased airflow and easier HVAC solutions while also enabling more servers to be installed in the same amount of space.

This involves a few key steps:

  • Rack Selection – Whether a data center uses 23” or 19” racks, there are rack solutions that take up less space, and are also able to use better rack management options.
  • Cable Management – Zero U horizontal cable managers make more room for servers in a single rack, and they prevent the “spaghetti mess” that can happen in server racks and be especially problematic in edge data centers, which are more compact.
  • Compact Vertical Cable Management – 11U cable managers also save space and keep cables organized and easy to access should moves, changes, or repairs be needed.

Anything that can be done to save space in an edge data center makes facing the other challenges related to environmental control easier, but it also has another impact: an economic one. The less space you need to get the computing power you need, the more compact your data center can be. Alternatively, this can give you space to scale as needed without creating yet another data center space.

At the edge, there are always challenges, but there are also solutions. From controlling the environment in and around the data center to using the space in the most efficient way possible, with the right equipment, these obstacles can be transformed into opportunities to change not only how much data is collected and how quickly it can be acted upon, but where it happens as well.

Do you have questions about saving space in your edge data center? Are you looking for remote monitoring solutions? Then contact us here at AnD Cable. We’d love to start a conversation about how we can help you.



7 Considerations When Choosing Fiber Optic Cable


Fiber optic cable has become the go-to choice for a variety of applications by data center managers. The reasons are many, including advances in cable technology that make it an even better choice. But there are several things to consider when choosing fiber optic cable to ensure it’s the right fit for the application. Here are seven of the most important ones.

Jump to Section:

  1. Distance
  2. Interference
  3. Bandwidth
  4. Security
  5. Cable Size
  6. Cost
  7. Durability

Distance

One of the big advantages of fiber optic cable is the loss factor: fiber loses only about 3% of its signal over 100 meters, compared to much greater losses with copper cables like CAT6. While copper may be a great choice for short distances, the longer the cable needs to be, the bigger the advantage of choosing fiber optic cable.

So the first factor to consider when choosing fiber optic cable is the distance the data must travel.
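For readers who think in decibels, the 3% figure above converts roughly as follows (this is just a restatement of the article’s number, not a cable specification):

\[
10 \log_{10}\!\left(\tfrac{1}{0.97}\right) \approx 0.13\ \text{dB per 100 m} \approx 1.3\ \text{dB per km}
\]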

Interference

Fiber is fully resistant to interference from various sources like power lines, lightning storms, and even deliberate scrambling and disruption. So while the first consideration is how far the data must travel, the second consideration is where the data may travel. In data centers, whether cables are managed by running overhead or the less common instance of running through underfloor spaces, there can be sources of interference in or near that path.

This is also true in edge data centers, where everything is more compact and closer together. This is also true in modular data centers, and the right fiber cable can ensure that you can scale quickly and easily as needed. As we move toward colocation and hyperscaling, this becomes even more important.

Bandwidth

Data centers must be prepared for the future, and the bandwidth your cables can handle is a big part of that. For instance, the rise in the use of OM5 cables over OM3/OM4, especially in new builds, is an indication that data centers are preparing for increased 5G traffic and traffic from VR and AR applications.

This is essential to prepare for the coming 400G demands, especially in Edge data centers. As “work from home” or “work from anywhere” becomes the norm, even smaller residential data centers will be inundated with new traffic, as we saw through the COVID-19 pandemic. It seems that more companies are shifting to hybrid workforces, moving their corporate headquarters out of city center areas that are more expensive to rent, and even enabling partially or fully remote workforces.

Combine that with increases in “shopping from home” and multiple streaming devices, and speed and bandwidth are more important than ever.

Security

Of course, security is one of the top concerns for any data center. A single breach can put an entire company out of business, and result in serious issues if the data of thousands of customers is compromised. While most security issues are found in software and in the human factor (like compromised passwords) there is still a certain amount of risk in physical hardware.

However, fiber cables are difficult to compromise without the intrusion being detected, which means at the very least, using fiber cables, especially in areas where they could be potentially compromised physically, is a vital part of an overall data center security plan. Choosing the right cable in the right place can make the difference between protecting your data center’s security and digital assets, and a potentially costly data breach.

Cable Size

Over time, thinner fiber cables that carry as much data as their larger counterparts have been developed, making it practical to use fiber nearly anywhere. These thinner cables can also be bent and routed easily, saving space in your cable management systems.

Thinner cables also contribute to higher airflow and more efficient cooling, another potential area of cost savings. Fiber cables can also be bundled, organized, and labeled easily, preventing the spaghetti mess that often accumulates at the rear of server racks. Of course, this can also be prevented by having a better cable management plan in place.

In short, consider the size of cable you are using in any given area, and weigh that with other factors like distance, interference, and bandwidth.

Cost

Above, we mentioned OM5 being the future of fiber cables, but their wide adoption will come as they are produced in various lengths and sizes on a larger scale. This is because at the moment, they are produced to custom specifications. However, as OM3/4 are still viable and compatible with OM5, you can update your data center in incremental stages, and still utilize the less expensive OM3/4 cables as needed.

You’ll want to weigh cost against performance. Yes, OM5 is the best way to prepare for the future, but that can be done in cost-effective stages as your data center changes and grows. Replacing cables when you are doing moves and changes, or a new build will save you money in the long run.

Durability

Choosing fiber optic cable is easy when it comes to durability, as it’s an extremely durable cable for the most part. It is important that you evaluate where and how the cable is being used when choosing the proper cable. Where bends happen, and in an area where there may be more moves and changes than normal, you will want the most durable cable for that application.

Fiber comes in different diameters and insulation levels, and so you should be sure to choose the right one for that particular application. Evaluate several ways you can improve cable use to increase efficiency and scalability.

When choosing fiber optic cable that’s the best fit in any given application, be sure to take all of these factors into consideration. Need more information? You can check out some of the great information on our blog and in our various white papers, but if you still have questions, reach out to us. We’d love to start a conversation about how we can meet your data center cabling needs at any scale.

Ultimate Data Center Cable Labeling System

Ultimate Cable Labeling System - Epson Labelworks PX Printers and AnD Cable Products UniTag Cable Labels
  • Maximum efficiency – our system bundle gives you everything you need, portable to where you need it
  • Industry-leading savings – both products have been designed to save you money, every time you use them
  • Clarity and transparency – printed labels ensure clarity and generous plastic cable label sizes allow for 3 lines of information
  • Positively impact uptime – combined, they make cable identification and troubleshooting quick and easy
  • Enable color cabled runs – UniTag color options align to ANSI/TIA 606-B Cable Labeling Standards
  • No-risk! Purchase with 100% confidence – both top-quality products are backed by a Lifetime Warranty
  • Download the Ultimate Cable Labeling System Brochure



How to Prevent Data Center Downtime

Data center downtime is no joke. It can literally make the difference between a data center surviving and failing. And a new study by the Ponemon Institute shows that modern data centers and data centers at the edge are more susceptible to downtime than ever before. This is because data centers are much more complex than they ever have been. Most core data centers suffer 2.4 facility shutdowns per year, and some of those last around 138 minutes – more than two hours! Edge computing data centers experience twice as many shutdowns, though these average about half the duration of core data center outages.

In addition, it is helpful to remember that although total facility failures occur with the least frequency, individual server or rack failures can also be costly, especially in Edge data centers, where every piece of equipment has some critical function.

At the outset it is also important that we define core data centers and edge data centers. Edge data centers are usually about ⅓ the size of their counterparts, although the term edge does not refer to size. Edge refers more to the data center location, generally closer to where the data center is needed to increase speed and response times, and save bandwidth.


Conflicting Priorities

Data center managers are faced with decisions about efficiency, the transition to Net Zero carbon emissions, and avoiding redundancies whenever possible, but this also can leave them susceptible to downtime events if a problem occurs. This is illustrated by the causes of downtime: UPS battery failures, human error, equipment failures, other UPS equipment failures, and cyberattacks.

Respondents to the survey revealed that over half (54%) are not following best practices, and that cost concerns increase their risk of data center downtime.

The Cost of Downtime

While cost concerns often increase the risk of downtime, the actual cost of downtime can be far greater. According to a 2014 Gartner survey, facility downtime cost an average of nearly $5,600 per minute, or between $140,000 and more than half a million dollars per hour, depending on the organization’s size. These costs continue to rise: more recent figures from the Ponemon Institute survey mentioned above put the average at nearly $9,000 per minute.
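As a rough sketch of what those per-minute rates mean for a single outage of average length (the figures below are simply the industry averages cited above, not a prediction for any particular facility):

```python
# Rough cost of a single outage at the per-minute rates cited above.
# Figures are industry averages; any given organization's number will differ.

gartner_2014_per_minute = 5_600   # USD, Gartner 2014 average
ponemon_per_minute = 9_000        # USD, more recent Ponemon average

outage_minutes = 138              # the average core outage duration cited earlier

for label, rate in [("Gartner 2014", gartner_2014_per_minute),
                    ("Ponemon (recent)", ponemon_per_minute)]:
    print(f"{label}: ~${rate * outage_minutes:,.0f} for a {outage_minutes}-minute outage")

# At the more recent rate, a single average-length outage costs
# roughly $1.24 million - well over a million dollars per event.
```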

It’s about more than just money, though. The real cost comes in reputation and customer service. Data centers that suffer above-average downtime are much more likely to go bankrupt. Uptime is perhaps more critical than it has ever been, and customers remember problems far more easily than they remember years of reliable service.

So what do we do to prevent data center downtime?

How to Prevent Data Center Downtime

There are solutions to downtime issues, and many are known to data center managers. However, they are easier said than done. Here are a few of them:

  • Adopt best practices – The fact that most data centers know they are not following best practices shows they know what to do; they are just not doing it.
  • Invest in new equipment – Equipment failures often stem from outdated hardware that is no longer up to the data center’s current needs. Replacing it is one of the easiest ways to reduce or eliminate downtime.
  • Improve your training – Be sure that all employees, both existing and new, are aware of best practices and what you expect of them on the job. Make training comprehensive and focus on outcomes and skills that build long-term success.
  • Improve your documentation – Your data center plans, including power, cabling, cable management and other plans, should be thoroughly documented and available to employees. If not, in the words of Captain Picard, “Make it so.”
  • Don’t fight redundancy – Redundancy is a good thing for the most part. You certainly don’t want to overdo it, but you do need to have contingency plans and equipment in case downtime does happen.

Of course, these solutions are simplified, and they are not always possible for data center managers to achieve with the resources they have available.

There is Room for Improvement

The takeaway from this data is twofold. First, data center downtime at these rates is unacceptable for most organizations. Second, there are solutions, and there is plenty of room for improvement. Among the solutions mentioned above, two elements are critical:

  • Redundancy – This has been preached from the beginning for both core and edge data centers, yet half of data centers have issues in this area. As a result, there is a trend toward more redundant equipment, especially at the edge, as large and small operations seek to better manage data center downtime.
  • Remote monitoring systems and AI – The other advancement, which addresses human error and detects equipment issues before they become a problem, is remote monitoring combined with AI. Machine learning can help data center managers fix issues before downtime occurs and respond faster when a problem does occur.

Simply adding these two things can take data centers a long way toward greater uptime and more reliable service. After all, this is the goal of both core and edge computing.

Whether you manage an existing data center or are considering starting one from scratch, we at AnD Cable Products are here for you. We can help you with everything from cable and rack management to labeling systems and remote monitoring. Have questions? Contact us today. We’d love to start a conversation about your specific needs.

Physical Layer Environment Network Security Monitoring and Control

A150 Physical Layer Environment Network Security Monitoring and Control System Brochure

Full visibility, network security and control of your physical layer environment. Monitor your entire hybrid cloud and IT infrastructure from a cloud-based, integrated dashboard:

  • Introducing the A150 System
  • A150 System Architecture – High-Level Overview
  • A150 System Features
  • System Controller Hardware and Specifications
  • Monitoring Controllers, Probes and Sensors

About the Author

Louis Chompff, Founder & Managing Director, AnD Cable Products
Louis established AnD Cable Products – Intelligently Designed Cable Management in 1989. Prior to this, he enjoyed a 20+ year career with a leading global telecommunications company in a variety of senior data management positions. Louis is an enthusiastic inventor who designed, patented and brought to market his innovative Zero U cable management racks and UniTag cable labels, both of which have become industry-leading network cable management products. AnD Cable Products only offers products that are intelligently designed, increase efficiency, are durable and reliable, are re-usable and easy to use, or reduce equipment costs. He is the principal author of the Cable Management Blog, where you can find network cable management ideas, server rack cabling techniques and space-saving tips, data center trends, the latest innovations and more.
Visit https://andcable.com or shop online https://andcable.com/shop/

Posted on 1 Comment

Modular or Traditional Data Center Builds – Which is Better?

Modular or Traditional Data Center Build - Which is Better? Cable Management Blog

“The future of data centers is modular,” one popular website states. “The traditional data center is not dead,” says another. Who is right? What is the future of data centers? Is one better than the other, and if so, why?

Here are some pros and cons, and some potential answers data center managers might want to consider. 

Modular vs. Traditional Data Center PUE

One of the first things to consider when comparing modular and traditional data centers is PUE, or Power Usage Effectiveness. Most of the time, modular data centers have a lower PUE. However, there is a cost associated with that number.

Traditional data center builds often have a higher PUE initially because there is space for expansion and additional equipment. This can sometimes come with higher HVAC and other costs until the data center is at capacity and running at maximum efficiency. We’ll talk about that factor more in a moment.

For modular data centers, because they are constructed with tight specifications and already at an efficient capacity per module, the PUE is lower from the start. All components are easily matched, and compact spaces are easier to control when it comes to cooling, humidity, and other factors.

What is the downside? When a brick-and-mortar data center is up and running at capacity and the design has been well executed, PUE levels can be similar, and it can be much simpler to make moves and changes without adding modules and construction.

Security

As with PUE, there are two sides to this coin. Modular data centers can be easier to secure, as they are more compact and self-contained. When installed behind a secure barrier with video and other surveillance measures, the physical security of a modular data center can be assured.

The flip side? Modular data centers may evolve and require additions over time, meaning the physical space will also have to be modified. Proper planning can mitigate this issue, but a traditional data center build can be easier to manage from this perspective, with security built into the construction itself, along with remote monitoring and other security features that must be handled differently with modular data centers. 

The argument over which is better can go either way, but the permanence of a traditional data center build often wins out when it comes to security discussions.

Modular vs. Traditional Real Estate

When locating a data center, we have talked about requirements like access to a green power grid, the ability to construct your own green energy backups, and more. Real estate that satisfies all of those requirements can be hard to find, and prices reflect that premium.

So in this case, the more compact modular data center build has some distinct advantages. The less real estate you need, the lower the initial cost to purchase (or lease) space for the data center. This also affects another factor: the cost of the build.

Building Costs

Constructing a modular data center is much cheaper than constructing a traditional one – roughly 30% less. That is a huge number when you are talking about initial costs. Combined with the lower cost of real estate, deploying a modular data center is much more efficient for those looking toward hyperscaling and colocation.

This has been shown to be especially true as more “work from anywhere” options become available, and the need for high-speed data center capacity shifts from city centers and similar areas to residential and suburban ones.  

This leads us to our next advantage:

Deployment Speed

The time needed to construct a traditional data center is much greater than that needed to deploy a modular one. The average traditional build takes 18-24 months from start to finish, but you can save around 30% of that time by going the modular route.

In part, this is because you avoid traditional construction delays due to bad weather, seasonal construction, and more.  
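To make those two roughly-30% figures concrete, here is a minimal sketch. The 18-24 month range and the 30% savings come from the estimates above, while the $10M budget is purely hypothetical:

```python
# Rough comparison of a traditional vs. modular build, using the ~30%
# savings figures cited above. All numbers are illustrative averages.

traditional_months = (18, 24)        # typical traditional build time
time_saving = 0.30                   # ~30% faster for modular
cost_saving = 0.30                   # ~30% cheaper for modular

modular_months = tuple(round(m * (1 - time_saving), 1) for m in traditional_months)
print(f"Traditional build: {traditional_months[0]}-{traditional_months[1]} months")
print(f"Modular build:     {modular_months[0]}-{modular_months[1]} months")

traditional_cost = 10_000_000        # hypothetical $10M traditional budget
modular_cost = traditional_cost * (1 - cost_saving)
print(f"Modular cost on a ${traditional_cost:,.0f} budget: ~${modular_cost:,.0f}")

# Roughly 12.6-16.8 months instead of 18-24, and ~$7M instead of $10M -
# several months of construction time and ~$3M saved on this hypothetical budget.
```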

This is not to say modular deployment is better – it is simply faster. It could be argued that a traditional build will last longer and the overall construction will be of higher quality, but that is not always the case. Many modular data centers are created with a similar lifespan in mind and can last just as long as a traditional build.

So which one is better?

The bottom line on which one is better, a modular or a traditional data center build? The answer is: it depends. Ask these questions:

  • How urgent is the need for this data center? 
  • Where will the data center be located? What are the costs associated with holding more real estate?
  • What is the purpose of this data center? Is the need for moves and changes anticipated?
  • What kind of security is needed, and what is possible in the data center location?
  • What is the long-term plan for this data center?

The answers in your situation may vary, but as much as the traditional data center is not dead, the modular data center is on the rise, and for many situations, it’s the best option. 

No matter whether you are deploying a modular data center or doing a traditional build, your rack and cable management matter, as do your labeling system and your physical layer security. At AnD Cable Products, we can help with all of these things. Give us a call today, tell us about your situation, and we’d be happy to have a conversation about how we can help.

We’re here for all of your data center needs. 

Physical Layer Environment Network Security Monitoring and Control

A150 Physical Layer Environment Network Security Monitoring and Control System Brochure

Full visibility, network security and control of your physical layer environment. Monitor your entire hybrid cloud and IT infrastructure from a cloud-based, integrated dashboard:

  • Introducing the A150 System
  • A150 System Architecture – High-Level Overview
  • A150 System Features
  • System Controller Hardware and Specifications
  • Monitoring Controllers, Probes and Sensors

About the Author

Louis Chompff, Founder & Managing Director, AnD Cable Products
Louis established AnD Cable Products – Intelligently Designed Cable Management in 1989. Prior to this, he enjoyed a 20+ year career with a leading global telecommunications company in a variety of senior data management positions. Louis is an enthusiastic inventor who designed, patented and brought to market his innovative Zero U cable management racks and UniTag cable labels, both of which have become industry-leading network cable management products. AnD Cable Products only offers products that are intelligently designed, increase efficiency, are durable and reliable, are re-usable and easy to use, or reduce equipment costs. He is the principal author of the Cable Management Blog, where you can find network cable management ideas, server rack cabling techniques and space-saving tips, data center trends, the latest innovations and more.
Visit https://andcable.com or shop online https://andcable.com/shop/

Posted on 1 Comment

4 Steps to Prepare Your Data Center for Net Zero Carbon Emissions

4 Steps to Prepare Your Data Center for Net Zero Carbon Emissions - AnD Cable Management Blog

The race to net zero carbon emissions is on – our economy and our world depend on it. The data center industry, one that tends to gobble up lots of power, is at the forefront of a number of initiatives being implemented around the world. How will you prepare your data center for net zero carbon emissions? Here are 4 steps to get you started.

The Situation Now

Data centers first became the focus of Greenpeace and other groups back in the data center boom of the early-to-mid 2010s. The focus at that time was on enterprise-level data centers – the big guys, in other words. The fact that data centers used lots of power became evident, and the movement to minimize that impact grew in both urgency and popularity.

So much so that the position of Chief Sustainability Officer (CSO) grew to overtake the emerging position of Chief Security Officer. Those in charge of cybersecurity ended up having to change their title to CISO (Chief Information Security Officer) because CSO had already become widely recognized.

Since then, smaller edge computing data centers have become the new focus. With COVID hastening the transition, today, cloud computing, AI, remote monitoring and other data center management trends have now established a level of control and sustainability not previously thought possible.

Thanks in part to these technological developments, net zero carbon emissions now feels more achievable and less like the plot-line in a futuristic sci-fi flick. So, in what areas of the data center can emissions be reduced?

Step One – Make a Commitment

There are several individual measures that promote sustainability. The key is to take all of those individual components and standards and work them into an ecosystem that supports your emissions goal.

Companies and countries alike are making a commitment to reducing carbon emissions as a part of their brand. However, words are not enough, and these companies – including the largest hyperscaling data center companies in the world – are taking action. Advances are happening quickly in the area of artificial intelligence (AI), remote management and data center design. Being on the leading edge of these developments shows that your data center is part of this commitment to a “greener cloud infrastructure.”

As with most strategies, only once a firm commitment has been made at the top of the organization can the necessary actions be taken, including giving leaders the authority to make decisions that align with the goal and ensuring that resources, responsibilities and accountabilities are allocated.

Step Two – Use Sustainable Energy Sources

Solar, wind and even hydroelectric power are all sustainable sources of power that can make dependence on coal and other carbon-intense fuels a thing of the past. Companies like Tesla and Microsoft are testing and deploying battery technology that can run data centers longer than ever before, even with no sun or wind available.

This means using the local grid as your primary power source only if sustainable energy is available 24/7. Otherwise, the data center will need to provide at least some sustainable sources of its own, such as a solar or wind farm designed to directly support the data center.

Because this is expensive, only the largest, hyperscale companies with large data centers can be 100% self-sufficient. Hybrid solutions could help to bridge the gap, such as supplementing local power supplies with solar and wind on site. Selecting a site that’s close to a local, sustainable power grid should be a factor in choosing where to locate your data center and will support the goal of net zero carbon emissions.

Step Three – Operate at Peak Efficiency

Not only should your power be sourced responsibly, but your data center also needs to operate efficiently. One way to reduce carbon dependency is simply to use less power. Strategies can include initiatives such as proactive device monitoring to identify ‘zombie servers’ – stacks that contribute little to performance but still consume significant resources to maintain.
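As a simple illustration of what proactive monitoring for zombie servers might look like, here is a minimal sketch that flags servers whose average CPU utilization sits below a threshold. The server names, sample data and 5% cutoff are all hypothetical; a real deployment would pull this data from your monitoring or DCIM platform:

```python
# Minimal sketch: flag potential "zombie servers" from utilization samples.
# Server names, sample data and the 5% threshold are hypothetical.

from statistics import mean

ZOMBIE_CPU_THRESHOLD = 5.0  # percent average CPU over the sampling window

cpu_samples = {                      # hypothetical % CPU readings per server
    "rack04-srv12": [1.2, 0.8, 2.1, 1.5],
    "rack04-srv13": [41.0, 55.3, 38.9, 60.2],
    "rack07-srv02": [0.3, 0.1, 0.4, 0.2],
}

for server, samples in cpu_samples.items():
    avg = mean(samples)
    if avg < ZOMBIE_CPU_THRESHOLD:
        print(f"{server}: avg CPU {avg:.1f}% - candidate zombie, review for decommissioning")
    else:
        print(f"{server}: avg CPU {avg:.1f}% - active")
```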

The efficient and responsible use of power is covered by United Nations Sustainable Development Goal 12, Responsible Consumption and Production. Some other relevant standards and metrics include:

  • PUE (Power Usage Effectiveness) – Determined by dividing the total power coming into the data center by the power used by the IT infrastructure (a quick worked example follows below).
  • LEED (Leadership in Energy & Environmental Design) – A green building certification program that rates building design.
  • PAR4 – A form of power measurement that accurately measures IT equipment power usage to help plan capacity optimally.
  • ASHRAE (The American Society of Heating, Refrigerating and Air-Conditioning Engineers) – Standards for the temperature and humidity operating ranges for IT equipment and data centers.
  • CCF (Cooling Capacity Factor) – A metric used to evaluate rated cooling capacity against how that capacity is actually used.

These standards are just a few of those used to rate the efficiency of a data center and are designed to help data centers move toward net zero through more effective use of the power they have available.
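As a quick worked example of the PUE calculation described in the list above (the power figures here are hypothetical and chosen only for illustration):

```python
# Quick worked example of the PUE formula described above.
# The power figures are hypothetical and purely for illustration.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total power entering the facility / power used by IT equipment."""
    return total_facility_kw / it_equipment_kw

total_facility_kw = 1_500   # hypothetical: everything the utility delivers
it_equipment_kw = 1_000     # hypothetical: servers, storage and network gear only

print(f"PUE = {pue(total_facility_kw, it_equipment_kw):.2f}")
# PUE = 1.50 -> for every watt of IT load, half a watt goes to cooling,
# lighting and other overhead. The closer to 1.0, the more efficient the site.
```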

Step Four – Build In Resilience and Agility

The real job of a data center is uptime. Yes, customers want a green data center that is moving toward, if not already achieving, net zero emissions. However, at the same time, they expect that there will be no reduction in service. They expect full uptime, speed, and data protection.

This means that systems must not only be green, but must be reliable and include redundancies, power backups, and other protections, including cybersecurity and physical layer security to protect both customer assets and their data.

The good news is that not only is clean energy better for the environment, but it is also more reliable in many cases, allowing data centers to keep uptime near 99.999% standards. This is a balance that sustainable data centers must constantly monitor, adjust to and plan for.

Net zero carbon emissions is the standard of the future, and your data center can prepare now. Use clean energy, and plan to scale with that energy use in mind. Use the energy you have efficiently, and plan for resiliency as part of your transition strategy. It’s what clients, customers and the world deserve.

Have questions about optimizing your physical layer, monitoring and remote control or ways to use your floor space efficiently? Contact Us at AnD Cable Products. We’re here to help.

Physical Layer Environment Network Security Monitoring and Control

A150 Physical Layer Environment Network Security Monitoring and Control System Brochure

Full visibility, network security and control of your physical layer environment. Monitor your entire hybrid cloud and IT infrastructure from a cloud-based, integrated dashboard:

  • Introducing the A150 System
  • A150 System Architecture – High-Level Overview
  • A150 System Features
  • System Controller Hardware and Specifications
  • Monitoring Controllers, Probes and Sensors

About the Author

Louis Chompff, Founder & Managing Director, AnD Cable Products
Louis established AnD Cable Products – Intelligently Designed Cable Management in 1989. Prior to this, he enjoyed a 20+ year career with a leading global telecommunications company in a variety of senior data management positions. Louis is an enthusiastic inventor who designed, patented and brought to market his innovative Zero U cable management racks and UniTag cable labels, both of which have become industry-leading network cable management products. AnD Cable Products only offers products that are intelligently designed, increase efficiency, are durable and reliable, are re-usable and easy to use, or reduce equipment costs. He is the principal author of the Cable Management Blog, where you can find network cable management ideas, server rack cabling techniques and space-saving tips, data center trends, the latest innovations and more.
Visit https://andcable.com or shop online https://andcable.com/shop/

Posted on 1 Comment

8 Critical Data Center Practices for Floor Design and Delivery

Woman Drawing Data Center Floor Plan Designs on Glass Wall

The physical layer of the internet, the data center, is largely dependent on floor plans: not only the design and type of the floor itself, but where you put everything and how that impacts data accessibility and delivery for your clients.

Perhaps the most important feature of any data center is agility and flexibility. To prepare for the future, floor plans must have adaptability built in. How do we get there?

Jump to Section:

  1. Density and Capacity
  2. Prepping for Future Architecture and Changes
  3. Storage and Cooling
  4. Building Management Systems
  5. Built-in Redundancy
  6. Remote Management
  7. Physical Layer Security
  8. Using Renewable Energy
Woman Drawing Data Center Floor Plan Designs
To prepare for the future, data center floor plans must have adaptability built in

1. Density and Capacity

First, we must think in terms of both density and capacity: there is always a tradeoff between power and space. A denser server system will require a more sophisticated power and cooling strategy, which may in the long run be more costly per watt than a less aggressive approach.

The most common answer is a blend of high- and low-density rack layouts to get the maximum benefit of each. Modular density allows capacity to be added over time, and with energy costs higher than the cost of space (at least currently), a less dense approach makes more sense for most applications and data center floor designs.

2. Prepping for Future Architecture and Changes

This brings us nicely to the next point. Server configurations are constantly changing, and likely will continue to do so going forward. Balancing density and capacity when it comes to data center floor design makes it easier to make moves and changes when the need arises.

A forward-thinking floor design simply means you are ready for whatever technology takes over the market next. Think about how your current layout could be adapted to new forms and layouts.

3. Storage and Cooling

This naturally leads to storage and cooling, which is directly related to density and capacity, and future thinking. You must consider how you will store data, what kind of servers and racks you will use, and even where you will source them and other materials.

A part of that will also be your cooling plan. How will you cool your systems? Will you have an underfloor wiring plan or an overhead one? What kind of floor will you have? What will your HVAC system look like, and how will access to the building be controlled? This is all something to think about while looking at your floor designs.

4. Building Management Systems

What does your building management system look like, and how well does it meet your data center’s needs? There are several aspects to consider, including your maintenance services:

  • Generators
  • UPS Batteries and backups
  • Electrical supply infrastructure
  • Mechanical systems maintenance

All of these pieces require different levels of maintenance, and physical accessibility must be a consideration. This also leads to our next point.

5. Built-in Redundancy

When maintenance occurs or disaster strikes, redundant systems need to be in place to keep the stellar uptime customers demand. This must be a part of your data center floor design from the start. This is a part of not only data center service, but physical and data layer security as well.

6. Remote Management

If 2020 taught us anything, it’s that an amazing amount of work can be done remotely. While remote monitoring and even management of data centers have been possible for quite some time, the pandemic propelled them to a mainstream priority. Any data center design conceived going forward must be structured to enable remote monitoring and management.

This relates to everything, from building management systems to server management systems. Sensors can detect when something is wrong, in many cases take action to correct the issue, and inform human data center managers so that permanent corrections can be implemented.
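As a simplified sketch of that detect, correct and notify loop; the readings, thresholds and notification hook below are hypothetical and not tied to any specific monitoring product:

```python
# Simplified detect -> correct -> notify loop for a single temperature sensor.
# Readings, thresholds and the notify() target are hypothetical.

TEMP_WARN_C = 27.0    # upper bound chosen for this example
TEMP_CRIT_C = 32.0

def notify(message: str) -> None:
    # Stand-in for email/SMS/dashboard alerting in a real system.
    print(f"ALERT: {message}")

def handle_reading(rack_id: str, temp_c: float) -> None:
    if temp_c >= TEMP_CRIT_C:
        notify(f"{rack_id} at {temp_c:.1f} C - critical; escalating to on-call staff")
    elif temp_c >= TEMP_WARN_C:
        # Automated first response before a human ever gets involved.
        print(f"{rack_id} at {temp_c:.1f} C - increasing CRAC fan speed, logging event")
    else:
        print(f"{rack_id} at {temp_c:.1f} C - normal")

for rack, temp in [("rack-07", 24.5), ("rack-12", 28.3), ("rack-15", 33.1)]:
    handle_reading(rack, temp)
```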

7. Physical Layer Security

Of course, a part of remote management leads to physical layer network security. This includes everything from digital locks for entrances with biometric security in place to door alarms, AI monitoring of camera systems, and more.

These systems are far better than an on-premises security team alone: they can be monitored from anywhere, and both managers and, if necessary, the appropriate authorities can be notified of any incident requiring attention.

8. Using Renewable Energy

Finally, an important part of data center management and development going forward is the use of renewable energy. While this does not always impact the physical layout of the interior of your data center, it may impact your power and electrical configurations, the redundancies you need to have built into your data center, and the area you have to expand the physical footprint of your data center going forward.

A big part of your data center floor design and how you arrange both high and low-density areas of the data center is related to the server racks, cable management products, and physical layer security systems you choose.

At AnD Cable Products, we can make sure you have everything you need to set things up properly from the start or to make moves and changes as you need to. Contact Us today! We’d love to discuss your data center needs.

Physical Layer Environment Network Security Monitoring and Control

A150 Physical Layer Environment Network Security Monitoring and Control System Brochure

Full visibility, network security and control of your physical layer environment. Monitor your entire hybrid cloud and IT infrastructure from a cloud-based, integrated dashboard:

  • Introducing the A150 System
  • A150 System Architecture – High-Level Overview
  • A150 System Features
  • System Controller Hardware and Specifications
  • Monitoring Controllers, Probes and Sensors

About the Author

Louis Chompff, Founder & Managing Director, AnD Cable Products
Louis established AnD Cable Products – Intelligently Designed Cable Management in 1989. Prior to this, he enjoyed a 20+ year career with a leading global telecommunications company in a variety of senior data management positions. Louis is an enthusiastic inventor who designed, patented and brought to market his innovative Zero U cable management racks and UniTag cable labels, both of which have become industry-leading network cable management products. AnD Cable Products only offers products that are intelligently designed, increase efficiency, are durable and reliable, are re-usable and easy to use, or reduce equipment costs. He is the principal author of the Cable Management Blog, where you can find network cable management ideas, server rack cabling techniques and space-saving tips, data center trends, the latest innovations and more.
Visit https://andcable.com or shop online https://andcable.com/shop/