Why the Data Center Liquid Cooling Space is Heating Up

My latest column for Data Center Knowledge is on direct liquid cooling (DLC). I’ve been researching and writing about DLC for several years, but it finally feels like there is demand-side momentum building to match the fervour from suppliers. The pros and cons of the various flavours of DLC still stand, but with support from all of the main server OEMs, adoption by some hyperscale operators, and even interest from colos, liquid cooling may finally be reaching a tipping point beyond traditional HPC.

Data Centers Will Invest in Renewables Despite the EPA, Not Because of It


This week’s column for Data Center Knowledge looks at the EPA’s recent decision to repeal the troubled Clean Power Plan. A number of large hyperscale operators have come out in support of the plan, which they claim would help them, and the wider industry, with future renewable investments.

It has been pointed out that these hyperscale operators invest in renewables because of tax incentives rather than out of a sense of altruism. That is accurate up to a point. The hyperscale operators themselves admit that “Renewable energy is less subject to price volatility than non-renewable energy; provides greater long-term cost certainty to its purchasers; and, in many parts of the United States, is available at prices comparable to or better than the current prices for other electricity options”. But as Greenpeace – which has been highly critical of some datacenter operators’ use of renewables – and others have pointed out, some large operators have helped to make renewable energy tariffs more widely available and have increased the use of renewables by other industries.

There is a lot of complexity around the sourcing and distribution of renewables, but the decision by the EPA to roll back the Clean Power Plan still looks like a retrograde step.

AI and Edge Could Usher in a New Era of Liquid-Cooled Data Centers

I recently spoke with Tate Cantrell, CTO at Verne Global, about his views on direct liquid cooling in datacenters. The article is posted on Verne’s site. Tate, like many other datacenter technologists, takes a pragmatic view of liquid cooling but says the colocation provider is happy to accommodate DLC if it is what the customer wants.

There are good reasons to think that more customers will want to use DLC technology as the high-density workloads associated with machine learning and deep learning proliferate. That is good news for the 20-plus specialist DLC suppliers, as well as the main server OEMs, which have either licensed that technology or developed their own in-house. Some edge deployments – especially ruggedised, non-whitespace micro-modular designs – could also be good candidates for DLC, and several suppliers are targeting this use case too.

Datacenter resiliency: It’s the network, stupid!


My latest column over at Data Center Knowledge picks through some of the latest thinking on datacenter resiliency from experts such as Uptime Institute, as well as the actual resiliency strategies deployed by some cloud service providers.

The upshot is that some cloud service providers are investing heavily in new approaches to IT and datacenter resiliency, distributing virtualized applications, instances, or containers across multiple datacenters using middleware, orchestration, and distributed databases.

“The evolution of compute from single-site data centers with proven in-house engineering to a multi-site hybrid computing model is enabling distributed models of resiliency that can achieve high availability and rapid recovery alongside improved performance,” Uptime Institute stated in a recent webcast. “The recent move by many to the public cloud is further accelerating this cloud-based resiliency approach. The benefits are potentially vast.”
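To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical endpoints) of the kind of application-level failover that distributed resiliency relies on. In practice this logic usually lives in load balancers, orchestration layers, or distributed databases rather than in the client itself, but the principle is the same: availability comes from having somewhere else to go, not from any one site never failing.

```python
# Minimal illustrative sketch only: the endpoint URLs are hypothetical and the
# failover logic is deliberately naive. The point is that resiliency moves up
# the stack: the client (or, more realistically, a load balancer or
# orchestration layer doing the same job) fails over between datacenters
# rather than relying on any single site staying up.
import json
import urllib.error
import urllib.request

# Hypothetical replicas of the same service running in different datacenters.
ENDPOINTS = [
    "https://eu-west.example.com/api/status",
    "https://eu-central.example.com/api/status",
    "https://us-east.example.com/api/status",
]

def fetch_with_failover(timeout: float = 2.0) -> dict:
    """Try each site in turn; a single-datacenter outage is absorbed here."""
    last_error = None
    for endpoint in ENDPOINTS:
        try:
            with urllib.request.urlopen(endpoint, timeout=timeout) as resp:
                return json.load(resp)
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc  # Site unreachable or erroring: try the next one.
    raise RuntimeError(f"All datacenters failed: {last_error}")

if __name__ == "__main__":
    print(fetch_with_failover())
```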

Now, an easy conclusion to jump to would be that improved resiliency at the IT level should lessen the importance of any individual datacenter. As such, it should be possible to design and build new facilities with less redundant M&E equipment: fewer generators, UPSes and so on. However, while that might be possible in some instances – perhaps in edge datacenters – the reality is that most operators continue to build to Uptime Institute Tier III or equivalent (sorry, Uptime, I know there is no real equivalent).

As this week’s outage in Microsoft’s Azure service shows, even cloud service providers with advanced resiliency in place – Microsoft recently introduced Availability Zones – aren’t immune to downtime. True, Microsoft said that “customers with redundant virtual machines deployed across multiple isolated hardware clusters would not have been affected by the outage,” but it seems some customers were affected.

The take-away is that, for now, advanced approaches to redundancy – based on software and networks – should probably be viewed as an additional “belt” to support the existing “braces” of conventional single-site redundant M&E and IT.

For more on the issue check out this very good webcast overview of Uptime’s current thinking on advanced resiliency.

Are DCIM’s days numbered?

My latest column over at Data Center Knowledge looks at how datacenter infrastructure management (DCIM) software is evolving from on-premises tools to cloud-based services. But what will that mean for suppliers of conventional DCIM in the long term? And will the large number of datacenter operators that remain unconvinced by the whole DCIM concept be won over by the new features – such as predictive maintenance and advanced remote management – that these services promise to provide?
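For readers unfamiliar with the cloud-based model, the sketch below (Python, with entirely hypothetical field names and endpoint, not drawn from any particular product) shows the basic pattern: equipment telemetry is collected on site and pushed to a cloud service, where the analytics behind features such as predictive maintenance can run across data aggregated from many facilities rather than a single installation.

```python
# Illustrative sketch only: the telemetry fields, threshold, and ingestion URL
# are hypothetical and not taken from any specific DCIM or DMaaS product.
# The architectural point is that readings from facility equipment are pushed
# to a cloud service, where analytics run across data pooled from many sites.
import json
import time
import urllib.request

INGEST_URL = "https://dmaas.example.com/v1/telemetry"  # hypothetical endpoint

def read_ups_telemetry() -> dict:
    """Stand-in for polling a UPS over SNMP or Modbus; returns a fake reading."""
    return {
        "site": "lon-01",
        "device": "ups-03",
        "battery_temp_c": 31.4,
        "load_pct": 62.0,
        "timestamp": time.time(),
    }

def push_reading(reading: dict) -> None:
    """Send one reading to the (hypothetical) cloud DCIM service."""
    req = urllib.request.Request(
        INGEST_URL,
        data=json.dumps(reading).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

if __name__ == "__main__":
    reading = read_ups_telemetry()
    # A real service would apply models trained across fleets of devices; a
    # crude local threshold stands in for "predictive maintenance" here.
    if reading["battery_temp_c"] > 30:
        print("Flag ups-03 for inspection: battery running warm")
    push_reading(reading)
```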


Uptime Institute recognises that not all datacenters are unique snowflakes with Tier-ready prefab certification

As Brad Pitt’s character Tyler Durden says in the cult movie Fight Club: “You are not special. You’re not a beautiful and unique snowflake.”

Uptime Institute, which I worked closely with at 451 Research, has recognised that the same sentiment increasingly applies to the datacenter industry.

Uptime is working with datacenter technology suppliers to apply its Tier-certification scheme to prefabricated modular (PFM) datacenter designs. PFM is a rather vague catch-all term for chunks of datacenter M&E equipment – or entire sites in the case of containers – that are manufactured and integrated off-site, speeding up deployment times while, hopefully, also improving quality.

PFM designs still make up a small percentage of overall datacenter capacity, but there is growing interest in some of the same developing markets where Uptime is also seeing strong growth for its Tier-certification services. The move is a recognition that the direction of travel in the industry is towards standardisation and industrialisation, and away from highly bespoke datacenters where every site is a beautiful and unique snowflake.

Uptime is working hard to evolve its services to keep up with the frenetic pace of change in the datacenter industry, and this latest announcement is part of that process. Uptime also recently announced that it is launching its own research service to provide its members with insight and advice on all facets of datacenter design.

451 Research report: Vertiv aligns data-driven datacenter services with its software plans

Another in the series on the datacenter services plans of large technology suppliers. This time the focus is on Vertiv and how it has evolved its services strategy since the spin-off from Emerson Network Power. It plans to follow Eaton and Schneider Electric into cloud-based datacenter infrastructure management (DCIM) tools, which 451 refers to as Datacenter Management as a Service (DMaaS). However, the company says it has had very sophisticated remote monitoring and management tools for more than ten years. (For 451 subscribers)