EU RenewIT tool is a Datacenter Dynamics awards finalist

The European Union-funded RenewIT tool is a finalist in the DCD Awards.

I was part of the team that helped to develop the tool, which was recognised by the EU as one of the most successful datacenter projects it has ever funded.

The free online tool enables datacenter locations across Europe to be compared in terms of energy efficiency and the availability of renewables. The tool took three years to develop and involved organisations from across Europe.
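
To give a flavour of the kind of comparison the tool supports, here’s a toy sketch in Python; every location, PUE figure and the scoring formula below is invented for illustration, and the real RenewIT models are far more detailed.

```python
# Toy comparison of candidate datacenter locations on energy efficiency
# (approximated here by PUE) and availability of renewables. All figures
# are invented for illustration; RenewIT's models are far more detailed.

locations = {
    # location: (assumed annualised PUE, assumed renewable share of grid mix)
    "Stockholm": (1.15, 0.65),
    "Frankfurt": (1.25, 0.35),
    "Barcelona": (1.35, 0.40),
}

def score(pue: float, renewable_share: float) -> float:
    """Toy score: a lower PUE and a higher renewable share are both better."""
    return renewable_share / pue

for name, (pue, share) in sorted(locations.items(),
                                 key=lambda kv: score(*kv[1]), reverse=True):
    print(f"{name}: PUE {pue:.2f}, renewables {share:.0%}, score {score(pue, share):.2f}")
```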

RenewIT is a finalist for the Best Datacenter Initiative of the Year (sponsored by DC Pro). The awards will be announced at a gala dinner on Thursday 7 December 2017.

Watching the data centre detectives

My latest feature for Datacentre Dynamics has been published. I have to say the layout is excellent – I just hope the content lives up to it. Big thanks to James Wilman over at Future-tech and Steve Carlini from Schneider Electric for contributing to the article.

The article looks at the process of investigating the causes of unplanned downtime. The fault might lie in the mechanical and electrical systems, but the detective work often begins at the server level and tracks back up the power chain. You can access a digital version of the article via the DCD website.
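
To make the idea of tracking back up the power chain concrete, here’s a hypothetical sketch of the sort of timeline correlation an investigator might script; all the events, layers and timestamps are invented.

```python
# Hypothetical sketch: merge events from the server level and the power
# chain, then sort chronologically; the earliest upstream event preceding
# the server-level symptom is often the root-cause candidate.
from datetime import datetime

events = [  # (timestamp, layer, message), all invented for illustration
    (datetime(2017, 11, 1, 3, 14, 9), "server", "node42 powered off unexpectedly"),
    (datetime(2017, 11, 1, 3, 14, 7), "pdu", "rack PDU feed B dropped"),
    (datetime(2017, 11, 1, 3, 14, 2), "ups", "UPS-2 transferred to bypass"),
    (datetime(2017, 11, 1, 3, 13, 58), "utility", "input voltage sag detected"),
]

for ts, layer, msg in sorted(events):
    print(f"{ts:%H:%M:%S}  [{layer:>7}]  {msg}")
```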

AMD’s plan to get back into data center contention

This week’s column for Data Center Knowledge looks at AMD’s plan to get back into contention in the data center. I spoke with the chipmaker’s de facto head of data center strategy, Forrest Norrod. The interview was obviously heavily skewed towards silicon roadmaps, but Norrod’s previous roles at Dell gave him a more holistic view of the data center, and we were able to talk cooling technologies, data center software management tools and rising power densities.

Is there an ideal place for every workload?

The idea that everyone has a perfect job, or niche, based on their personality and skill set could be seen either as a positive or as potentially constraining. My latest blog over at Verne Global applies that idea to computing, looking at the concept of a Best Execution Venue for workloads and whether there’s really an ideal venue in which every workload can thrive.
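
For a rough sense of what choosing a Best Execution Venue might look like in code, here’s a toy sketch; the venues, figures and weights are all hypothetical.

```python
# Toy Best Execution Venue matching: among venues that meet a workload's
# latency bound, score on cost versus renewable share.
# All venue names and numbers are hypothetical.

venues = {
    "on-prem HPC":  {"latency_ms": 1,  "cost_per_hour": 9.0, "renewable": 0.3},
    "cloud region": {"latency_ms": 25, "cost_per_hour": 6.0, "renewable": 0.5},
    "nordic colo":  {"latency_ms": 40, "cost_per_hour": 4.0, "renewable": 0.9},
}

def best_venue(max_latency_ms: float, weight_cost: float, weight_green: float) -> str:
    """Pick the eligible venue with the best (toy, linear) score."""
    eligible = {n: v for n, v in venues.items() if v["latency_ms"] <= max_latency_ms}
    return max(eligible, key=lambda n: weight_green * eligible[n]["renewable"]
                                       - weight_cost * eligible[n]["cost_per_hour"])

# A latency-tolerant batch job lands in the greener, cheaper venue;
# a latency-sensitive one is forced closer to home.
print(best_venue(max_latency_ms=50, weight_cost=0.1, weight_green=1.0))  # nordic colo
print(best_venue(max_latency_ms=5, weight_cost=0.1, weight_green=1.0))   # on-prem HPC
```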

Supercomputers in the cloud erode another case for owning data centers

My latest column over at Data Center Knowledge looks at supercomputing in the cloud.

HPC is obviously a nebulous term that covers a whole range of systems, from mid-level servers right up to supercomputers. The disruption from cloud is probably being felt most at the lower end, but it’s still notable that a supercomputer supplier like Cray is now plugged into Azure.

The large scientific supercomputing institutions are unlikely to farm out workloads to the cloud, but there’s likely to be appetite from industry as demand for deep learning in fields such as automotive and aerospace ramps up.

Why the Data Center Liquid Cooling Space is Heating Up

My latest column for Data Center Knowledge is on direct liquid cooling (DLC). I’ve been researching and writing about DLC for several years now, but it finally feels like there is demand-side momentum building to match the fervour from suppliers. The pros and cons of the various flavours of DLC still stand, but with support from all of the main server OEMs, adoption by some hyperscale operators, and even interest from colos, liquid cooling may finally be reaching a tipping point beyond traditional HPC.

Data Centers Will Invest in Renewables Despite the EPA, Not Because of It

This week’s column for Data Center Knowledge looks at the EPA’s recent decision to repeal the troubled Clean Power Plan. A number of large hyperscale operators have come out in support of the plan, which they claim would help them, and the wider industry, with future renewable investments.

It has been pointed out that these hyperscale operators invest in renewables because of tax incentives rather than out of a sense of altruism. That is accurate up to a point. The hyperscale operators admit themselves that “Renewable energy is less subject to price volatility than non-renewable energy; provides greater long-term cost certainty to its purchasers; and, in many parts of the United States, is available at prices comparable to or better than the current prices for other electricity options”. But as Greenpeace – which has been highly critical of some datacenter operators’ use of renewables – and others have pointed out, some large operators have helped to make renewable energy tariffs more widely available and to increase the use of renewables by other industries.

There is a lot of complexity around the sourcing and distribution of renewables, but the decision by the EPA to roll back the Clean Power Plan still looks like a retrograde step.

AI and Edge Could Usher in a New Era of Liquid-Cooled Data Centers

I recently spoke with Tate Cantrell, the CTO over at Verne Global, about his views on direct liquid cooling in datacenters. The article is posted on Verne’s site. Tate, like many other datacenter technologists, tends to take a pragmatic view of liquid cooling, but says the colocation provider is happy to accommodate DLC if it’s what the customer wants.

There are good reasons to think that more customers will want to use DLC technology as the high-density workloads associated with machine learning and deep learning proliferate. That’s good news for the 20-plus suppliers of specialist DLC technology, as well as all the main server OEMs which have licensed that technology or developed their own in-house. Some edge deployments – especially ruggedised, non-whitespace micro-modular designs – could also be good candidates for DLC, and several suppliers are also targeting this use case.
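
The arithmetic driving that demand is simple enough. Here’s a back-of-the-envelope sketch, with server counts and wattages that are illustrative assumptions rather than vendor figures:

```python
# Back-of-the-envelope rack power for a hypothetical deep-learning rack.
# All figures are assumptions; air cooling is commonly said to become
# difficult somewhere in the 15-30 kW per rack range.

servers_per_rack = 10
gpus_per_server = 4
watts_per_gpu = 300          # assumed accelerator board power
server_overhead_watts = 800  # assumed CPUs, memory, fans and PSU losses

rack_kw = servers_per_rack * (gpus_per_server * watts_per_gpu
                              + server_overhead_watts) / 1000
print(f"Rack power: {rack_kw:.1f} kW")  # 20.0 kW, well into DLC territory
```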

Datacenter resiliency: It’s the network, stupid!

My latest column over at Data Center Knowledge picks through some of the latest thinking on datacenter resiliency from experts such as the Uptime Institute, as well as the actual resiliency strategies deployed by some cloud service providers.

The upshot is that some cloud service providers are investing heavily in new approaches to IT and datacenter resiliency, based on distributed, virtualized applications, instances, or containers that use middleware, orchestration, and distributed databases across multiple datacenters.
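
At its simplest, that kind of application-level resiliency boils down to failing over across sites. A minimal sketch follows; the endpoints and error handling are hypothetical placeholders rather than any provider’s actual mechanism.

```python
# Minimal sketch of application-level resiliency: try each datacenter's
# endpoint in turn and fail over on error. The URLs are placeholders;
# real systems layer orchestration and data replication on top of this.
import urllib.request

ENDPOINTS = [
    "https://dc1.example.com/api/status",
    "https://dc2.example.com/api/status",
    "https://dc3.example.com/api/status",
]

def call_with_failover(urls, timeout=2.0):
    """Return the first successful response, failing over across sites."""
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError as err:  # DNS failure, refused connection, timeout...
            last_error = err
    raise RuntimeError("all sites unavailable") from last_error
```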

“The evolution of compute from single-site data centers with proven in-house engineering to a multi-site hybrid computing model is enabling distributed models of resiliency that can achieve high availability and rapid recovery alongside improved performance,” Uptime Institute stated in a recent webcast. “The recent move by many to the public cloud is further accelerating this cloud-based resiliency approach. The benefits are potentially vast.”

Now, an easy conclusion to jump to would be that improved resiliency at the IT level lessens the importance of the individual datacenter. As such, it should be possible to design and build new facilities with less redundant M&E equipment: fewer generators, UPSs and so on. However, while that might be possible in some instances – perhaps in edge datacenters – the reality is that most operators continue to build to Uptime Institute Tier III or equivalent (sorry Uptime, I know there is no real equivalent).

As this week’s outage in Microsoft’s Azure service shows, even cloud service providers with advanced resiliency in place – Microsoft recently introduced Availability Zones – aren’t immune to downtime. True, Microsoft said that “customers with redundant virtual machines deployed across multiple isolated hardware clusters would not have been affected by the outage,” but it seems some customers were affected.

The take-away is that, for now, advanced approaches to redundancy – based on software and networks – should probably be viewed as an additional “belt” to support the existing “braces” of conventional single-site redundant M&E and IT.

For more on the issue check out this very good webcast overview of Uptime’s current thinking on advanced resiliency.

Are DCIM’s days numbered?

My latest column over at Data Center Knowledge looks at how datacenter infrastructure management (DCIM) software is evolving from on-premises tools to cloud-based services. But what will that mean for suppliers of conventional DCIM in the long term? And will the large number of datacenter operators that remain unconvinced by the whole DCIM concept be won over by the new features – such as predictive maintenance and advanced remote management – that these services promise to provide?
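
For a sense of what “predictive maintenance” can mean in practice, here’s a toy anomaly-flagging sketch; the sensor readings and the three-sigma rule are purely illustrative, not a description of any product.

```python
# Toy version of the predictive-maintenance idea behind cloud-based DCIM:
# flag a sensor when a reading drifts well outside its recent history.
# The readings and the three-sigma threshold are invented for illustration.
from statistics import mean, stdev

fan_rpm_history = [4200, 4180, 4210, 4195, 4205, 4190, 4200, 3650]

baseline, latest = fan_rpm_history[:-1], fan_rpm_history[-1]
mu, sigma = mean(baseline), stdev(baseline)

if abs(latest - mu) > 3 * sigma:
    print(f"ALERT: fan at {latest} rpm, baseline {mu:.0f} rpm (sigma {sigma:.0f})")
```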
