The sentiment in the headline is a pithy reminder of the importance of understanding the past.
The unfortunately long list of data center operators that suffered outages in 2017 would do well to heed those words.
Specifically, how can operators that don’t undertake a thorough root-cause analysis after an outage expect to prevent further downtime?
I’ve been working with UK data center design company Future-tech, which provides specialist forensic engineering services to root out the causes of downtime and harden facilities against future outages.
Head over to Future-tech’s site to see their take on the importance of thoroughly investigating the causes of unplanned downtime.
I recently spoke with Iceland-based colocation and cloud services provider Verne Global about its new HPC-as-a-service (HPCaaS) offering, hpcDIRECT.
Verne’s managing director, Dominic Ward, explained how hpcDIRECT is a natural extension of the company’s colocation services but will also take it into new areas in the future.
“I think the balance over time will shift towards more customers wanting to consume more HPCaaS. However, for now I think the balance will remain that customers will want the majority – anything over 50% – in a colocation environment while wanting to start to test our HPCaaS. But I do think there will be gradual migration, in the same way we have seen that shifting for enterprise cloud environments, or enterprise applications. I do think that is coming for HPC as well.”
One of the issues examined by the Uptime panel was how data center operators should respond to extreme weather events caused by global warming.
Uptime CTO Chris Brown argued that hardening facilities against extreme weather and temperatures was not the only issue. Operators also need to put the right procedures in place around data center staffing to better manage extreme weather events. “These last few storms have got people thinking about the operations personnel,” he said. “If you have a major storm coming through, people living and working in that area have their own homes, their own families, their own things to worry about. They are usually going to give those things their attention first before the data center. That is just human nature.”
My latest column over at Data Center Knowledge asks whether underground data centers, such as the recently opened Lefdal Mine facility in Norway, are becoming more common.
Lefdal has taken the concept of underground data centers and run with it. The facility, backed by regional investors and Norwegian power company SFE, has the potential to reach a capacity of 120,000 square meters (1.3 million square feet) of data center space and more than 200MW of IT capacity. If fully utilized, it would be the biggest data center in Europe.
As with other underground data centers, the organizations behind the Lefdal Mine Datacenter (LMD) – which also include Rittal and IBM – make much of the site’s physical security. However, the cooling system and access to cheap renewable energy are probably the site’s standout features.
My latest Critical Thinking column over at Data Center Knowledge looks at edge colocation start-up EdgeMicro, which wants to build out a network of container data centers at mobile towers to serve content.
EdgeMicro co-founder Greg Pettine explained to me that the company’s business model is similar in some respects to edge colocation specialist EdgeConneX but involves building out capacity in much smaller chunks.
“If you look at the EdgeConneX model, they were funded by the cable guys, by Akamai and so forth, who said we need 500kW of capacity in Tier 2 cities,” says Pettine. “Our focus is to just spread that out. As they expand – Netflix, Amazon, any of them – they will need to put more capacity in multiple markets in the next 12 months. What if we spread out and put that in the wireless edge, where most of the content is being consumed anyway?”
Its secret sauce is an appliance called Tower Traffic Xchange, or TTX. The TTX essentially creates an intelligent interface between wireless devices, wireless networks, and content — something that Pettine says hasn’t existed until now. The current process for serving content to a mobile device can be extremely convoluted and expensive.
In a session entitled ‘The race to Exascale – meeting the IT infrastructure needs of HPC’, a panel of experts discussed the benefits of achieving the next big breakthrough in supercomputing.
Peter Hopton, founder of UK-based liquid cooling specialist Iceotope, argued that China is focused on being the first to reach exascale but is less interested in the benefits the breakthrough could bring.
“They will be able to achieve the required number of flops to say they have done it. But it’s a bit like Neil Armstrong landing on the moon and then taking off again without stepping out of the door because they didn’t build a door on the capsule,” he said.
However, speaking at the London event, Brown said liquid cooling will be more widely adopted as rack power densities increase, driven by the expected growth in GPUs and other factors.
Immersive direct liquid cooling (DLC) – where servers are submerged in dielectric fluid – can enable operators to cut cooling costs by more than 15% compared with conventional air-based cooling, according to Schneider. The company is expected to release more details of its TCO analysis in the near future.
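As a rough illustration of what a saving on that scale could mean, here is a minimal back-of-envelope sketch. The IT load, cooling-overhead share, and electricity price below are hypothetical assumptions for illustration only, not figures from Schneider’s TCO analysis:

```python
# Hypothetical back-of-envelope estimate of a >15% cooling-energy saving.
# All input figures are illustrative assumptions, not Schneider's numbers.

it_load_kw = 1_000        # assumed IT load: 1 MW
cooling_fraction = 0.40   # assumed cooling draw as a share of IT load
cooling_saving = 0.15     # the "more than 15%" reduction quoted above
price_per_kwh = 0.10      # assumed electricity price, USD per kWh

cooling_kw = it_load_kw * cooling_fraction  # 400 kW of cooling load
saved_kw = cooling_kw * cooling_saving      # 60 kW saved continuously
annual_saving_usd = saved_kw * 24 * 365 * price_per_kwh

print(f"Cooling load: {cooling_kw:.0f} kW")
print(f"Continuous saving: {saved_kw:.0f} kW")
print(f"Annual saving: ${annual_saving_usd:,.0f}")  # ~$52,560 per year
```

Even under these modest assumptions the saving is meaningful at the scale of a single megawatt of IT load, which is why the TCO detail behind Schneider’s headline figure will be worth watching for.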