If Software is Eating the Data Center, Power is Its Next Meal

My latest Critical Thinking column for Data Center Knowledge looks at how even power could eventually be software-defined.

The concept of the software-defined data center has been bandied around the industry for several years now. 

Essentially it refers to the idea that all of the IT infrastructure within a facility — networking, storage, and compute — can be virtualized and software controlled.

More recently some progressive industry experts and suppliers have tried to apply the same approach to facilities infrastructure.

More at Data Center Knowledge.

EdgeMicro Plots to Disrupt Edge Colocation, 48kW at a Time

My latest Critical Thinking column over at Data Center Knowledge looks at edge colocation start-up EdgeMicro, which wants to build out a network of container data centres on mobile towers to serve content.

EdgeMicro co-founder Greg Pettine explained to me that the company’s business model is similar in some respects to edge colocation specialist EdgeConneX but involves building out capacity in much smaller chunks.

“If you look at the EdgeConneX model, they were funded by the cable guys, by Akamai and so forth, who said we need 500kW of capacity in Tier 2 cities,” says Pettine. “Our focus is to just spread that out. As they expand – Netflix, Amazon, any of them – they will need to put more capacity in multiple markets in the next 12 months. What if we spread out and put that in the wireless edge, where most of the content is being consumed anyway?”

Its secret sauce is an appliance called Tower Traffic Xchange, or TTX. The TTX essentially creates an intelligent interface between wireless devices, wireless networks, and content — something that Pettine says hasn’t existed until now. The current process for serving content to a mobile device can be extremely convoluted and expensive.

More over at DCK.

Is the race to exascale really a ‘moon shot’ endeavour?

My latest blog over at Verne Global is about the race to develop the first exascale system, a topic discussed at the Data Center Dynamics Zettastructure event in London in November.

In a session entitled ‘The race to Exascale – meeting the IT infrastructure needs of HPC’ a panel of experts discussed the benefits of achieving the next big breakthrough in supercomputing.

Peter Hopton, founder of UK-based liquid cooling specialist Iceotope, argued that China is focused on being the first to exascale but is less interested in the benefits the breakthrough could bring.

“They will be able to achieve the required number of flops to say they have done it. But it’s a bit like Neil Armstrong landing on the moon and then taking off again without stepping out of the door because they didn’t build a door on the capsule,” he said.

More over at Verne Global.

Schneider Electric gets behind Direct Liquid Cooling

The CTO of Schneider Electric’s IT division, Kevin Brown, said he expects direct liquid cooling (DLC) to become increasingly important in the near future.

Brown was speaking at the Datacenter Dynamics Zettastructure event in London on 7-8 November.

Schneider has previously hesitated to endorse the technology despite its $10 million investment in DLC start-up Iceotope.

However, speaking at the London event, Brown said the technology will be more widely adopted as rack power densities increase, driven by the expected growth in GPUs and other factors.

Immersive DLC – where servers are submerged in dielectric fluid – can enable operators to lower cooling costs by more than 15% compared to conventional air-based cooling, according to Schneider. The company is expected to release more details of its TCO analysis in the near future.
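Schneider has not yet published the detail behind that figure, but the shape of such a comparison is easy to sketch. The snippet below shows how a lower cooling overhead (expressed via PUE) translates into annual energy cost savings; the PUE values, IT load, and energy price are my own illustrative assumptions, not Schneider's TCO numbers.

```python
# Back-of-envelope sketch of air cooling vs immersive DLC energy costs.
# All inputs are illustrative assumptions, not figures from Schneider's analysis.

HOURS_PER_YEAR = 8760

def annual_energy_cost(it_load_kw, pue, price_per_kwh):
    """Total facility energy cost per year: IT load scaled by PUE."""
    return it_load_kw * pue * HOURS_PER_YEAR * price_per_kwh

it_kw, price = 1000, 0.10                      # 1 MW IT load at $0.10/kWh (assumed)
air = annual_energy_cost(it_kw, 1.5, price)    # assumed PUE for air-based cooling
dlc = annual_energy_cost(it_kw, 1.2, price)    # assumed PUE for immersive DLC

print(f"Air-cooled:    ${air:,.0f}/yr")
print(f"Immersive DLC: ${dlc:,.0f}/yr")
print(f"Saving:        {(air - dlc) / air:.0%}")
```

With these assumed PUEs the saving comes out around 20% of the total energy bill; the real number depends heavily on climate, energy price, and how much of the overhead is actually cooling.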

Brown referred to some of the articles on DLC from Datacenter Knowledge including one of my recent columns.

EU RenewIT tool is a Datacenter Dynamics awards finalist

The European Union funded RenewIT tool is a finalist in the DCD Awards.

I was part of the team that helped to develop the tool which was recognised by the EU as one of the most successful datacenter projects it has ever funded.

The free online tool enables datacenter locations across Europe to be compared in terms of energy efficiency and the availability of renewables. The tool took three years to develop and involved organisations from across Europe.

RenewIT is a finalist for the Best Datacenter Initiative of the Year (sponsored by DC Pro). The awards will be announced at a gala dinner on Thursday 7 December 2017.

Watching the data centre detectives

My latest feature for Datacentre Dynamics has been published. I have to say the layout is excellent – I just hope the content lives up to it. Big thanks to James Wilman over at Future-tech and Steve Carlini from Schneider Electric for contributing to the article.

The article looks at the process of investigating the causes of unplanned downtime. The fault might be in the mechanical and electrical systems but the detective work often begins at the server level and tracks back up the power chain. You can access a digital version of the article via the DCD website.

AMD’s plan to get back into data center contention

This week’s column for Data Center Knowledge looks at AMD’s plan to get back into contention in the data center. I spoke with the chipmaker’s de facto head of data center strategy, Forrest Norrod. The interview was obviously heavily skewed towards silicon roadmaps, but Norrod held previous roles at Dell that gave him a more holistic view of the data center. We were able to talk cooling technologies, data center software management tools, and rising power densities.

Is there an ideal place for every workload?

The idea that everyone has a perfect job, or niche, based on their personality and skill set could be seen either as positive or as potentially constraining. My latest blog over at Verne Global applies that thinking to computing, exploring the concept of a Best Execution Venue for workloads and whether there really is an ideal venue in which every workload can thrive.

Supercomputers in the cloud erode another case for owning data centers

My latest column over at Data Center Knowledge looks at supercomputing in the cloud.

HPC is obviously a nebulous term that covers a whole range of systems, from mid-level servers right up to supercomputers. The disruption from cloud is probably being felt most at the lower end, but it’s still notable that a supercomputer supplier like Cray is now plugged into Azure.

The large scientific supercomputing institutions are unlikely to farm out workloads to the cloud, but there is likely to be appetite from industry as demand for deep learning in fields such as automotive and aerospace ramps up.

Why the Data Center Liquid Cooling Space is Heating Up

My latest column for Data Center Knowledge is on direct liquid cooling (DLC). I’ve been researching and writing about DLC for several years now, but it finally feels like there is demand-side momentum building to match the fervour from suppliers. The pros and cons of the various flavours of DLC still stand, but with support from all of the main server OEMs, adoption by some hyperscalers, and even interest from colos, liquid cooling may finally be reaching a tipping point beyond traditional HPC.