Would you trust Siri or Alexa to manage your datacenter?


OK, the headline is a little extreme, but it has some basis in truth.

This week I spoke with Litbit co-founder JP Balajadia, whose company is developing AI personas to help with the management of critical infrastructure, including datacenters.

Specifically, we spoke about the deal the AI start-up has done with CBRE.

The facilities management specialist is licensing Litbit’s AI ‘persona’ technology to help it improve the management services it provides to datacenter customers.

The deal is still at an early stage and we didn’t discuss too much in the way of specifics, but it will be interesting to keep tabs on how it all progresses.

In particular, I’d like to know how many of CBRE’s 600 to 800 datacenter customers will sign up to the initiative and what the data privacy and security implications might be.

The other big challenge is how, and from where, Litbit and CBRE are going to pull data into the system to train the AI persona, which will be known as REMI.

The user interface for the REMI persona will also be interesting to see. We did touch on it during our conversation – it will be text-based initially – but the specifics are still to be worked out.

For the full article go to Data Center Knowledge.

I’ve also written before about the wider challenges of using AI in datacenter management. 

The Idea of Data Centers in Space Just Got a Little Less Crazy

SpaceX Starman and red Tesla in earth orbit

My latest column over at Data Center Knowledge is a timely riff on the potential for space-based datacenters off the back of the jaw-dropping SpaceX Falcon Heavy launch this week.

The commercialization of space is nothing new, nor obviously is the use of satellites for telephony, internet connectivity, navigation, or broadcasting. However, the idea of a network of data centers orbiting the Earth – powered by the Sun and cooled by the icy vacuum – still seemed more science fiction than fact until very recently.

Elon Musk is not a man who seems overly concerned with orthodox thinking. This week, his company SpaceX fired yet another rocket – specifically Falcon Heavy, the most powerful rocket in operation today – right through the space exploration rulebook. To emphasize his point, the payload was a cherry-red Tesla roadster that is now headed down a trajectory that will (contrary to the original plan) take it beyond the orbit of Mars.

There are already a few different space-based data and networking start-ups out there worth checking out.

For the full article check out my Critical Thinking column over at Data Center Knowledge. 

Bringing data science to facilities management

This week’s Critical Thinking column for Data Center Knowledge is based on a recent interview with Michael Dongieux, founder and chief executive of Fulcrum Collaborations.

Fulcrum has developed a cloud-based platform for facilities management called MCIM. It can be used to automate many of the day-to-day management tasks that were previously done using spreadsheets or manual checklists.

The main benefit MCIM can bring, according to Dongieux, is the insight it gives into the cost of maintaining specific pieces of equipment and into how that equipment performs, not just at one site but across multiple data centers.

“When someone logs an incident report, they are able to associate every asset or assets that were involved in the incident and then say what the source of that failure was. That information is crowdsourced and clustered automatically. That enables us to correlate not only what the asset condition index, or ACI, score is of a particular piece of equipment, but we can also say for example that at 85 percent of their useful life, centrifugal chillers typically start to see an increasing occurrence of a specific kind of failure,” said Dongieux.
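
To make the crowdsourcing-and-correlation idea concrete, here is a minimal sketch of how logged incidents might be bucketed by how much of an asset’s useful life had been consumed when it failed. This is my own illustration, not Fulcrum’s implementation; the field names, classes, and sample records are all assumed.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Incident:
    asset_type: str           # e.g. "centrifugal_chiller" (hypothetical label)
    failure_mode: str         # e.g. "refrigerant_leak" (hypothetical label)
    life_consumed_pct: float  # share of rated useful life consumed at failure

def failure_profile(incidents, bucket_size=5):
    """Count failures per (asset type, failure mode, life-consumed bucket).

    Aggregating incidents from many sites is what makes patterns visible,
    e.g. a spike in a specific failure mode once chillers pass ~85 percent
    of their useful life, as in the quote above."""
    counts = defaultdict(int)
    for inc in incidents:
        bucket = int(inc.life_consumed_pct // bucket_size) * bucket_size
        counts[(inc.asset_type, inc.failure_mode, bucket)] += 1
    return dict(counts)

# Illustrative, made-up incident records from multiple data centers
incidents = [
    Incident("centrifugal_chiller", "refrigerant_leak", 87.0),
    Incident("centrifugal_chiller", "refrigerant_leak", 91.5),
    Incident("centrifugal_chiller", "motor_fault", 42.0),
]
print(failure_profile(incidents))
```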

For the full article head over to Data Center Knowledge.

From Bitcoin to Gangnam Style. Time for a data center ‘social worth’ metric?

My latest blog over at Verne Global looks at whether it might be time to introduce another KPI or metric into the data centre management lexicon: social worth.

I owe a debt to Professor Ian Bitterlin for his cutting analysis of the energy consumption of the YouTube sensation Gangnam Style a few years back, which has stuck with me.

The recent volatility around bitcoin has also stirred up similar concerns about profligate use of energy.

Bitcoin facilities, or hashing centers, might be capex-light, but they consume huge amounts of power and, if critics are to be believed, deliver at best questionable long-term benefit.
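
Purely by way of illustration – no such metric exists yet, and the names and figures below are placeholders rather than measurements – a ‘social worth’ KPI might simply relate the useful output a facility delivers to the energy it consumes:

```python
def social_worth(useful_output_units: float, annual_energy_kwh: float) -> float:
    """Hypothetical 'social worth' KPI: units of useful output delivered per
    kWh consumed. Deciding what counts as 'useful output' (video streams
    served, transactions settled, research jobs completed) is the hard part."""
    return useful_output_units / annual_energy_kwh

# Placeholder comparison only; both sites assumed to draw the same energy.
streaming_site = social_worth(useful_output_units=1_000_000_000,
                              annual_energy_kwh=50_000_000)
hashing_center = social_worth(useful_output_units=200_000,
                              annual_energy_kwh=50_000_000)
print(streaming_site, hashing_center)
```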

Head to Verne Global for the full blog.

Looking behind and beyond Vertiv’s recent acquisitions


This week Vertiv, previously Emerson Network Power, made its second acquisition in as many weeks.

Even without going into the specifics, the deals are important in terms of proving that Vertiv’s new owner Platinum Equity is serious about investing in the equipment supplier’s future growth.

Drilling in deeper, the latest acquisition, of PDU specialist Geist, was important for a number of specific reasons:

  • There is increasing demand from hyperscalers and large colos for integrated racks with power distribution, monitoring, and other capabilities built in. The Geist purchase adds to Vertiv’s capabilities in this important and growing area.
  • Geist also has an innovative approach to manufacturing, with production times cut to less than a week for custom equipment. That fast turnaround of custom kit is also important for large operators.
  • Geist also brings existing customers, including large hyperscalers and colos, to which Vertiv now has access and into which it should be able to sell additional products and services.

All of these factors are important in the short term.

But it’s also interesting to think about how Vertiv will use, or enable its customers to use, data from PDUs and other equipment it has acquired and developed internally.
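
As a very rough sketch of the kind of thing that PDU data makes possible – this reflects neither Vertiv’s nor Geist’s actual products or APIs, and all names and numbers are assumed – outlet-level readings could be rolled up into rack-level load and headroom figures that feed capacity planning or an AI-driven management layer:

```python
from statistics import mean

# Hypothetical outlet-level readings (watts) polled from intelligent PDUs,
# keyed by rack; in practice these would arrive via a monitoring protocol or
# vendor API rather than a hard-coded dict.
readings = {
    "rack-a01": [312.0, 298.5, 401.2, 0.0],
    "rack-a02": [1250.3, 1189.7],
}

def rack_summary(readings, rack_capacity_w=8000.0):
    """Roll outlet readings up to rack-level load, average draw, and utilisation."""
    summary = {}
    for rack, outlets in readings.items():
        total = sum(outlets)
        summary[rack] = {
            "total_w": total,
            "avg_outlet_w": mean(outlets),
            "utilisation_pct": 100.0 * total / rack_capacity_w,
        }
    return summary

print(rack_summary(readings))
```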

More on what that might mean and how some suppliers such as GE may have got it wrong over at my Critical Thinking column for Data Center Knowledge: There’s more to Vertiv’s Geist acquisition than PDUs and engineers.

How Flash is Enabling Other Disruptive Tech


My latest Critical Thinking column over at Data Center Knowledge looks at flash storage in the data center.

To understand more about how flash storage has gone from a relative outlier to an accepted and core part of the data center infrastructure stack, we spoke with Alex McMullan, CTO EMEA at flash specialist Pure Storage.

As well as explaining how flash can help improve overall data center efficiency, he also discussed how it supports and enables other disruptive technologies, such as machine learning (ML).

McMullan estimates that up to 20 percent of Pure’s customer base is investing significantly in machine learning and deep learning right now, including what he says are some of the biggest AI projects in the world.

Head to Data Center Knowledge for the full interview.

The first recommendation from Google’s datacenter AI was: Switch off the datacenter


My latest Critical Thinking column over at Data Center Knowledge is part of the site’s focus on all things AI in the datacenter industry this month.

The hope is that AI-driven management software (likely cloud-based) will monitor and control IT and facilities infrastructure, as well as applications, seamlessly and holistically – potentially across multiple sites. Cooling, power, compute, workloads, storage, and networking will flex dynamically to achieve maximum efficiency, productivity, and availability.

While it’s easy to get caught up in the exciting and disruptive potential of AI, it’s also important to reflect on the reality of how most data centers continue to be designed, built, and operated. The fact is that a lot of the processes – especially on the facilities side – are still firmly rooted in the mundane and manual.

And as Google nearly found to its cost, the answers and actions delivered by AI systems may not always be what was originally anticipated.

Just as Skynet in the film The Terminator took a dispassionate, logical view of preventing conflict, finding that mankind was the problem, Google’s algorithm reached a very simple and accurate conclusion about improving the efficiency of its sites:

The model’s first recommendation for achieving maximum energy conservation was to shut down the entire facility, which, strictly speaking, wasn’t inaccurate, but wasn’t particularly helpful either.
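
The anecdote is a neat illustration of why optimization objectives need constraints. The toy sketch below is my own, not Google’s system: an optimizer asked only to minimize energy happily picks ‘shut everything down’, and adding an availability floor is what rules that answer out.

```python
def best_plan(plans, min_availability=0.999):
    """Pick the lowest-energy plan that still meets the availability floor.

    Without the constraint, 'shut down the facility' (0 kWh, 0% availability)
    always wins - accurate, strictly speaking, but not very helpful."""
    feasible = [p for p in plans if p["availability"] >= min_availability]
    return min(feasible, key=lambda p: p["energy_kwh"]) if feasible else None

# Illustrative candidate actions with made-up numbers
plans = [
    {"name": "shut down facility", "energy_kwh": 0, "availability": 0.0},
    {"name": "raise chilled water setpoint", "energy_kwh": 950, "availability": 0.9995},
    {"name": "business as usual", "energy_kwh": 1200, "availability": 0.9999},
]

print(best_plan(plans, min_availability=0.0)["name"])    # -> shut down facility
print(best_plan(plans, min_availability=0.999)["name"])  # -> raise chilled water setpoint
```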

For more head over to DCK.

Datacenter downtime is bad but not nuclear silo explosion bad

Source: This American Life.

Writing about datacenters and tech, I am always looking for parallels with other industries to try to contextualise some of the issues that emerge.

Managing datacenters is challenging but what about other types of critical infrastructure like airports, railways and power stations?

I think I have found another great example.

I just listened to a recent episode of the always excellent This American Life podcast. Titled ‘Human Error in Volatile Situations’, it does pretty much what it says on the tin.

The first story in the episode is the most gripping and probably the most infamous. For anyone who’s had experience of managing complex facilities equipment, it’s a must listen.

“In 1980, deep in a nuclear missile silo in Arkansas, a simple human error nearly caused the destruction of a giant portion of the Midwest.”

A devastating explosion, and a near-nuclear incident, was caused by human error – use of the wrong tool – but exacerbated by extremely poor decision-making from above and by emergency operating procedures that seemed comprehensive but didn’t extend to the unthinkable.

Check out the podcast at This American Life.

I’m planning to check out the book on which some of the podcast is based next – Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety – but I’m also conscious that where nuclear incident safety is concerned, ignorance is also bliss. 

Open Compute Project targets new markets including colocation

I was lucky enough to speak with Steve Helvie, VP of channel at the non-profit Open Compute Project (OCP) Foundation recently.

Helvie said OCP is targeting several key markets in 2018 as it looks to maintain its momentum and grow beyond hyperscalers. These include telcos, service providers (from SaaS to colocation), financial services (including blockchain), high-performance computing, healthcare, and government.

Regarding colocation operators, the group has released guidelines and a checklist to help with adoption of OCP equipment in colocation facilities. There are also plans for some kind of stamp or certification, which has been under discussion for over a year now.

However, the exact form the OCP-ready stamp will take is still being developed, according to Helvie. “We are likely not going to have another brand, but it will be a level of formal recognition. I want enterprises to be able to go into our marketplace and say, ‘Where can I find someone who is ready to host Open Compute?’”

Head to Data Center Knowledge for the full article.

Microsoft’s custom cloud servers, open sourced through the Open Compute Project, as seen at the OCP Summit 2017

Where and how to build your next datacenter for maximum energy and carbon efficiency

Andrew at the Catalonia Institute for Energy Research’s facility in Tarragona, Spain, where the RenewIT tool was developed

I was lucky enough to be involved recently in an in-depth European Union research project called RenewIT. The project had a number of outputs, but the main one was a web-based tool that enables different datacenter designs, and locations for those designs, to be compared across Europe in terms of energy efficiency and carbon emission reduction.

I just published an overview of the tool, which was a finalist in the recent Datacenter Dynamics awards, over at Verne Global’s site. The tool has particular relevance for the colocation and cloud services operator as its facilities are based in Iceland. Verne benefits from Iceland’s cheap and plentiful renewable energy and is encouraging more organisations to locate their workloads at its facilities.
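
The core comparison the tool performs is simple to illustrate. Below is a minimal sketch using assumed grid carbon intensities; the real RenewIT tool models far more, including facility design, workload profiles, and on-site renewables.

```python
# Hypothetical annual grid carbon intensities (kg CO2 per kWh); the real tool
# uses detailed, location-specific data rather than these placeholder values.
grid_intensity = {
    "Iceland": 0.01,   # almost entirely hydro and geothermal
    "Spain": 0.25,
    "Germany": 0.40,
}

def annual_emissions(it_load_kw, pue, location, hours=8760):
    """Estimate annual CO2: facility energy (IT load x PUE x hours) x grid intensity."""
    energy_kwh = it_load_kw * pue * hours
    return energy_kwh * grid_intensity[location]

for loc in grid_intensity:
    kg = annual_emissions(it_load_kw=1000, pue=1.2, location=loc)
    print(loc, round(kg), "kg CO2/yr")
```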

The RenewIT tool enables locations across Europe to be compared in terms of cost and access to renewable energy

Head over to Verne’s website to access the full blog. The RenewIT tool also has its own dedicated site and there is a separate site with more background on the project and its other outputs.