Crossing the edge knowledge chasm

 

My new role at Vertiv has kept me busy – in an exciting way – over the last couple of months, so I’ve been a bit remiss in keeping this blog up to date.

However, events last week deserve a special mention.

Vertiv held its first Innovation Summit in Zagreb, Croatia, on 16th and 17th April. The event had a regional focus, with more than 300 delegates from across Central and Southern Europe, from Croatia to Israel.

The central theme of the event was edge compute.

If that term sends a small shudder down your spine, it shouldn’t. Edge is an important trend, but it has also attracted a lot of hype, which has at times outpaced technical clarity on what edge actually means in practice.

As Vertiv EMEA president Giordano Albertazzi puts it succinctly in this LinkedIn post, there is something of an edge knowledge gap out there between how some suppliers are using the term and how end-users understand ‘the edge’.

“…one of the questions raised during a round table with journalists was about whether edge is really a new phenomenon or simply a re-branding exercise for existing branch office computing or content distribution networks?

I can understand that view, but I think edge as it is being defined now is most definitely something new and on a different scale to anything that we have seen before.

True, we have infrastructure – such as our range of prefabricated modular data centres manufactured outside of Zagreb – which predate the current focus on edge. But we are also already seeing demand for those systems in a range of new edge deployments.

So while there is certainly a ‘legacy edge’, there will also be a large number of clearly distinct and disruptive use cases which we believe will proliferate well beyond any pre-existing notions of edge.”

Vertiv is doing its bit to bridge the knowledge gap with an ongoing research project to put more meat on the bones of edge, including defining a series of edge use cases and archetypes.

The full edge research report is available from the Vertiv website.  

 

From commentator to supporter


It’s great to be able to finally reveal that I have joined Vertiv as director of influencer marketing*, EMEA. 

It’s a bit of a shift from being an analyst and journalist for the last 20 years, but hopefully a rewarding change.

I’m really looking forward to supporting a team after years spent in the commentary box. 

And Vertiv is a great company to be working with right now as it moves into the next phase of its standalone journey: combining a start-up ethos with a great heritage and reputation. 

*If you want to know more about influencer marketing in business-to-business, this white paper is a good start.

Norway Wants to Win Hyper-Scale Gold in the Data Center Olympics

Bergen, Norway

My latest and last (see below) Critical Thinking column over at Data Center Knowledge is on Norway’s bid to build up its data center industry.

To that end, the Norwegian government recently published Powered by Nature: Norway as a Data Center Nation, a report that details the country’s credentials as an ideal data center location.

Coincidentally, I recently returned from my second trip to Norway, where I witnessed first-hand some of the things it has to offer data center operators.

A chilly boat trip on the fjords just outside Bergen

First, the positives: one that springs immediately to mind is the temperature. After visiting in February, I can report that Norway is indeed a great place for free cooling (the only thing that’s free in Norway, it seems). The temperature where I was, in Bergen, barely rose above 5°C (40°F), and it’s one of the warmer parts of the country, thanks to the Gulf Stream.
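
As an aside, operators often gauge free-cooling potential by counting the hours when outside air sits below the supply-air threshold. Here’s a minimal sketch of that calculation; the threshold and the sample temperatures are hypothetical, not measured Bergen data.

```python
# Rough free-cooling estimate: fraction of hours cold enough for air-side economization.
# The threshold and the sample readings below are hypothetical, not real Bergen data.

def free_cooling_fraction(hourly_temps_c, threshold_c=18.0):
    """Share of hours where outside air alone can meet the supply-air setpoint."""
    usable = sum(1 for t in hourly_temps_c if t <= threshold_c)
    return usable / len(hourly_temps_c)

feb_sample = [2.0, 3.5, 5.0, 4.2, 1.8, 6.1]  # made-up February readings
print(f"{free_cooling_fraction(feb_sample):.0%} of sampled hours")  # -> 100%
```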

Mock-up of Lefdal Mine Datacentre

Norway also has some established data center operators, such as Green Mountain, Digiplex, and Basefarm. One of the most recent projects is also the most interesting: the Lefdal Mine Datacentre (which we have written about before) has ambitions to be the largest facility in Europe, and, as the name suggests, it is completely underground.

So given all that, why hasn’t Norway been able to attract a hyper-scale operator to date? Head over to Data Center Knowledge to read the full column.

NOTE: As mentioned above, this was my last column for Data Center Knowledge as I’ve got an exciting new role which I will be discussing soon. 

Big thanks to Yevgeniy Sverdlik and the team for allowing me to contribute to the great editorial over at Data Center Knowledge. I look forward to continuing to work with them in my new role. 

 

Would you trust Siri or Alexa to manage your datacenter?


OK, the headline is a little extreme, but it has some basis in truth.

This week I spoke with Litbit co-founder JP Balajadia, whose company is developing AI personas to help with the management of critical infrastructure, including datacenters.

Specifically, we spoke about the deal the AI start-up has done with CBRE.

The facilities management specialist is licensing Litbit’s AI ‘persona’ technology to help it improve the management services it provides to datacenter customers.

The deal is still at an early stage and we didn’t discuss too much in the way of specifics, but it will be interesting to keep tabs on how it all progresses.

In particular, I’d like to know how many of CBRE’s 600 to 800 datacenter customers will sign up to the initiative and what the data privacy and security implications might be.

The other big challenge is how, and from where, Litbit and CBRE are going to pull data into the system to train the AI persona, which will be known as REMI.

We did touch on the user interface during our conversation – it will be text-based initially – but the specifics of what interacting with the REMI persona will be like will also be interesting to see.

For the full article go to Data Center Knowledge.

I’ve also written before about the wider challenges of using AI in datacenter management. 

The Idea of Data Centers in Space Just Got a Little Less Crazy

SpaceX Starman and red Tesla in Earth orbit

My latest column over at Data Center Knowledge is a timely riff on the potential for space-based datacenters off the back of the jaw-dropping SpaceX Falcon Heavy launch this week.

The commercialization of space is nothing new, nor obviously is the use of satellites for telephony, internet connectivity, navigation, or broadcasting. However, the idea of a network of data centers orbiting the Earth – powered by the Sun and cooled by the icy vacuum – still seemed more science fiction than fact until very recently.

Elon Musk is not a man who seems overly concerned with orthodox thinking. This week, his company SpaceX fired yet another rocket – specifically Falcon Heavy, the most powerful rocket in operation today – right through the space exploration rulebook. To emphasize his point, the payload was a cherry-red Tesla roadster that is now headed down a trajectory that will (contrary to the original plan) take it beyond the orbital distance of Mars.

There are already a few different space-based data and networking start-ups out there worth checking out.

For the full article check out my Critical Thinking column over at Data Center Knowledge. 

Bringing data science to facilities management

This week’s Critical Thinking column for Data Center Knowledge is based on a recent interview with Michael Dongieux, founder and chief executive of Fulcrum Collaborations.

Fulcrum has developed a cloud-based platform for facilities management called MCIM. It can be used to automate a lot of the day-to-day management tasks that were previously done using spreadsheets or manual checklists.

The main benefit that MCIM can bring, according to Dongieux, is the insight it can give into the cost of maintaining specific pieces of equipment, and into how that equipment performs, not just at one site but across multiple data centers.

“When someone logs an incident report, they are able to associate every asset or assets that were involved in the incident and then say what the source of that failure was. That information is crowdsourced and clustered automatically. That enables us to correlate not only what the asset condition index, or ACI, score is of a particular piece of equipment, but we can also say for example that at 85 percent of their useful life, centrifugal chillers typically start to see an increasing occurrence of a specific kind of failure,” said Dongieux.
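
To make that concrete, the sort of fleet-wide correlation Dongieux describes could conceptually be built by bucketing incident reports according to how far through its useful life each asset was when it failed. The sketch below is purely illustrative: the field names and records are hypothetical, not Fulcrum’s MCIM schema or API.

```python
# Minimal sketch of the fleet-wide failure analysis Dongieux describes.
# Field names and records are hypothetical; this is not the MCIM schema or API.
from collections import defaultdict

incidents = [
    # (asset_class, fraction_of_useful_life_at_failure, failure_mode)
    ("centrifugal_chiller", 0.87, "bearing_wear"),
    ("centrifugal_chiller", 0.91, "bearing_wear"),
    ("centrifugal_chiller", 0.45, "sensor_fault"),
    ("ups", 0.60, "capacitor_ageing"),
]

def failure_profile(incidents, asset_class, bucket=0.1):
    """Count failure modes per useful-life bucket for one asset class."""
    profile = defaultdict(lambda: defaultdict(int))
    for cls, life_frac, mode in incidents:
        if cls == asset_class:
            decile = round(min(int(life_frac / bucket) * bucket, 0.9), 1)
            profile[decile][mode] += 1
    return profile

for decile, modes in sorted(failure_profile(incidents, "centrifugal_chiller").items()):
    print(f"at >= {decile:.0%} of useful life: {dict(modes)}")
```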

For the full article head over to Data Center Knowledge.

From Bitcoin to Gangnam Style. Time for a data center ‘social worth’ metric?

My latest blog over at Verne Global looks at whether it might be time to introduce another KPI or metric into the data centre management lexicon: social worth.

I owe a debt to Professor Ian Bitterlin, whose cutting analysis of the energy consumption of the YouTube sensation Gangnam Style a few years back stuck with me.

The recent volatility around bitcoin has also stirred up similar concerns about profligate use of energy.

Bitcoin facilities, or hashing centers, might be capex-light, but they consume huge amounts of power and, if critics are to be believed, deliver at best questionable long-term benefit.
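
One way to make a ‘social worth’ metric concrete is as a crude ratio of useful output delivered per kilowatt-hour consumed. The sketch below is illustrative only; the numbers are placeholders, not Bitterlin’s figures or real measurements.

```python
# Illustrative only: a crude "social worth" ratio of useful output per kWh consumed.
# All numbers below are placeholders, not real measurements or Bitterlin's estimates.

def social_worth(useful_outputs: float, energy_kwh: float) -> float:
    """Units of useful output (views, transactions, ...) delivered per kWh."""
    return useful_outputs / energy_kwh

video = social_worth(useful_outputs=1e9, energy_kwh=1e8)      # hypothetical viral video
bitcoin = social_worth(useful_outputs=3e8, energy_kwh=3e10)   # hypothetical hashing network

print(f"video: {video:.1f} views/kWh; bitcoin: {bitcoin:.4f} tx/kWh")
```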

Head to Verne Global for the full blog.

Looking behind and beyond Vertiv’s recent acquisitions


This week Vertiv, previously Emerson Network Power, made its second acquisition in as many weeks.

Even without going into the specifics, the deals are important in terms of proving that Vertiv’s new owner Platinum Equity is serious about investing in the equipment supplier’s future growth.

Drilling deeper, the latest acquisition, of PDU specialist Geist, was important for a number of specific reasons:

  • There is increasing demand from hyperscalers and large colos for integrated racks with power distribution, monitoring, and other capabilities built in. The Geist purchase adds to Vertiv’s capabilities in this important and growing area.
  • Geist also has an innovative approach to manufacturing, with production times cut to less than a week for custom equipment. That fast turnaround of custom kit is also important for large operators.
  • Geist also brings existing customers, including large hyperscalers and colos, to which Vertiv now has access and should be able to sell additional products and services.

All of these factors are important in the short term.

But it’s also interesting to think about how Vertiv will use, or enable its customers to use, data from PDUs and other equipment it has acquired and developed internally.

More on what that might mean and how some suppliers such as GE may have got it wrong over at my Critical Thinking column for Data Center Knowledge: There’s more to Vertiv’s Geist acquisition than PDUs and engineers.

How Flash is Enabling Other Disruptive Tech


My latest Critical Thinking column over at Data Center Knowledge looks at flash storage in the data center.

To understand more about how flash storage has gone from a relative outlier to an accepted and core part of the data center infrastructure stack, we spoke with Alex McMullan, CTO EMEA of flash specialist Pure Storage.

As well as explaining how flash can help improve overall data center efficiency, he also discussed how it supports and enables other disruptive technologies, such as machine learning (ML).

McMullan estimates that up to 20 percent of Pure’s customer base is investing significantly in machine learning and deep learning right now, including what he says are some of the biggest AI projects in the world.

Head to Data Center Knowledge for the full interview.

The first recommendation from Google’s datacenter AI was: Switch off the datacenter


My latest Critical Thinking column over at Data Center Knowledge is part of the site’s focus on all things AI in the datacenter industry this month.

The hope is that AI-driven management software (likely cloud-based) will monitor and control IT and facilities infrastructure, as well as applications, seamlessly and holistically – potentially across multiple sites. Cooling, power, compute, workloads, storage, and networking will flex dynamically to achieve maximum efficiency, productivity, and availability.

While it’s easy to get caught up in the exciting and disruptive potential of AI, it’s also important to reflect on the reality of how most data centers continue to be designed, built, and operated. The fact is that a lot of the processes – especially on the facilities side – are still firmly rooted in the mundane and manual.

And as Google nearly found to its cost, the answers and actions delivered by AI systems may not always be what was originally anticipated.

Just as Skynet in the film The Terminator took a dispassionate, logical view of preventing conflict, finding that mankind was the problem, Google’s algorithm reached a very simple and accurate conclusion about improving the efficiency of its sites:

The model’s first recommendation for achieving maximum energy conservation was to shut down the entire facility, which, strictly speaking, wasn’t inaccurate, but wasn’t particularly helpful either.
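
To see why a naive optimizer lands on that answer, consider a toy model: if the objective is simply to minimize energy and nothing constrains the service the facility must deliver, the optimum is always zero load. The sketch below is purely illustrative and has nothing to do with Google’s actual model; every number is made up.

```python
# Toy illustration of why a naive "minimize energy" objective says "turn it all off".
# Every number here is made up; this is not Google's or DeepMind's actual model.

def energy_kw(servers_on: int) -> float:
    """Crude facility power model: per-server IT load, proportional cooling, fixed overhead."""
    it_load = servers_on * 0.3                   # 0.3 kW per server (hypothetical)
    cooling = 0.4 * it_load                      # cooling scales with IT load (hypothetical)
    overhead = 50.0 if servers_on > 0 else 0.0   # lights, UPS losses, etc. (hypothetical)
    return it_load + cooling + overhead

def optimize(min_servers_required: int, fleet_size: int = 1000) -> int:
    """Minimize energy over server count, subject to a minimum-service constraint."""
    feasible = range(min_servers_required, fleet_size + 1)
    return min(feasible, key=energy_kw)

print(optimize(min_servers_required=0))    # -> 0: "shut down the entire facility"
print(optimize(min_servers_required=400))  # -> 400: sensible once demand is a constraint
```

The moral, of course, is that availability has to be encoded as a hard constraint rather than left implicit in the objective.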

For more head over to DCK.