AI, containers, cloud computing, large storage infrastructures, IoT, HPC… and probably more. To serve all of this, large service providers and enterprises are building huge datacenters where everything is designed to maximize efficiency.

The work covers every aspect of the datacenter: the facility itself, power and cooling, security, and compute density. A lot has been done, but even more is being asked.

Failed attempts to do more (with less)

In the past, many vendors tried to get more work done with approaches that failed miserably. Do you remember Sun's SPARC T1 processor, for example? Launched in 2005: 72 watts, 8 cores with 4 threads each (32 threads total), and a 1.4GHz clock… but it was ahead of its time. Most software was still single-threaded and didn't run well on this kind of CPU.

We have also seen several attempts to push ARM CPUs into the datacenter, 32-bit processors first and 64-bit later. They all failed for the same reason that afflicted Sun's CPU… plus, in some cases, the lack of optimized software.

But core counts continued to increase (Intel now ships up to 24 cores in a single CPU), and software followed the same trend, first with multithreading and now all the way down to microservices. Applications organized as single-process containers are just perfect for this type of CPU.
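To make the parallelism argument concrete, here is a minimal Python sketch, with purely illustrative names and workload, of the pattern that lets many modest cores compete with a few fast ones: an embarrassingly parallel, CPU-bound job split evenly across worker processes.

```python
# Illustrative sketch: split a CPU-bound job across all available cores.
# The workload (trial-division prime counting) is deliberately naive;
# the point is the fan-out pattern, not the algorithm.
import os
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately CPU-bound)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def parallel_count(limit, workers):
    """Split [0, limit) into equal chunks and count primes in parallel."""
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], limit)  # last chunk absorbs the remainder
    with Pool(workers) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    workers = os.cpu_count() or 1
    print(parallel_count(100_000, workers))
```

On a many-core CPU (ARM or otherwise) the wall-clock time shrinks roughly with the number of workers, which is exactly why per-core speed matters less for this class of workload than how many cores fit in a given power and space budget.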

Thank you Raspberry Pi (and others)

Today, Linux on ARM and a lot of other open source software (as well as a specific version of Windows 10!) are much better optimized to run on ARM CPUs than in the past.

Raspberry Pi, the super-cheap computer (starting at $5 now) launched in 2012, opened a world of opportunities for hobbyists, students and developers of all levels. Prototyping is much easier and less expensive, while the community ecosystem is growing exponentially. Raspberry Pi, and all its clones, are of course not designed for the datacenter… but it is also true that this small computer has inspired a lot of people and sits at the base of some very cool projects, including HPC and Docker Swarm clusters!

The next step

ARM CPUs are particularly efficient when it comes to power consumption, and they are becoming more and more powerful. What's more, these CPUs are usually designed with a SoC (System-on-a-Chip) approach, which simply means the chip already contains many of the other components needed to build a computer: multiple cores are often coupled with a GPU, network and storage controllers, and so on.

This doesn't mean more compute power per se, but it does mean more compute power at lower power consumption per square centimeter. And that is exactly what datacenter architects are craving!

Back to the datacenter

Contrary to the past, all the components are now in place to build a successful ARM-based datacenter ecosystem. These CPUs don't match x86 CPUs on per-core performance, but many applications and workloads run in a massively parallel fashion, and container adoption will push this trend further. At the end of the day, for many workloads, compute density is becoming more important than single-core performance.

Other aspects include:

  • software, which is much better optimized than in the past,
  • 64-bit ARM CPUs, which are much more mature now,
  • automation and orchestration tools, which are now ready to handle hundreds of thousands of nodes in a single infrastructure.

Today ARM CPUs are relegated to small appliances, or serve as components of larger x86-based systems, but this could change pretty soon. I want to mention Kaleao here, a startup working on an interesting ARM-based HCI solution. It is just one example; many others are working on ARM-based solutions for the datacenter now.

Closing the circle

ARM has potential in the datacenter, but we've been saying that for years now, and reality has always shown the contrary. This time around things could be different: the stars are all aligned, and if it doesn't start happening now, I think it will only get harder in the future.

It's also interesting to note that there is a lot of stirring around compute power in large-scale datacenters: Google designing its own specialized chips for AI, alternative CPUs and GPUs for HPC-like applications in the cloud, and quantum computing are just a few examples… ARM is one of the multiple options on the table for building next-gen datacenters.

My last note goes to Intel, which has demonstrated multiple times that it is capable of reacting and innovating when the market changes. Its CPUs are very powerful and the instruction set has improved generation after generation. Are power consumption and density at the core of its current designs? Definitely not, and its CPUs don't look like the best fit for future cloud applications… but who knows what's up their sleeve!

If you are interested in these topics, I'll be presenting at the next TECHunplugged conference in Chicago on 27/10/16: a one-day event focused on cloud computing and IT infrastructure, with an innovative formula that combines a group of independent, insightful and well-recognized bloggers with disruptive technology vendors and end users who manage rich technology environments. Join us!