Last year at re:Invent, Amazon AWS launched Outposts and finally validated the concept of hybrid cloud. Not that the validation was really necessary, but still…
At the same time, what a couple of years ago was called a cloud-first strategy, with the idea of starting every new initiative in the cloud (usually with a single service provider), has been evolving into a multi-cloud strategy built on a broad spectrum of possibilities, ranging from deployments on public clouds to on-premises infrastructures.
Purchasing everything from a single service provider is very easy and solves a lot of issues but, in the end, you accept a lock-in that doesn’t pay off in the long run. Last month I was with the IT director of a large manufacturing company in Italy who told me that, in the last couple of years, they enthusiastically embraced one of the major cloud providers for almost every critical project of the company… only to realize now that the IT budget is out of control, even after accounting for new initiatives such as IoT projects. Their main goal for 2019 is to find a way to regain control, by repatriating some applications and building a multi-cloud strategy, and to avoid putting all their eggs in a single basket.
There is multi-cloud and then there is multi-cloud
My recommendation to them was not to merely select a different provider for every project, but to work on a solution that would allow them to abstract applications and services from the underlying infrastructure. Meaning that you can buy a service from a provider, but you can also decide to go for raw compute power and storage and build your own service instead. That service will be optimized for your needs, easy to replicate, and easy to migrate across different clouds.
Let’s take an example. You can consume a NoSQL database from your provider of choice, or you can decide to build your own NoSQL DB service from products available on the market. The former is easier to manage, while the latter is more flexible and cheaper. Containers and Kubernetes make it easier to deploy, manage, and migrate it from cloud to cloud.
Kubernetes is now available from all the major providers in various forms. The core is the same and it is pretty easy to migrate from one platform to another. And once you get into containers, you’ll find loads of pre-built images, plus the ones you can build yourself, for practically every need.
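To make the portability argument a bit more concrete, here is a minimal sketch using the official Kubernetes Python client: the same deployment object is pushed to several clusters, one per provider, identified only by kubeconfig context names. The context names, image and replica count are hypothetical placeholders for whatever containerized NoSQL engine you pick, not a recipe from any specific vendor.

```python
from kubernetes import client, config

# Hypothetical kubeconfig context names, one per cloud/on-prem cluster.
CONTEXTS = ["eks-prod", "aks-prod", "on-prem"]

def nosql_deployment():
    """Build a minimal Deployment for a containerized NoSQL service (Cassandra used as an example)."""
    container = client.V1Container(
        name="cassandra",
        image="cassandra:3.11",  # any containerized NoSQL engine would do
        ports=[client.V1ContainerPort(container_port=9042)],
    )
    spec = client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "cassandra"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "cassandra"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="cassandra"),
        spec=spec,
    )

for ctx in CONTEXTS:
    # One API client per cluster, all driven by the same manifest.
    api = client.AppsV1Api(config.new_client_from_config(context=ctx))
    api.create_namespaced_deployment(namespace="default", body=nosql_deployment())
    print(f"deployed to {ctx}")
```

The point is not the specific database: because the deployment definition is the same everywhere, the only per-provider piece is the cluster you point it at.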
Multi-cloud storage
Storage, as always, is a little bit more complicated than compute. Data has gravity and, as such, is difficult to move… but there are a few tools that can come in handy when you plan for multi-cloud.
Block storage is the easiest to move. It is usually smaller in size, and there are now several tools that can help you protect, manage, and migrate it, both at the application and at the infrastructure level. There are plenty of solutions I can think of. In fact, almost every vendor now offers a virtual version of its storage appliances that can run in the cloud, along with other tools to facilitate migration between clouds and on-premises infrastructures (think of Pure Storage or NetApp, just to name a couple). And it’s even easier at the application level… if I go back to the NoSQL database I mentioned earlier, you can find solutions like Rubrik Datos IO or Imanis Data that can help you with that.
File and object stores are way bigger and, if you do not plan in advance, things can get a bit complicated (but still feasible). The first thing to do is to work with standard protocols and APIs. If you choose the S3 API for your object storage needs, it is very easy to find a compatible storage system both in the cloud and for your on-premises infrastructure. At the same time, there are now many interesting products that allow you to access and move data transparently across several repositories (the list is getting longer by the day but, just to give you an idea, take a look at Hammerspace, Scality Zenko, Red Hat NooBaa and SwiftStack 1Space). I recently wrote a report for GigaOm about this topic and you can find more here.
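As an illustration of why standardizing on the S3 API helps, here is a minimal sketch using boto3: the same put_object call works against AWS or against any S3-compatible on-premises store, and only the endpoint changes. The endpoint URL and bucket names are hypothetical, and credentials are assumed to come from the usual environment or profile configuration.

```python
import boto3

# Hypothetical backends: the same S3 API call works against AWS and against
# any S3-compatible object store; only the endpoint changes.
BACKENDS = {
    "aws":     {"endpoint_url": None},  # default AWS endpoint
    "on_prem": {"endpoint_url": "https://s3.internal.example.com"},
}

def put_object(backend, bucket, key, data):
    cfg = BACKENDS[backend]
    s3 = boto3.client("s3", endpoint_url=cfg["endpoint_url"])
    s3.put_object(Bucket=bucket, Key=key, Body=data)

# The application code never changes when the data moves between repositories.
put_object("aws", "backups", "db/dump-2019-02.tar.gz", b"...")
put_object("on_prem", "backups", "db/dump-2019-02.tar.gz", b"...")
```

This is exactly the kind of abstraction the data-mobility products mentioned above build on, just taken down to the smallest possible example.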
The same goes for other solutions. Why would you stay with a single cloud storage backend when you can have multiple ones, get the best out of each, maintain control over your data, and manage it all on a single overlay platform that hides complexity and optimizes data placement through policies? Take a look at what Cohesity is doing to get an idea of what I mean here.
The human factor of multi-cloud
Regaining control of your infrastructure is good from the budget perspective and for the freedom of choice it gives you in the long term. On the other hand, working more on the infrastructure side of things means that you have to invest more in people and their skills. I’d call this an advantage, but not everybody thinks this way.
In my personal opinion, a more skilled team is highly likely to make better choices, react more quickly, and build optimized infrastructures that have a positive impact on the competitiveness of the entire business. On the other hand, if the organization is too small, it is hard to find the right balance.
Closing the circle
Amazon AWS, Microsoft Azure and Google Cloud are building formidable ecosystems and you can decide that it is ok for you to stick with only one of them. Perhaps your cloud bill is not that high and you can afford it anyway.
You can also decide that multi-cloud means multiple cloud silos, but that is a very bad strategy.
Alternatively, there are several options out there to build your Cloud 2.0 infrastructure and maintain control over the entire stack and your data. True, it’s not the easiest path, nor the least expensive at the beginning, but it is the one that will probably pay off the most in the long term and will increase the agility and competitiveness of your infrastructure. This March I will be co-hosting a GigaOm webinar on this topic, and there is an interview I recorded not too long ago with Zachary Smith (CEO of Packet) about new ways to think about cloud infrastructures. It is worth a listen if you are interested in knowing more about a different approach to cloud and multi-cloud.
Well, a number of object-based storage providers offer a “multi-cloud” approach to data storage, but that approach is mostly about sending your data from your private or community object storage cluster to AWS, Google or Microsoft. Of course, you have to pay to bring it back from them, and their egress fees are nothing to sneeze at if you have lots of data to move. In fact, data egress fees are many times what it costs to actually store the data. Thankfully, you mentioned Zachary Smith and Packet near the end of your blog entry. Packet and its “federation” approach to cloud computing have exposed the unnecessary expense of paying to move your data when you have compute here and storage there. Packet believes this should be a zero-rate proposition for the customer. Time for a smarter cloud computing environment based on federation, not on the big three and their way of doing things. Thanks for posting the link to your interview with Zachary Smith. Packet is doing something that needs to be done.
Hi Tim,
Thanks for chiming in!
Yes, I really hope that Packet and its partners will succeed with this no-transfer-fee strategy.
It’s good to see that people are finally realising that the cloud isn’t that nice fluffy thing they’ve been sold.
Some workloads optimised to be hosted by third parties may be OK, but there is a large range of predictable workloads that do not make sense on somebody else’s computers.
I’ve personally settled with OnApp which is implemented by CSPs but also on-premises to provide Private, Hybrid and Multi-Cloud configurations with costs that can be a lot lower than offerings by the usual 3 big providers.