How important is object storage? Too important to fail!

A few days ago, a problem with AWS S3 caused major issues across the entire internet. We could say that it shouldn’t happen… but, you know, it does. And, in this particular case, even the “design for failure” approach to application design wasn’t enough… because most applications have always treated S3 as infallible (and that particular AWS region hosts some critical services for Amazon itself).
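To make the idea concrete, here is a minimal sketch, in Python with boto3, of what “design for failure” can look like around S3. The bucket and key names and the in-memory fallback cache are illustrative assumptions, not anything from the outage itself; the point is simply to keep a last-known-good copy and degrade gracefully instead of assuming the service never goes down.

    import boto3
    from botocore.exceptions import BotoCoreError, ClientError

    s3 = boto3.client("s3")
    fallback_cache = {}  # last-known-good copies, keyed by (bucket, key)

    def fetch_object(bucket, key):
        """Read from S3, but never assume it is infallible."""
        try:
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            fallback_cache[(bucket, key)] = body  # refresh the fallback copy
            return body
        except (BotoCoreError, ClientError):
            # S3 is unreachable or erroring: serve a possibly stale copy
            # instead of cascading the failure to the whole application.
            return fallback_cache.get((bucket, key))

Nothing sophisticated, but it is exactly the kind of defensive pattern that was missing in many of the applications affected by the outage.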

Clouds bring rain

To be fair, it’s not only storage that fails. In the last two or three years, thanks also to the growing relevance of cloud infrastructures and cloud-based applications, we have been hearing about, or experiencing, a lot of small and large service disruptions. Network outages, compute failures, security breaches, you name it! There are records of every kind of component failing all the time but, again, storage failures are the ones with the worst impact.

Stateless vs Stateful

Most infrastructure components are fundamentally stateless. This is an oversimplification of course, but it conveys the idea of the difference between storage and the rest. If a network switch is lost, it just needs to be rebooted or replaced to start working again. With storage… well… it’s much more complicated. A serious failure can result in data loss and, even if you have backups, any data written since the last backup is gone. Furthermore, you can move data through a network, or you can process it with compute… but without access to the data you can do nothing.

Even more so, with primary storage moving towards memory-class and in-memory technologies, object storage is quickly becoming the most interesting system for data persistence (you could think of it as the stateful storage).

Invisible, yet relevant

Object storage, AWS S3 in this case, is a fundamental component of modern IT infrastructures and, more generally, of the whole internet now; this is why, when it fails, many other services fail with it. These kinds of storage infrastructures are relevant for everybody, whether they run on premises or in public clouds. They are used to consolidate huge amounts of data (no matter what huge means to you). In fact, an object storage cluster can start with a few tens of terabytes now, while the largest installations are in the order of hundreds of petabytes, with some having already exceeded the exabyte barrier!

The backbone of modern storage infrastructures

Many organizations don’t actually realise they are already relying on object storage; for others the situation is already clear, but migrating from a traditional model to a two-tier storage strategy takes time, even for enthusiasts.
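For readers new to the idea, a two-tier strategy can be as simple as this hypothetical Python sketch: keep hot data on fast primary storage and periodically move cold files down to an object store. The directory, bucket name and age threshold are illustrative assumptions, not a specific product’s behaviour.

    import os
    import time
    import boto3

    s3 = boto3.client("s3")

    def archive_cold_files(directory, bucket, max_age_days=90):
        """Move files untouched for max_age_days from the fast tier to S3."""
        cutoff = time.time() - max_age_days * 86400
        for name in os.listdir(directory):
            path = os.path.join(directory, name)
            if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
                s3.upload_file(path, bucket, name)  # capacity tier (object storage)
                os.remove(path)                     # free space on the performance tier

Real deployments do this with lifecycle policies, HSM software or application logic rather than a cron script, but the division of labour between the two tiers is the same.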

The most important aspects of all modern infrastructures are agility and flexibility. Things change really fast and, if you want to consolidate everything but the low-latency workloads, you need to react quickly and add or change infrastructure resources without impacting performance, availability or resiliency. In fact, I’m fortunate to be working for one of the startups with the most flexible object storage technology, capable of building any sort of backend and of growing at any pace without the constraints imposed by traditional solutions. I joined them because I think this is one of the keys to succeeding.

Closing the circle

Object storage is everywhere and the number of organizations adopting it is incredible. In my daily job I talk to a lot of end users who have already adopted it or who are ready to. Their diversity amazes me too: in the same day it’s not uncommon to talk to customers with 20+ petabyte installations as well as others with just 40 TB in production. Different sizes and use cases maybe, but both really relevant to their business and their competitiveness.

Are you interested in knowing more about object storage? Join me on the 6th of April (11 AM CET and 10 AM PT) for a webinar where we will discuss its benefits, use cases and how to implement it to get the best ROI. Here’s the registration link.