The most common protocols for accessing object stores are HTTP/S-based, S3 being the obvious example, and the majority of these storage systems are scale-out: you can think of their front end as a large pool of web servers. Adding a load balancer in front of that pool is common practice to avoid bottlenecks and make the system more resilient and available. There are two ways to do it:

  1. The infrastructure already has a load balancer in place
  2. A load balancer is deployed with the object store
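
In either case, the effect for applications is the same: clients are pointed at a single, load-balanced endpoint instead of individual storage nodes. As a rough sketch, with an S3-compatible object store the client side might look like the snippet below (the endpoint name, bucket and credentials are placeholders, and boto3 is just one of many S3 SDKs):

```python
import boto3

# Hypothetical load-balanced endpoint sitting in front of the object store's
# pool of front-end web servers; the client never sees the individual nodes.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",  # VIP / DNS name on the load balancer
    aws_access_key_id="ACCESS_KEY",                   # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

# Ordinary S3 calls flow through the load balancer, which spreads them
# across the front-end nodes and routes around failed ones.
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello object storage")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```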

Load balancers

Load balancers are a peculiar piece of technology: they are not purely networking, yet they do not sit on the application side either. Spanning layers 4 through 7 of the OSI stack, they are powerful infrastructure components that, over the years, have proven useful for far more than distributing traffic. Beyond load balancing itself, which can be done in several ways, they take care of high availability, encryption and other security aspects, protocol translation, traffic analysis, and so on.
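
To make the load-balancing part less abstract, here is a deliberately simplified Python sketch of what a balancer does at its core: keep a list of back-end nodes, check their health, and spread requests across the ones that respond. Real products add far more (TLS termination, connection pooling, protocol translation), and the node addresses and health-check path below are made up for the example:

```python
import itertools
import requests

# Hypothetical front-end nodes of a scale-out object store.
BACKENDS = ["http://10.0.0.11:9000", "http://10.0.0.12:9000", "http://10.0.0.13:9000"]
_rotation = itertools.cycle(BACKENDS)

def is_healthy(node):
    """Probe a (made-up) health endpoint; real balancers do this continuously."""
    try:
        return requests.get(f"{node}/health", timeout=1).ok
    except requests.RequestException:
        return False

def pick_backend():
    """Round-robin over the nodes, skipping the ones that fail the health check."""
    for _ in range(len(BACKENDS)):
        node = next(_rotation)
        if is_healthy(node):
            return node
    raise RuntimeError("no healthy back-end nodes available")

# Each incoming request would then be forwarded to pick_backend(). This is the
# essence of load balancing plus basic high availability, minus everything else
# a real product layers on top.
```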

At the same time, load balancers are very mature infrastructure components. Most products on the market tick the same checkboxes, and even if some are faster than others, at the end of the day the differences are minimal. The free and open source options are very good too: HAProxy and NGINX do a great job in many cases and are broadly adopted by both enterprises and ISPs.

The main differences between open source and commercial products, including those with an open source core, come down to appliance form factors, hardware acceleration, and support, which is why commercial load balancers are still preferred by many enterprises. Hardware acceleration usually means dedicated FPGA/ASIC cards that offload encryption and other tasks that can easily clog a general-purpose CPU.

Back to the object stores

Vendors usually provide best practices for integrating their object store with a load balancer, and some give end users more options. A low-cost open source option is always included, but not all organizations have the skills, or the will, to work without a UI and, above all, without support.

In some cases, the vendor provides a list of preferred, certified solutions that have gone through a validation process and can count on specific optimizations that improve infrastructure efficiency. This is the case, for example, with Kemp Technologies and Dell EMC ECS. Kemp has developed a series of optimizations for ECS that let end users get the most out of the object storage infrastructure while being assured of end-to-end support thanks to the relationship between the two vendors. In this specific case, the integration goes even further: the load balancer can be configured to optimize traffic on the back end and fetch the necessary data more quickly with fewer hops, for better performance and lower bandwidth consumption.

Here is a video, recorded last April at Tech Field Day Extra at Dell Technologies World, that shows the interaction between Kemp and Dell EMC ECS. It gives a clear idea of the type of integration I’m talking about.

Closing the Circle

Load balancers are a necessary component for practically every object store in production. They are a key element of the infrastructure, and every end user should evaluate this aspect carefully before acquiring the object store. As mentioned, most object store vendors offer several alternatives for load balancing, each with its pros and cons.

If you already have a load balancer in place, it is highly likely that it will work just fine with the new object store. But if you want a dedicated, optimized solution to serve object storage workloads at their best, checking with the vendor for specific solutions and integrations, like Kemp's, is worthwhile.