Last week I attended Storage Field Day 3, where I met some interesting companies that are using flash to build hybrid storage solutions.
It looks like 100% flash products are still too expensive for the rest of us, and hybrid is gaining more and more attention from end users. The easiest, and probably most efficient, way to use flash in the hybrid world is for caching: in practice, flash on the front end acts as a large buffer that compensates for a relatively slow back end made of mechanical disks.
Long story short: a relatively small amount of SSD cache can boost performance through the roof without draining your pockets, but it’s important to understand how (and where) to use it!
During the event we saw many different caching approaches, different topologies and different levels of integration with the upper software layers, each with its pros and cons.
On the server side
Caching on the server side is the most radical approach compared to ordinary shared storage architectures, and it gives the best performance. On the flip side, there is a big drawback: the complexity of the software layer needed to maintain cache coherency, availability and resiliency in clustered scenarios.
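To see why coherency is the crux of server-side caching, here is a toy write-through cache in Python (a conceptual sketch, not any vendor’s implementation; the class and names are made up for illustration). Because every write also goes straight to the shared array, other hosts never read stale data, which is exactly the guarantee that gets hard to keep once you move to faster write-back designs.

```python
# Toy sketch of a server-side write-through flash cache.
# Reads are served from the local "flash" dict when possible;
# writes go to both the cache and the shared array, which is
# what keeps other hosts from reading stale data.

class WriteThroughCache:
    def __init__(self, backend, capacity=2):
        self.backend = backend      # the shared (slow) array
        self.cache = {}             # the local flash tier
        self.capacity = capacity

    def read(self, block):
        if block in self.cache:     # cache hit: flash latency
            return self.cache[block]
        data = self.backend[block]  # cache miss: disk latency
        self._fill(block, data)
        return data

    def write(self, block, data):
        self.backend[block] = data  # write-through: array is always current
        self._fill(block, data)

    def _fill(self, block, data):
        if block not in self.cache and len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # naive FIFO eviction
        self.cache[block] = data

array = {"b0": "old"}
host = WriteThroughCache(array)
host.write("b0", "new")
assert array["b0"] == "new"        # other hosts see the write immediately
assert host.read("b0") == "new"    # this host gets it at flash speed
```

With a write-back design the array is updated lazily instead, so the only current copy of a written block may live in one host’s flash; that is why clustered write-back caching needs the replication and resiliency machinery described above.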
Some solutions brilliantly mask this complexity through tight integration at the operating system or hypervisor level. The most vivid example here was PernixData’s FVP. The demo (a complete video here) shows a product that is completely transparent to the whole infrastructure while adding enterprise-class resiliency and availability features. The speed is amazing, too.
My only concern about PernixData’s solution is that it only supports VMware vSphere… (indeed, most end users don’t see this as a big problem).
Other players have chosen to support a wider range of operating systems, and SanDisk’s FlashSoft is a good example (here are the videos of their session). This software solution supports more operating systems (Microsoft Windows, VMware, Linux), but at the cost of looser integration. The real advantage for the end user lies in having a single, almost identical, software layer for all operating systems: from the end user’s perspective it’s a good idea to have a single enterprise-wide caching platform to manage. In any case, a careful evaluation of limits and constraints is important before deploying it.
There is also a third way to obtain the maximum performance at the lowest price: avoid any form of cluster and enable application-level data replication (e.g., the log-shipping mechanisms provided by almost all databases). It isn’t exactly a good option in terms of availability, but it has to be considered in all those cases where fast access to data and cost are more important than any other parameter.
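The log-shipping idea can be reduced to a few lines (a generic sketch, not any specific database’s feature; all names here are invented for illustration): the primary appends every write to a log, and the replica periodically replays the shipped segments. The sketch also shows the availability trade-off mentioned above: between shipments, the replica lags behind.

```python
# Toy illustration of application-level log shipping:
# the primary appends every write to a log, and the replica
# periodically replays only the newly shipped log entries.

primary_data, log = {}, []

def primary_write(key, value):
    primary_data[key] = value
    log.append((key, value))          # write-ahead record of the change

replica_data, replayed = {}, 0

def ship_and_replay():
    global replayed
    for key, value in log[replayed:]: # ship only the new log segment
        replica_data[key] = value     # replay it on the replica
    replayed = len(log)

primary_write("a", 1)
primary_write("b", 2)
ship_and_replay()
assert replica_data == {"a": 1, "b": 2}

primary_write("c", 3)
assert "c" not in replica_data        # the replica lags until the next shipment
```

That lag window is precisely why this approach trades availability for simplicity: a failover before the next shipment loses the unshipped writes.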
On the array side
Hybrid arrays are gaining a lot of traction. Last week we met Starboard Storage (videos), NetApp (videos) and NexGen, now part of Fusion-io (videos). All of them use flash to lower latencies and increase IOPS. And not only that: SSDs can also be used to enable (or improve) smart features like real-time deduplication.
Hybrid storage systems aren’t as fast as full-SSD systems, but they are good enough to serve most workloads, especially in SMB environments.
The value of hybrid arrays has also been proven by some news I heard a couple of days ago from a Nimble Storage representative: more than 30% of the revenues of this (very successful) hybrid storage startup come from VDI projects! That’s huge, and it’s proof that this kind of storage system allows medium-sized companies to deploy VDI without specialized, optimized infrastructures.
Caching vs. Tiering
Tiering is a very good technology (companies like Compellent made their fortune on it), but is it being surpassed by SSD caching?
Automated tiering was developed when storage arrays were built around mechanical disks: the performance, latency and size of the fastest disks were always comparable with those of the slowest ones.
Now flash can deliver 50-100x the performance of the fastest hard disk drive, while the flash layer can be smaller than a single disk… and I haven’t even mentioned latency!
In short: data-movement algorithms need to be revised and adapted to use the new media more efficiently. The problem is that the more you speed up the data movement between flash and disks, the more it looks like a caching mechanism… but, I know, it’s a long story.
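The convergence between the two mechanisms can be sketched in a few lines of Python (a conceptual toy, not any array’s actual algorithm; every name here is invented). Caching *copies* a block into flash on every access, while tiering waits for a scheduled pass and then *moves* the hottest blocks, so each block lives in exactly one place:

```python
from collections import Counter

disk = {f"b{i}": f"data{i}" for i in range(6)}  # cold tier for the caching demo
tier_disk = dict(disk)                          # separate copy for the tiering demo
flash_capacity = 2

# --- caching: reacts to every single access ------------------------
flash_cache = {}

def cached_read(block):
    if block not in flash_cache:
        if len(flash_cache) >= flash_capacity:
            flash_cache.pop(next(iter(flash_cache)))  # naive FIFO eviction
        flash_cache[block] = disk[block]              # copy; disk keeps its copy
    return flash_cache[block]

# --- tiering: reacts only at scheduled passes ----------------------
hot_tier = {}
access_counts = Counter()

def tiered_read(block):
    access_counts[block] += 1
    return hot_tier.get(block) or tier_disk.get(block)

def tiering_pass():
    tier_disk.update(hot_tier)  # demote everything...
    hot_tier.clear()
    for block, _ in access_counts.most_common(flash_capacity):
        hot_tier[block] = tier_disk.pop(block)  # ...then MOVE the hottest blocks up

for b in ["b0", "b1", "b1", "b2", "b1", "b0"]:
    cached_read(b)
    tiered_read(b)
tiering_pass()
```

After this run the cache holds whatever was touched recently, while the hot tier holds the statistically hottest blocks; shrink the interval between `tiering_pass` calls toward zero and the two behaviors become hard to tell apart, which is exactly the convergence described above.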
Flash in servers gives you the best performance, but it adds complexity to maintain high availability and resiliency, and it requires more expertise to identify and design the right solution.
Flash in hybrid arrays is very easy and transparent to adopt, but it’s not top notch in terms of performance.
The former is clearly better suited to large enterprises and demanding environments, while the latter fits best in SMB environments.
Disclaimer: I was invited to this meeting by Gestalt IT, who paid for travel and accommodation. I have not been compensated for my time and am not obliged to blog. Furthermore, the content is not reviewed, approved or published by anyone other than the Juku team.