Data storage used to be a very conservative world, but lately we have been seeing a lot of revolutions: flash for speed, huge disks fighting tape for capacity and long-term archiving, and software to glue it all together, creating new ways to access and manage data.
But it seems that's not enough. In the next few weeks we will be hearing some interesting announcements about in-memory storage… DRAM-based storage? Does it make sense? Will it be another revolution or just another feature?

In-memory means (much) more speed

In-memory storage is nothing new. In-memory DBs are becoming more common now: SAP HANA is probably the best known, but others are joining the party: Oracle has its in-memory option, and some startups (examples here and here) are developing brilliant solutions that leverage this technology.

In-memory means that data resides in the server's memory instead of on a traditional storage system. These kinds of DBs are speedier not only because of faster access to data but also because of the optimizations that become possible when data lives in memory instead of on external disks.
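To make the idea concrete, here is a toy Python sketch (my own illustration, not taken from any of the products mentioned) that contrasts a lookup served from a structure held in RAM with the same lookup served by re-reading a file on disk. The absolute numbers are meaningless, but the gap shows why removing the external storage round trip matters.

```python
import os
import time

# Toy illustration only: compare a lookup served from RAM (a Python dict)
# with the same lookup answered by scanning a file on disk.
records = {f"key{i}": f"value{i}" for i in range(100_000)}

# Persist the same records to a plain text file to simulate disk-resident data.
with open("records.txt", "w") as f:
    for k, v in records.items():
        f.write(f"{k}={v}\n")

def lookup_in_memory(key):
    return records[key]

def lookup_on_disk(key):
    # Naive scan: every lookup re-reads the file, as if data lived only on disk.
    with open("records.txt") as f:
        for line in f:
            k, v = line.rstrip("\n").split("=", 1)
            if k == key:
                return v
    return None

start = time.perf_counter()
lookup_in_memory("key99999")
print("in-memory lookup:", time.perf_counter() - start)

start = time.perf_counter()
lookup_on_disk("key99999")
print("on-disk lookup:  ", time.perf_counter() - start)

os.remove("records.txt")
```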

Availability and resiliency are a problem

On the flip side, DRAM is not a persistent medium. Backend architectures are more complex, and it's harder to reach the same level of availability and resiliency as traditional solutions. Put simply, the risk is that you trade resiliency and availability for performance.
In any case, an in-memory storage system needs to rely on traditional storage to save data permanently, protect against data loss in case of failures, and guarantee durability.
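A minimal sketch of that pattern, assuming a simple write-through log on disk (my own illustration, not any vendor's actual design): reads are served from DRAM, while every write is appended and fsync'd to persistent storage before being acknowledged, so the in-memory contents can be rebuilt after a failure.

```python
import json
import os

class DurableInMemoryStore:
    """Hypothetical in-memory key-value store backed by a persistent log."""

    def __init__(self, log_path="store.log"):
        self.log_path = log_path
        self.data = {}
        self._replay_log()  # rebuild RAM contents from the persistent log

    def _replay_log(self):
        if not os.path.exists(self.log_path):
            return
        with open(self.log_path) as f:
            for line in f:
                entry = json.loads(line)
                self.data[entry["key"]] = entry["value"]

    def put(self, key, value):
        # Write-through: append to the on-disk log *before* acknowledging,
        # so the write survives a crash even though reads come from DRAM.
        with open(self.log_path, "a") as f:
            f.write(json.dumps({"key": key, "value": value}) + "\n")
            f.flush()
            os.fsync(f.fileno())
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)  # served entirely from memory

store = DurableInMemoryStore()
store.put("vm01", "active")
print(store.get("vm01"))
```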

It’s not only about the media

In-memory storage, as happens with DBs (and with flash), needs different data access methods. In this case, to take full advantage of DRAM, you have to rethink all the legacy stuff, like access protocols (e.g. SCSI), drivers, file systems, and so on: they were all designed to work in a different way.
Consequently, you need a new "latency-conscious" scale-out design for the storage system, which makes this architecture quite similar to what we are seeing in most VSAs (Virtual Storage Appliances).

In-memory storage is already here

There are some interesting examples of in-memory storage out there. Atlantis Computing, with its VDI storage solutions, is probably the most advanced (and impressive) player in this space, but others are showing interesting stuff too. For example, Infinio Accelerator uses a small amount of RAM on each node of your VMware cluster to provide a high-performance caching layer at very little cost.
And I won’t talk about RAM disks that you can easily configure on many Unix and Linux operating systems.
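If you want to play with the idea, on most Linux systems /dev/shm is already a tmpfs (RAM-backed) mount, so a quick, unscientific comparison like the following needs no configuration at all. The paths and payload size are just assumptions for the sake of the example.

```python
import os
import time

# Rough illustration only: /dev/shm is a tmpfs (RAM-backed) mount on most
# Linux systems, so writing there exercises a "RAM disk" with no setup.
# Adjust the paths for your own system; this won't run as-is on Windows/macOS.

def time_write(path, data):
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return elapsed

payload = os.urandom(64 * 1024 * 1024)  # 64 MB of random data

print("tmpfs (/dev/shm):", time_write("/dev/shm/test.bin", payload))
print("local disk (./): ", time_write("./test.bin", payload))
```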

Why it matters

In-memory storage won't turn the storage world upside down, but it could be a great feature to speed up your storage infrastructure, especially if it is already virtualized.
A small amount of RAM is much faster than the fastest flash, and it can easily be thought of as the next tier 0 of your storage infrastructure: unmatchable IOPS/$ and, if well organized, almost negligible latency. By the way, DRAM is far more commoditized than flash and it is present in every server!
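As a rough sketch of that tier-0 idea (entirely hypothetical names and sizes), a small DRAM-resident LRU cache sitting in front of a slower backend looks something like this:

```python
from collections import OrderedDict
import time

class RamTier0Cache:
    """Hypothetical DRAM tier 0: a small LRU cache in front of slower storage."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.cache = OrderedDict()

    def read(self, block_id, backend_read):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # cache hit: served from DRAM
            return self.cache[block_id]
        data = backend_read(block_id)          # cache miss: go to the slower tier
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict the least recently used block
        return data

def slow_backend_read(block_id):
    time.sleep(0.005)  # pretend this is a flash/disk array round trip
    return f"block-{block_id}"

cache = RamTier0Cache(capacity=2)
for block in [1, 2, 1, 1, 3, 1]:
    start = time.perf_counter()
    cache.read(block, slow_backend_read)
    print(f"read block {block}: {time.perf_counter() - start:.4f}s")
```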

I recently wrote a blog post about server-side caching and I suggest you give it a read. It could be helpful for getting a full picture of what you can expect from this kind of technology…

Any comments are warmly welcome.