The short answer is No… Well, at least not in 99% of the cases.
Last week at #TFDX, during VMworld, I met Violin Memory, a company that was a pioneer in Flash storage but is now struggling because it has lost some of its credibility due to the lack of a decent feature set (I’m referring to data services, integration with hypervisors and OSes, and so on). They are catching up on features with the Concerto 7000 family… but, looking at their revenues, they’re still in dire straits.
Perhaps this is more of a marketing problem than a real technical one, but now they have to maintain both hardware and software R&D… which adds to the cost while decreasing flexibility and lengthening product development cycles.
For example, some startups in this space have already released products based on TLC and 3D NAND, cutting prices and now competing at around $1/GB. Will Violin ever be able to do that?
Chris Mellor, in one of his recent articles, wonders whether proprietary/specialized design is better than commodity. I think the answer, as often happens, is “it depends”. And I think it mostly depends on your target market.
Flash is fast, durable, efficient
Coming from traditional HDDs, Flash is way faster! A single enterprise-grade 2.5″ SSD can sustain 60-70K IOPS with very good latency. And now, thanks to NVMe, we will get even better results both in terms of IOPS and latency.
In fact, the major problem we have today is the SAS backend, not because of SAS itself but because of the layers of controllers (and, yes, also partially due to a protocol which was not designed for Flash).
Capacity looks like a problem of the past too. 4TB drives are now available and their power consumption, still in the 7W range, is also improving the overall power efficiency of storage systems.
And you know what? Flash is more durable than disk too! Failures are much more predictable, Flash drive controllers are much more efficient than in the past and array vendors know how to work around NAND memory limitations and constraints.
So, from the end-user standpoint, you get comparable prices, strong performance and better efficiency than you had in the past! The net result is that end users can now buy Flash arrays at disk array prices… arrays that are 10 to 100 times faster, at the same price and with better features… do we really need more today?
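Just to put those numbers in perspective, here is a minimal back-of-envelope sketch, purely for illustration. The SSD figures are the rough ones quoted above; the HDD figures (about 200 IOPS and 10W for a nearline 7.2K rpm drive) are my own assumptions, and real array-level gains will be lower because controllers and the backend get in the way, hence the 10-100x range.

```python
# Back-of-envelope comparison of a single SSD vs a single HDD.
# SSD figures (60-70K IOPS, 4TB, ~7W) are the rough numbers quoted above;
# the HDD figures (~200 IOPS, 4TB, ~10W) are assumptions for comparison only.

hdd = {"iops": 200, "capacity_tb": 4, "watts": 10}
ssd = {"iops": 65_000, "capacity_tb": 4, "watts": 7}

iops_ratio = ssd["iops"] / hdd["iops"]                 # per-device speedup
watts_per_tb_hdd = hdd["watts"] / hdd["capacity_tb"]   # power per raw TB
watts_per_tb_ssd = ssd["watts"] / ssd["capacity_tb"]

print(f"IOPS per device: SSD is ~{iops_ratio:.0f}x an HDD")
print(f"Power: {watts_per_tb_hdd:.2f} W/TB (HDD) vs {watts_per_tb_ssd:.2f} W/TB (SSD)")
```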
Someone needs more
99% of workloads can be served by general purpose arrays. But in some cases you are in that 1%!
There isn’t any commodity hardware in that space: economically speaking, it simply doesn’t make sense for vendors to produce denser, more efficient and faster hardware, because they can’t sell enough of it to justify the R&D and production costs.
On the other hand, if money is not a problem and you want better performance and efficiency, the only solution is specialized hardware. Companies like EMC DSSD (and, once, Violin) target these markets, and their customers are in that 1%: high-speed financial trading, some big data applications and very high-end transactional DBs simply give you results faster if you give them more in terms of IOPS and latency.
This is also true if you need the highest possible capacity and power efficiency. You can’t pack half a PB of Flash into 3U with commodity hardware. But how many people need that today?
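A quick sanity check of that claim, again just a sketch: the 4TB drive size comes from the discussion above, while the slot count for a 3U commodity chassis is my own (generous) assumption.

```python
# Rough density check for the "half a PB in 3U" claim. The 4TB drive size comes
# from the text; the assumption that a 3U commodity chassis holds ~36 2.5" slots
# is mine, for illustration only.

target_tb = 500          # half a petabyte
drive_tb = 4             # commodity 2.5" SSD capacity from the text
slots_per_3u = 36        # generous assumption for a dense 3U commodity chassis

commodity_tb = slots_per_3u * drive_tb
drives_needed = target_tb / drive_tb

print(f"A 3U commodity chassis tops out around {commodity_tb} TB raw")
print(f"Half a PB would need ~{drives_needed:.0f} drives of this size,")
print("which is why that density takes custom flash modules rather than standard SSDs.")
```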
Closing the circle
Big vendors like Micron, SanDisk or Intel are already doing the dirty work for us… and they have already demonstrated the quality of their work many times!
If you fall into the 99% of use cases (which you probably do!), commodity hardware is the way to go. Nowadays all the innovation happens in software… and you want software features because you want to make your life easier. It’s not just about Flash storage; it’s happening all over the datacenter. Right?
Otherwise, if you really are in that 1% niche, you have a lot of money and you can afford to buy unbelievably expensive (faster and more efficient) specialized hardware. Just because that’s the only way you can stay competitive.
DSSD has a clear positioning today and they target super-ultra-high-end applications… they need specialized hardware (and probably software) and since they are selling in that tier 0 (at the very top of the pyramid), cost is not the biggest issue at the moment. They are also lucky because they are part of a much larger organization that can help them find the few opportunities out there.
Back to Violin: they were serving tier 0 at the beginning, but they weren’t part of a larger organization (and HP dropped the reselling contract before it became anything serious). Violin’s growth plans matched neither the niche they were selling to nor the feature set they needed to compete in the general-purpose array market. A shortsighted strategy from the beginning, after all: people don’t buy a Ferrari to bring their 4 kids to school, and Ferrari will never become the next Toyota!
Will they ever recover from this situation? I don’t think so. The primary storage market is really tough and I can’t see Violin making a dent with the Concerto 7000, especially because the IP behind the 7000 comes from FalconStor, so the feature set is almost identical to what you can find from other vendors (X-IO, for example)… It means the backend hardware is less visible while the software becomes commodity: the worst situation you can put yourself in!
One last note. Chris, in his article, mentioned that Pure Storage is probably working on proprietary flash modules… I don’t know what they are developing in their labs, but I’m sure they are working on something. If they want to succeed and grow, they need more products. Block storage alone is just not enough if you really want to “paint all datacenters Orange!”
I’m convinced that they want to flesh out their product line, and since they already have a commodity hardware array, I’m sure they are investigating solutions that need more scalability and efficiency to address different use cases… and maybe that also means more specialized hardware.
If you want to know more about this topic, I’ll be presenting at the next TECH.unplugged event in Amsterdam on 24/9. It’s a one-day event focused on cloud computing and IT infrastructure, with an innovative formula that combines a group of independent, insightful and well-recognized bloggers with disruptive technology vendors and end users who manage rich technology environments. Join us!
Disclaimer: I was invited to this meeting by GestaltIT and they paid for travel and accommodation, I have not been compensated for my time and am not obliged to blog. Furthermore, the content is not reviewed, approved or edited by any other person than the Juku team.
Hi Enrico,
About Pure: if you have a look at the evolution between the FA and //m series, Pure has already invested in the hardware design part. I’m not sure if they go down to the very lowest level but, anyway, the changes they came up with are already impressive.
As far as I know, all components in the //m series are standard. No ASICs, no custom Flash modules, no special interconnect… And nothing changed in the way Pure accesses SSDs.
I don’t know how far you stretch the definition of standard components, but the NVRAM modules are custom designed (cf. http://www.purestorage.com/blog/flasharraym-nvram/). The interconnect between controllers is also a great piece of architecture to me.
I think most of the advantages come from the adoption of NVMe and not from the custom design, and you can easily find PCIe NVRAM products with very similar performance (http://www.marvell.com/storage/dragonfly/assets/Marvell_DragonFly_NVRAM-001_product_brief.pdf). In fact, they always mention standard components/protocols in the article.
In any case, I’ll meet up with Pure next week and I’ll ask them for more details about the NVRAM module and the PCIe interconnect (which is not new, and is already present in controllers from other vendors).
And this doesn’t mean that Pure didn’t do a great job with their new //m series 😉