[Disclaimer: before reading this post, just a reminder that I recently wrote a white paper for NetApp and they’ll be sponsoring the TECHunplugged conference in Austin on Feb 2, 2016. The following content has not been reviewed, approved or edited by anyone other than the Juku team.]
NetApp, the company usually known for its FAS appliances, is putting a lot of effort into making its object storage platform, StorageGRID, more competitive. And it’s coming around to what I’ve been saying for a long time now: end users need “Flash & Trash” or, more professionally, a two-tier storage strategy.
In the past, I’ve been very skeptical about NetApp’s product strategy, with ONTAP everywhere and the unified storage approach. It worked for a while (indeed, it built NetApp’s success), but it’s no longer enough to cover all end user needs.
I also have to say that I’ve praised many of their acquisitions, especially Bycast. They all seemed attempts to enlarge the product portfolio, but the integration process itself, for the most part, did not go well and the results were quite disappointing. As in the case of Spinnaker, which took too long to become what it is today (too strong an ONTAP culture, perhaps?).
Fortunately, it seems things are quickly changing and they are finally embracing a different, more open-minded, approach.
StorageGRID 10.2
Bycast StorageGRID was one of those acquisitions. I blamed NetApp for not pushing it hard enough from the beginning, but now they’re getting into gear and the product has become more competitive, with a list of features that can make object storage easier to adopt and deploy in any type of IT organization.
I recently wrote about one of the many use cases of object storage – private cloud storage deployments – but what is really decisive today is not the single use case, which could be considered the starting point, but rather giving the end user options: a broad ecosystem of solutions, letting end users pick what they need now while leaving the door open for the future.
NetApp, and the StorageGRID team, are doing exactly that, building an ecosystem that pivots around an object storage core.
From my POV, the most interesting feature added to this software release is the NAS bridge option (and it is worth noting that 10.2 comes just six months after 10.1, confirming a very fast release cycle), adding to other crucial improvements like global erasure coding, Active Directory compatibility, S3 tiering, improved multi-tenancy and S3/Swift APIs.
The implementation of this NAS bridge is quite interesting because it is deployed as a VM. This also means that, at the moment, the primary focus is not performance but:
• Data ingestion: all files written via NFS/SMB are also accessible via APIs. This gives developers the ability to migrate storage without changing applications, while readying data to be accessed with APIs in the future.
• Distributed NAS: NAS virtual appliances, thanks to a local cache mechanism, can be deployed remotely (ROBO sites) and act as traditional Filers while eliminating all local backup procedures and management.
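To make the dual-access idea concrete, here is a minimal sketch of the concept in Python. Everything in it is illustrative, not NetApp’s actual implementation: I’m assuming the common gateway convention that a file’s path relative to the NFS export root becomes its object key, and the endpoint and bucket names are invented for the example.

```python
import posixpath


def nfs_path_to_s3_key(nfs_path: str, export_root: str) -> str:
    """Map an NFS file path to the object key a NAS-to-object gateway
    might expose (hypothetical convention: the path relative to the
    export root becomes the key)."""
    rel = posixpath.relpath(nfs_path, export_root)
    if rel.startswith(".."):
        raise ValueError(f"{nfs_path} is outside export {export_root}")
    return rel


# A file dropped on the share...
key = nfs_path_to_s3_key("/mnt/grid/projects/q1/report.pdf", "/mnt/grid")
print(key)  # projects/q1/report.pdf

# ...could later be fetched over the S3 API without touching the
# application that wrote it, e.g. with boto3 (endpoint and bucket
# are illustrative placeholders):
#
#   import boto3
#   s3 = boto3.client("s3", endpoint_url="https://grid.example.com")
#   obj = s3.get_object(Bucket="projects-bucket", Key=key)
```

This is exactly the migration story described above: applications keep writing files over NFS/SMB today, while new code can start consuming the same data through APIs tomorrow.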
Last but not least, the NAS bridge is now part of StorageGRID and comes for free, which also means 100% freedom in architecture design for end users!
This solution goes hand in hand with AltaVault (cloud-based VTL) and makes the product family even more compelling for large IT organizations and service providers.
Closing the circle
There are many takeaways from this story:
• Object storage rocks! All vendors are investing in it and end users of any size are looking at it as a potential solution for their infrastructures.
• NetApp is finally exiting from the ONTAP-everywhere loop (or, at least, this is what I get from the effort they are putting in StorageGRID).
• As a consequence, StorageGRID is improving release after release by adding features, integrations and options to its ecosystem… and when new options come for free it’s always good news!
• In my opinion, NetApp is also targeting the sweet spot for new object storage deployments. The top of the high-end market is already in the hands of a few vendors (like Scality, HGST, DDN or the now-IBM-owned Cleversafe), while many projects are now starting in a range from a few hundred terabytes to a few petabytes, where an appliance-based approach, cloud tiering, a robust ecosystem and integrations make the difference. The latter is a much larger market segment, which includes small/medium ISPs and mid/large enterprises as well. (In this space NetApp has cleverly joined vendors like Caringo, Cloudian and HDS, among others.)
And… I’d like to close with some wishful thinking. I know, it’s probably asking too much here, but I’d like to see more integration between ONTAP and StorageGRID. Features like “SnapVault for StorageGRID”, for example, would make a lot of sense for NetApp end users…
“Wishful thinking”, indeed. ;). NetApp’s Data Fabric push would imply that your SV to SG notion is exactly where they are headed, as they have totally rewritten their SnapMirror engine for just this purpose. Great article. (Except the Cloud-based VTL comment… AltaVault is BR/Archive to cloud with rapid ingest and local restore from cache, but uses no tape nomenclatures or logical tape constructs… it simply exposes NAS shares… oh, and now accepts SnapVault from ONTAP.)
Hi Glenn,
Thank you for commenting,
I missed the latest updates about Data Fabric and SnapVault integration (I didn’t have the chance to attend NetApp Insight), but I’m glad NetApp is doing that!
I also have to say that when NetApp briefed me on AltaVault a few months back, I’m sure they used backup as one of the primary use cases, and various slides mentioned CommVault and other backup software. I don’t know if the message has changed, but I’d be happy if it has. Knowing the potential of solutions like AltaVault, it’s the right thing to do.
Best,
E
Don’t get me wrong, you’re spot on. VTL though really denotes (pardon the obvious statement here) exposing something that, from the server side, looks and acts like tape media, but in fact is a disk layer pretending to be tape, with all the benefits/drawbacks that come with that. AltaVault doesn’t do that, it simply exposes a CIFS or NFS share (and now an LRSE-based SnapVault/SnapMirror target), for backup solutions to point to for their direct-to-disk needs. Commvault hasn’t used VTL ever, it’s always only ever needed a share or export to hit for its disk-based backup. Then again, CV also has its built-in dedupe and cloud-copy tech. Different users will need different combinations of these target technologies depending on what their current and future infrastructures will look like.
Anyway, pretty sure that you used “VTL” as a catch-all for D2D, which I see a lot, and that’s fine because I know that YOU know what you meant. 🙂
In any case I’m very excited with the new StorageGrid stuff…it can be a great private cloud back-end for AltaVault as well!