How many times have you heard statements like this? Tape is dead! Mainframe is dead! And so on… It turned out not to be true; most of the time it was just a way of saying that a newer technology was seeing adoption so strong that it eclipsed the older one in the eyes of the masses. But, in the case of FCoE, it is slightly different.
No one wanted it!
Well, no one except Cisco.
FCoE is a standard for encapsulating FC frames in Ethernet networks: a lossless protocol carried over a best-effort network. Why? Technically speaking, it makes no sense!
But FCoE was never really about technical reasons. At the time, the FC market was in Brocade’s hands while iSCSI was not considered enterprise-grade. Cisco was not as good as Brocade at FC, but it had a lot of knowledge of Ethernet and IP. So it pushed very hard, at every level, to promote FCoE.
A protocol with a lot of implications, requiring above all specialized adapters (CNAs) and specialized high-end Ethernet switches. Long story short, it could have meant a lot of revenue and control over the entire datacenter network! But it didn’t work out as expected.
Very few adopted it!
The list of problems in adopting FCoE was very long:
– It’s complicated
– It mixes FC and Ethernet knowledge and teams (storage and networking people don’t mingle!)
– High investment with unclear ROIs
– The standard was not all that standard after all, with different vendors competing to get their specifications ratified.
– Most installations ended up using FCoE only as far as the TOR switches (and native FC from there!)
– FCoE doesn’t have the flexibility of iSCSI (or NFS)
– Storage vendors (and customers) were not really interested in adding another storage protocol to support.
– …
In the meantime, iSCSI inherited most of the good stuff that came with FCoE (DCB, for example) and has slowly become much more appreciated by end users and vendors.
The standardization process for FCoE started in 2007. Now, eight years later, most vendors support only iSCSI and/or FC, while FCoE is nowhere to be seen in any future roadmap!
It has been of help!
Thanks to FCoE, Ethernet storage has grown tremendously. A few years back it was considered suitable only for secondary needs, mostly because it was associated with mid-range or low-end iSCSI storage and was often poorly implemented on the networking side.
Now it is considered a first-class citizen, and some vendors (SolidFire, for example) leverage it to build huge infrastructures. Not to mention that all VSAs and hyperconverged products are based on Ethernet communication protocols!
Closing the circle
Would you buy FCoE today? Cisco partners don’t even use it: EMC maybe, but most NetApp FlexPod installations I’ve spotted in the field primarily use NFS, and others like Pure or Nimble don’t support it at all!
FCoE failed miserably… and Cisco lost a chance to control the whole datacenter (it can’t always be a success!).
From this point of view, today Cisco is the market leader for datacenter blades and networking, but they have a very poor standing where storage is concerned, despite the fact that they aren’t doing that badly with FC switches and directors.
It’s time for them to look around, forget FCoE (and other false steps, like Invicta?!), and start building a serious storage portfolio… especially now that they are no longer tied to EMC.
Perhaps there’s another way to look at FCoE, not as a product but as a catalyst for business change.
From 2007 to 2013, FCoE turned every sales discussion for networked block storage (server, SAN, storage), worldwide, into a discussion where just bringing Fibre Channel, or just bringing Ethernet, wasn’t good enough. This changed the sales dynamic to favor not just Cisco products but Cisco’s direct sales force and resellers.
FCoE also redirected the entire discretionary investment (and then some) of the Fibre Channel industry (server HBAs, SAN switches, disk arrays and other storage devices) for that same period. In some cases, companies which previously specialized in either Ethernet or Fibre Channel were combined by very disruptive M&A in order to have all the skills required to succeed building, selling, and servicing FCoE products.
In the end, FCoE turned out to be a very cost effective edge (last hop) (Access layer) for Fibre Channel networks. It was also the catalyst for my career shifting from Storage to Networking. In those two ways, FCoE was a big success!
@FStevenChalmers
(speaking for self, not for employer, which happens to be HP)
Steve,
thank you for commenting.
I agree that it has enabled consolidation on Ethernet at all layers, but you are also confirming that most installations ended at the top of the rack.
I think Object Storage (accessing file-like objects in a flat address space using http: over TCP/IP over Ethernet), as well as Microsoft’s SMB Direct (CIFS/RDMA) and related server technologies, have done far more to mainstream storage over Ethernet than FCoE ever did.
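To make “file-like objects in a flat address space over http:” concrete, here is a minimal sketch of the access pattern. The endpoint, bucket and key names are made up, and real S3/Swift requests also carry authentication headers (signatures or tokens), omitted here for clarity:

```python
# Object storage access in a nutshell: each object lives at a flat URL
# (bucket + key), and plain HTTP verbs replace the POSIX file API.
# Endpoint, bucket and key below are hypothetical.
import requests

ENDPOINT = "http://objects.example.com"          # hypothetical S3/Swift-style endpoint
BUCKET, KEY = "backups", "2015/03/db-dump.tgz"   # flat namespace: no real directories
url = ENDPOINT + "/" + BUCKET + "/" + KEY

# PUT uploads the whole object; there is no seek/append as with a block device or file.
with open("db-dump.tgz", "rb") as f:
    requests.put(url, data=f)

# GET retrieves it again; HEAD would return only the metadata.
obj = requests.get(url)
obj.raise_for_status()
print(len(obj.content), "bytes retrieved over plain HTTP/TCP/Ethernet")
```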
That’s not to understate the role of iSCSI a decade ago after Microsoft released its software initiator (iSCSI driver for Windows that did not require specialized iSCSI offload hardware in the NIC).
NVMe over Fabric, where the fabric is Ethernet with RDMA, looks intriguing as a next generation opportunity…but the storage business is brutal with customers choosing to adopt very few storage connection/networking technologies.
@FStevenChalmers
(again, work for HP, speaking only for self)
Does that mean FCoE has joined FC as being declared dead?
Granted, they are both still alive and being deployed in different environments, particularly if you look past the vendors mentioned above, so one is no more dead than the other. On the other hand, perhaps they both really are finally dead for real, or at least for now ;)…
The storage business is very different; it operates on a very different time scale than most of the technology industry.
It took a decade from the time we shipped the first Fibre Channel product until the time Fibre Channel SANs were universally installed in nontrivial enterprise data centers worldwide. Even if all new development for Fibre Channel stopped tomorrow, or if something better for shared block storage on a network came along, it would take two decades for the use of Fibre Channel to fade away. It’s simply too disruptive, and too risky, to an enterprise data center running hundreds or thousands of applications to one day suddenly decide to switch to a different storage ecosystem.
That having been said, Fibre Channel was not the winner of two very significant design-ins in the last decade:
First, the internet data centers all chose architectures different from Fibre Channel, generally installing disk drives in individual servers, running software alongside those disk drives, and having that software communicate with higher-level semantics (not block storage) over Ethernet. For example, Amazon primarily uses object storage.
Second, Microsoft similarly chose to pull individual disks back into its servers, (to oversimplify) using RDMA assisted file access to achieve block storage performance in the simpler to manage file paradigm.
It will be interesting to see what happens over the next few decades, given that the host software stack used to access Fibre Channel (or for that matter iSCSI) takes longer to execute than the I/O does on an optimally implemented SSD. This is where NVMe over Fabric (see the SNIA webcast from late 2014, or Intel’s demo at IDF a few months earlier) becomes a contender.
Thanks for the deja-vu moment. Remember the old HP quarter-speed FC (e.g. HP-FL) drives, as well as those from others, including Sun, not to mention the SSA camp, etc… Ah, that was an interesting time, back in the days before full-speed FC, around ’95-’96, before 1Gb FC had found its feet with FC-AL, not to mention the future FC-SW…
Btw, I assume you realize that not all of AWS is object-storage based. Granted, S3/Glacier is a big part, but there are also EBS (block), now EFS, not to mention all of the RDS services as well as the in-instance storage (think of it as cloud DAS ;)). Also keep in mind that object storage is usually deployed on top of a filesystem, which is on top of some form of block storage, unless you are doing something very new with one of the stacks that use the Seagate (STX) Kinetic stack and Kinetic drivers, which, while they exist, are rather rare…
Wow, that’s been a while, deja vu indeed. Brings back memories of my colleagues in Canada who developed the 1/4-speed Fibre Channel switch. Interesting times.
Yes, Amazon now offers block and file in addition to object storage, and that is as it should be.
The point of object storage is to be simple and cheap, both from the server-using-it perspective (it uses the http over TCP/IP over Ethernet stack) and from the storage system perspective (it leaves a lot of the traditional complexities of storage systems behind, so that commodity hardware can be used instead of the relatively low-volume, purpose-built hardware traditional storage systems use). Yes, it is possible to put an object interface over a traditional storage system, and among enterprise customers there are frequently situations where that is the right choice. Like NetApp supporting block over file for the last few decades.
ceph is my poster child for object storage, and last I heard ceph used object as its base, and implemented both block and file over object, not the other way around.
@FStevenChalmers
(speaking for myself only, happen to work at HP)
@FSteven actually Ceph can present objects and, like Swift, S3 and others, stores those objects inside files, which sit on top of a filesystem such as xfs, which in turn sits on top of block storage… With object storage it is just another abstraction and access layer, one that increases scale-out beyond traditional filesystems, which in turn increase scale-out beyond traditional block. You can see some more about Ceph, as well as other cloud and object storage, at http://www.objectstoragecenter.com
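For readers who have not seen Ceph’s native object layer, here is a minimal sketch of what a client-side interaction looks like through the python-rados bindings (RADOS is the object base that RBD block and CephFS file sit on). It assumes a reachable cluster, a standard ceph.conf, and a pool called “data”; the pool and object names are examples only:

```python
# Sketch of talking to Ceph at its native object (RADOS) layer.
# Assumes the python-rados bindings and an existing pool named "data".
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("data")
    try:
        # Objects are addressed by name in a flat namespace within the pool;
        # Ceph decides which OSDs actually hold the bits (and, per the
        # discussion above, those OSDs keep them in files on a local filesystem).
        ioctx.write_full("greeting", b"hello from RADOS")
        print(ioctx.read("greeting"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```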
Btw, EBS and RDS are not new at AWS, they have been around for some time ;)…
You are absolutely right that Amazon EBS has been around a long time. I think EBS is native block storage, not block-over-anything. The storage-systems-internals engineer in me is curious, but that’s for another day.
Swift and S3 are interfaces, not internal designs. (Well, Amazon’s S3 is a real implementation, but for the rest of the industry it’s an interface to be compatible with. Such is the storage business.) A cost-centric implementation behind that interface would place objects directly on disk drives in disk-centric white box computers, with a directory of objects in a server that could be either inline or off to the side. While elements of xfs could be leveraged into the directory service, a cost-centric implementation could not afford the additional layers of overhead and mapping that come from layering object over file over block.
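As a rough illustration of that “directory of objects” idea, here is a toy placement function that deterministically maps an object name onto a set of disk-centric nodes, so the directory can be computed inline rather than looked up. The node names and replica count are made up; real systems (Swift’s ring, Ceph’s CRUSH) add weights, failure domains and rebalancing on top:

```python
# Toy placement sketch: hash the object name to pick which white-box
# storage nodes hold its replicas. Node list and replica count are illustrative.
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]   # hypothetical commodity servers
REPLICAS = 2

def place(object_name, nodes=NODES, replicas=REPLICAS):
    """Return the nodes that should hold this object's replicas."""
    digest = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    start = digest % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

print(place("backups/2015/db-dump.tgz"))   # e.g. ['node-c', 'node-d']
```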
The basics of ceph are discussed in its Wikipedia article at
http://en.wikipedia.org/wiki/Ceph_(software)
The implementation of block storage over object in Ceph is described at (see “how it works”)
http://ceph.com/ceph-storage/block-storage/
The implementation of file storage over object in Ceph is described at (see “how it works”)
http://ceph.com/ceph-storage/file-system/
-steve
@FStevenChalmers
(speaks only for self, happens to work at HP)
@FSteve you are correct that “S3” is the object access method of the Simple Storage Service, also known as S3; likewise OpenStack Swift has a similar access method for its objects, aka “Swift”, that is similar to “S3”. Needless to say, there are several other object access methods besides just S3, Swift, etc…
As for objects and how they get stored on the back-end storage nodes, aka object storage devices (OSDs), aka the white-box or commodity servers: unless the specific software (e.g. Ceph, OpenStack, Riak, Lustre, etc…) has implemented the Seagate Kinetic drivers (now released as open source) and adapted its lower-level I/O to use them along with the associated Kinetic drives, the underlying storage nodes are still using block storage, which generally has a lightweight filesystem (e.g. ext3/ext4/xfs, perhaps btrfs) layered atop the block devices (usually JBOD, though they could also be RAID).
All of that, however, gets abstracted away from the upper layers and users. The object system’s storage management software simply maps the object data into one or more files, with the OSD and its associated software keeping track of where the objects are, what additional files an object may span if it is over a certain size, and so forth. There are lots of good tutorials and presentations on the internals of OpenStack Swift, Ceph, Google and even AWS, among others… Some of those links can be found at http://www.objectstoragecenter.com, which in turn can lead you to other related material, sites and info…
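As a concrete (and deliberately simplified) sketch of that mapping, here is what an object node might do under the covers: hash the object’s name into a directory path on a plain local filesystem (ext4/xfs/btrfs over block storage) and write the object body as an ordinary file there. The paths, naming scheme and layout are illustrative only, not any particular product’s on-disk format:

```python
# Toy OSD-side mapping: object name -> file on a local filesystem over block storage.
import hashlib
import os

DATA_ROOT = "/srv/node/disk0/objects"   # hypothetical mount point of one JBOD disk

def object_path(account, container, name):
    h = hashlib.md5("/".join((account, container, name)).encode()).hexdigest()
    # Fan the hash out into subdirectories so no single directory gets huge.
    return os.path.join(DATA_ROOT, h[:2], h[2:4], h + ".data")

def put_object(account, container, name, body):
    path = object_path(account, container, name)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(body)          # the "object" is now just a file on xfs/ext4
    return path

print(put_object("acct", "photos", "cat.jpg", b"\xff\xd8..."))
```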
As for Ceph, IIRC there is or was a project to port the back-end over to use Kinetic, though I’m not sure where that currently stands. Otherwise, the last time I used it and looked in depth, there was a lightweight filesystem under Ceph that sat on top of the block storage (you can use JBOD or hardware/software RAID; it’s pretty flexible).
Here is some more on ceph including some architecture stuff…
Podcast I did with Sage Weil (creator of Ceph)
http://storageioblog.com/ceph-day-in-amsterdam-and-stage-weil-on-object-storage/
Some stuff pertaining to Ceph architecture and other related things and links:
http://storageioblog.com/ceph-day-amsterdam-2012-object-and-cloud-storage/
As for your comment about the implementation of file storage over object, it’s the other way around… As for putting a filesystem (full or FUSE) in front of S3 and others, that’s actually pretty easy: I have s3fs presenting some of my S3 buckets and sub-folders as regular mount points to my Ubuntu servers, and I can cp and do other things just like with any other fs. There are many other tools as well…
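For comparison with the s3fs “cp into a mount point” workflow above, the same round trip can be done at the API level. A minimal sketch using the boto3 SDK, assuming AWS credentials are configured; the bucket and key names are made up:

```python
# API-level equivalent of copying a file in and out of an S3 bucket,
# without mounting it as a filesystem. Bucket/key names are hypothetical.
import boto3

s3 = boto3.client("s3")
s3.upload_file("report.pdf", "my-example-bucket", "reports/report.pdf")
s3.download_file("my-example-bucket", "reports/report.pdf", "report-copy.pdf")
```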
As for AWS, here is a primer on S3, EBS and some other things that may shed some additional light…
http://storageioblog.com/cloud-conversations-aws-ebs-glacier-and-s3-overview-part-i/
Also, since you have an interest in objects etc, check this out… While there is some AWS stuff, there is also stuff that applies to GCS among others…
http://storageioblog.com/s3motion-buckets-containers-objects-aws-s3-cloud-emccode/
Cheers gs
Ah, I need to be more careful choosing words. By “block” I meant the innards of a traditional disk array, and by “file” I meant either the innards of a NetApp box or a cluster file system (Lustre, etc.), a category in which I tend (incorrectly) to put xfs.
Yes, disk drives are linear arrays of blocks, and yes the low level OSD can use whatever tools are convenient (and simple, and lightweight) to map http: requests to locations on its internal disks.
I was simply trying to exclude object storage systems built as (and with the overhead and costs of) extensions of traditional disk arrays and/or file servers.
Thank you for the pointers! We live in an interesting world, and the hyperconverged systems which are emerging, first at data center scale (a la Facebook or Google), and as we speak at smaller scales, dissolve traditional silo walls and product boundaries in “interesting” ways.
-steve
@FStevenChalmers
@FSteven no worries, indeed we have been living in interesting times for server storage I/O hardware and software for a few decades now with more to come… 😉
FCoE can be seen as a success in the sense that it is what is used inside a UCS domain.
If you have UCS Fabric Interconnects, you are using FCoE even if you don’t know it.