Schopenhauer was right: truth really does pass through three phases.
First, it is ridiculed.
Second, it is fiercely and violently opposed.
Third, it becomes self-evident.
One of the many great things about working in the IT industry, especially in storage, is that we’re constantly having to evolve our technology: data grows faster than the capacity of the media we store it on, performance demands push us towards new types of technology, and all the while companies expect to reduce the amount they spend on storage, not increase it.
I joined NetApp nine years ago because I saw a disruptive technology that forced people to think differently and always challenged the normal ways of doing things. In those early days we started by challenging the assumption that everyone had to have a SAN. We had NAS, and we knew it was simpler, cheaper and often faster than the storage people had been using, but getting people to accept this was a challenge. Sometimes it simply didn’t matter how many performance bake-offs you ran or benchmarks you submitted; you would often come across people who just weren’t interested in the facts and point-blank refused to consider any alternative. Our competitors also had an extremely vested interest in us not succeeding: most of them had very substantial SAN businesses to protect, and most of them didn’t have a NAS solution. But succeed we did, and it was the catalyst for every major storage vendor to launch their own ‘NetApp killer’ NAS solutions.
As we started to see that many of the application vendors only wanted to talk ‘block’, we realised that this was a big market we needed to be part of. Now, we could have bought a SAN vendor (there are always startups out there), but then we’d just have been another SAN vendor; maybe we’d have had one or two cool features, but would that have been enough to win? And think of how many of the incredible features we’d built into ONTAP we’d not have been offering. So we decided, once again, to challenge some assumptions. There was no definition of what a LUN actually is, so why couldn’t it be a container sitting on top of our Write Anywhere File Layout (WAFL)? That way you get an abstraction layer and the ability to add in all sorts of capabilities, such as thin provisioning and deduplication; in fact, all of the incredible efficiencies we’ve introduced over the last few years have been possible because of this decision. We coined the term ‘Unified Storage’, NAS and SAN on the same device, and the uptake was huge.
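To make the idea concrete, here is a minimal sketch of why a LUN-as-a-container gives you thin provisioning almost for free. This is my own toy illustration, not NetApp’s WAFL implementation; the class name and 4 KiB block size are assumptions made purely for the example. Because the container only records blocks that have actually been written, a large LUN consumes almost no space until data lands in it.

```python
BLOCK_SIZE = 4096  # hypothetical block size for this illustration


class ThinLun:
    """A block device emulated as a sparse mapping of block number -> data."""

    def __init__(self, size_blocks):
        self.size_blocks = size_blocks
        self.blocks = {}  # only blocks that have been written consume space

    def write(self, block_no, data):
        if not 0 <= block_no < self.size_blocks:
            raise IndexError("write beyond LUN size")
        self.blocks[block_no] = data

    def read(self, block_no):
        # unwritten blocks read back as zeros, like a real thin-provisioned LUN
        return self.blocks.get(block_no, b"\x00" * BLOCK_SIZE)

    def provisioned_bytes(self):
        # what the host sees: the full advertised size
        return self.size_blocks * BLOCK_SIZE

    def consumed_bytes(self):
        # what the array actually uses: only written blocks
        return len(self.blocks) * BLOCK_SIZE


lun = ThinLun(size_blocks=262144)    # a "1 GiB" LUN
lun.write(0, b"\xab" * BLOCK_SIZE)   # a single 4 KiB write
print(lun.provisioned_bytes())       # 1073741824
print(lun.consumed_bytes())          # 4096
```

The same indirection is what makes capabilities like deduplication possible: once a LUN is just data behind a mapping layer, the layer is free to point many logical blocks at one physical copy.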
The noise from our competitors was louder, though. Unified Storage was ridiculed; stories came out decrying the approach, stating that NetApp LUNs weren’t ‘real’ LUNs (I still have no idea what that argument actually meant) and that Unified Storage was niche and could only ever be used in specific environments. There really was violent opposition to this approach. But as it turns out, ONTAP adapted incredibly well to supporting block and file, so much so that other vendors quietly introduced filesystems under their LUNs too; it became well accepted that a layer of abstraction really is the only way to introduce efficiency capabilities and more. The final nail in this pointless argument came, ironically, from the company that started it in the first place. It’s very hard to argue that treating LUNs as files on an abstraction layer is wrong when, on the other hand, you are taking physical servers, their operating systems and applications, packaging them up in a file and sitting it on top of, guess what, a filesystem. Yes, VMware finally and very conclusively proved that an abstraction layer was a good thing.
ONTAP has proved itself time and time again to be an incredibly flexible architecture. Coming from a NAS background effectively means we’ve always been virtualised, and contrary to what other vendors have said, this was not only a good thing, it has fundamentally enabled many of the capabilities we have today, ultimately delivering the scale-out architecture that is Clustered Data ONTAP (cDOT).
So, to Flash. We’ve been shipping hybrid arrays for over five years now, with FlashCache at the front of the controllers and FlashPools (SSD combined with disk) in our shelves, and in those five years we’ve been constantly tuning and improving the way we write to and read from flash. However, if you listen to some of the all-flash vendors out there, they’d have you believe we just stuck some flash in and hoped it worked, which of course couldn’t be further from the truth. We’ve shipped 70 petabytes of flash to date and counting, and in that time we’ve learnt more about flash than probably anyone else in the storage industry and have optimised the way cDOT uses it.
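The general idea behind a hybrid array can be sketched very simply: a small, fast flash tier sits in front of a larger, slower disk tier and absorbs repeat reads. The toy below is my own illustration of that caching pattern, assuming a plain LRU policy; it is not NetApp’s FlashCache or FlashPool code, and the class and parameter names are invented for the example.

```python
from collections import OrderedDict


class HybridReadPath:
    """A small fast cache (the 'flash' tier) in front of a slow backing store."""

    def __init__(self, disk, cache_capacity):
        self.disk = disk                      # slow tier: block number -> data
        self.cache = OrderedDict()            # fast tier, kept in LRU order
        self.cache_capacity = cache_capacity
        self.hits = 0
        self.misses = 0

    def read(self, block_no):
        if block_no in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_no)  # mark as most recently used
            return self.cache[block_no]
        self.misses += 1
        data = self.disk[block_no]            # slow path: go to disk
        self.cache[block_no] = data           # promote the block into flash
        if len(self.cache) > self.cache_capacity:
            self.cache.popitem(last=False)    # evict the least recently used
        return data


disk = {n: f"block-{n}".encode() for n in range(100)}
path = HybridReadPath(disk, cache_capacity=10)
path.read(5)
path.read(5)   # second read of block 5 is served from the fast tier
path.read(7)
print(path.hits, path.misses)  # 1 2
```

Real arrays of course do far more (write handling, media-aware placement, wear management), which is the point of the paragraph above: making flash behave well takes years of tuning, not just bolting on a cache.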
cDOT has yet again proved that it can adapt to the new requirements that flash demands, and if you look at the benchmarks (to be published shortly; ask your SE for details) for our All Flash FAS systems, the performance and price are extremely competitive when compared to other vendors in the all-flash market, in many cases faster than arrays apparently built for flash. But, and it’s a HUGE but, you also get ALL the features you need and expect from an enterprise storage array: all of the efficiencies, the protection, the scalability and, probably most importantly, an array based on hundreds of thousands of installed units and a support organisation built up over the last 20 years.
We recognise that a smaller number of companies need extreme performance, and for them we have our extremely successful EF550, delivering over 450,000 IOPS in a system engineered from over 750,000 installed units. For the future, we have some really exciting capabilities being built into our FlashRay solution. But for most people, cDOT powered by hybrid FAS or All Flash FAS will deliver more performance than they need; however, I suspect our competitors will fiercely and violently oppose this.