Imagine my surprise when Pure announced that they will be retrofitting a new technology into an array with software that wasn’t originally designed for it.
Pure’s marketing message has always been to deride any storage device that was not ‘Built from the ground up’, and to claim that anything retrofitted to support evolving technology is inferior.
There are so many ways that this is clearly inaccurate, but that’s for another post.
Now Pure is dealing with the reality of the marketplace. Technology changes, and a good company is always adapting, adding, and yes, even retrofitting new ways to solve real-world problems.
This is a Pure post and is likely to upset a few of the ‘Puritans’, but it is NOT an attack on their technology, which I’m sure is good at what it does, or on its people, who clearly enjoy working there. I just want to clear up something so that we can take it for what it is (marketing spin) and move on. That something is their definition of a ‘Legacy Vendor’.
As is the norm in the data management business, every few years the media changes. We’ve had many types of disk drive over the last 40+ years, with still more coming with the rise of SMR and other new formats. We’ve also had many different types of SSD: SLC, MLC, TLC, and now QLC, which is starting to make its entrance into the market.
At NetApp, we’ve adapted our platforms. We’ve re-written IO paths in the code to optimise them as the media has changed. We’ve demonstrated that not only can we adapt very successfully, but that we can do so whilst delivering outstanding performance, functionality and endurance. And yes, we retrofitted our platforms for Flash.
Pure would have you believe that this is somehow a truly terrible thing and means that the technology can never be as good as an array that was….yes you guessed it…’Built from the ground up’.
Pure has announced that they are introducing NVMe into their arrays. This is not a surprise from a technical point of view; it’s a logical progression, and something we at NetApp also introduced into the latest generation of systems that we recently launched.
My surprise is Pure’s now inconsistent claim that maybe being ‘Built from the ground up’ isn’t necessarily the best way to support customers. NVMe isn’t a new type of media; it’s a communications and interface protocol, which means significant work has to be done to adapt your storage software to use it. This is much like the work we did to optimise performance when we introduced SSDs into our arrays. And Pure has zero experience retrofitting anything.
So, to my Puritanical friends: you created a definition of ‘legacy’ to throw doubt on any technology that wasn’t originally designed to support something new. Well, you have just become your own definition of a ‘Legacy Vendor’.
Personally, I feel that equating ‘legacy’ with ‘crappy’ dismisses the credibility gained from many years of delivering technology solutions, and the massive wealth of experience that this enables you to build up. The challenge all vendors face is to use this experience to make positive steps forward into the future: to design new technologies, new capabilities, even new business models, and on occasion to acquire startups, as we did with SolidFire. This ensures that they keep pace with, or exceed, what the market demands.

When you’re the first to deliver something new, of course you can ride the wave; you can claim ‘Built from the ground up’ and dismiss others as legacy in a somewhat derogatory way. But then time passes, your new thing is no longer a new thing, and you start to add capabilities that it was never originally intended for. This is your about-turn moment!
Embrace your legacy; you can no longer use it as a negative against others. Your new challenge is to show how you can continue to enhance your products and solutions to meet new requirements that they were never really designed for. You now have a whole new challenge and opportunity ahead of you, and I can absolutely guarantee that it is going to be difficult.
We should know. We’ve been doing this for a long time now.
Let the mayhem commence
Hard to cling to “built from the ground up” for a specific technology. However, if you want to contort yourself to say you really meant “built from the ground up to adopt new technology”, well, I guess you can claim you were built to be perpetually cutting edge. I’m pretty sure that’s NOT what I heard amid the hyperventilating of the “flash changes everything” era. Admittedly, NetApp was on the opposite end of the spectrum with “flash is just another media”, but that at least proved to be the more flexible position to take. NetApp really didn’t have to make any cultural compromises to ramp up our flash business (now growing at 4X our nearest competitor). Experience is a great teacher.
In their press release we can read: “In anticipation of the now inevitable shift to NVMe, Pure Storage engineered FlashArray//M to be NVMe-ready from the beginning, starting more than three years ago. Every FlashArray//M ships with dual-ported and hot-pluggable NVMe NV-RAM devices, engineered by Pure – an industry first when released in 2015. Additionally, the FlashArray//M chassis is wired for both SAS and PCIe/NVMe in every flash module slot, which enables the use of SAS-connected flash modules today as well as a transition to NVMe in the future.”
So, could we say that they anticipated that?
Or do you think that you cannot share SAS and NVMe in the same platform?
Firstly, thanks for your comment. NVMe is a broad topic. One aspect is ensuring that you have the ‘wiring’ in place behind your controllers, that you have upgraded to a release of Linux that supports NVMe, and that the BIOS enables it as a boot device. Pure have done a nice job of upgrading their platform and software to support this, though I’m pretty sure there will still need to be optimisations in the IO path in order to unlock the additional performance. But the bigger question is how you unlock that performance at the front of the controller; I think the real promise of NVMe will only be seen once you have the support and ability to connect your controllers to an NVMe fabric.
So it’s a good start, but there is a lot more work to do to actually realise the benefits. And if you add NVMe connectivity to the front of your systems, then you are definitely retrofitting the capability onto something that wasn’t designed for it, which was the main point I was trying to make in this post.
NetApp talking out of both sides of its mouth, once again.
And here’s SolidFire FlexPod, which, according to NetApp’s own marketing material, was ‘Built from the ground up’.
You can’t have it both ways…
This feels like arguing that if I’m saying “cooking from scratch is a good thing” then I’m being hypocritical unless I raise my own cattle.
Let’s also take a look at the more detailed assertion about building array software. Are you really asserting that the difference between a 15,000 rpm disk and an MLC SSD is comparable to that between 12Gb SAS and NVMe?
Speeds and feeds vs. solutions. Do customers really care what technology is used to interconnect SSDs with the controllers? I think they should not. They should find a vendor with a solution that solves their problem, and trust that this vendor will absorb new technologies into its systems as soon as it makes sense from a price and solution point of view.
The question should be: how do you manage your data? This includes backup, DR, cloning, data movement across clouds, etc.
Who cares about how the electrons move? That’s why we buy from a vendor instead of building it ourselves. A customer should care about the total solution and the future options to adapt.