The butterfly effect: a chaotic effect created by something seemingly insignificant; the phenomenon whereby a small change in one part of a complex system can have a large effect somewhere else.

At our Insight event in Berlin last week, NetApp made a whole bunch of announcements in front of over 3,500 of our customers, partners and employees. Sitting in the press briefing room reviewing the announcements, we came to a rather innocuous one: the launch of 3.84TB TLC SSDs for our All Flash FAS (AFF). Of all the announcements, a simple capacity increase for an SSD has to be one of the most basic, right? I mean, it happens like clockwork a couple of times a year.

The group moved on and began discussing all of the other new things, but I couldn't get past this one. There was something niggling in my head. I kept asking myself: of the new "only-flash" array vendors, which ones are using these drives? Pure? Nope; it has taken them almost 12 months to go from 1TB to 2TB. EMC XtremIO? Not yet; they have only just announced support for 1.6TB SSDs.

After a number of conversations with colleagues, I found out there is a fundamental reason why. Their myopic view of "only-flash" is now becoming the inhibitor to something as fundamentally simple as adopting a new drive capacity. If you create a single large pool of storage and apply your efficiency features globally across that pool, you need to keep a very large amount of metadata in memory. As SSD sizes increase (essentially doubling each time), you need to massively increase the amount of memory inside the storage controllers to keep track of it all. My colleague Dimitris has written a good, detailed description of the different technical approaches here.
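To make that concrete, here is a rough back-of-the-envelope sketch. The block size, metadata bytes per block and drive count below are my own illustrative assumptions, not any vendor's actual figures; the point is simply that if metadata must live in memory, it grows linearly with raw capacity, so every doubling of SSD size roughly doubles the controller memory needed to hold it.

```python
# Rough, illustrative sketch only: why globally applied dedupe metadata grows
# with pool size. Block size and metadata bytes per block are assumptions for
# illustration, not any vendor's actual figures.

TB = 10**12  # decimal terabyte, as drives are marketed


def metadata_gib(num_ssds, ssd_capacity_tb, block_bytes=8 * 1024, meta_per_block=32):
    """Estimate in-memory metadata (GiB) for a globally deduplicated pool.

    Every block in the pool needs a fingerprint/pointer entry so it can be
    compared against every other block, so metadata scales linearly with
    raw capacity.
    """
    raw_bytes = num_ssds * ssd_capacity_tb * TB
    blocks = raw_bytes / block_bytes
    return blocks * meta_per_block / 2**30


for cap_tb in (1, 2, 3.84, 16):
    print(f"24 x {cap_tb:>5}TB SSDs -> ~{metadata_gib(24, cap_tb):,.0f} GiB of metadata")
```

Whatever the exact per-block figures are in a real array, the shape of the curve is the same: bigger drives mean proportionally more metadata, and if the architecture insists on holding it all in controller memory, drive capacity becomes gated by how much RAM the controllers can take.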
For Pure this appears to be proving a challenge: their adoption of new SSDs lags well behind the launch of new capacities. The industry is now at 3.84TB while they are stuck on 2TB drives, which they only just announced. And when 16TB SSDs become available for enterprise arrays, no amount of global efficiency is going to make up for the fact that you have to buy eight SSDs instead of one.
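On that drive-count point, a quick and purely illustrative calculation (192TB raw and 24-slot shelves are just example numbers, not any specific product) shows how much hardware it takes to reach the same raw capacity at different drive sizes:

```python
import math

# Purely illustrative: SSD and shelf counts needed to reach a fixed raw capacity
# at different drive sizes. 192TB raw and 24 slots per shelf are example numbers.


def drives_and_shelves(raw_tb, drive_tb, slots_per_shelf=24):
    drives = math.ceil(raw_tb / drive_tb)
    shelves = math.ceil(drives / slots_per_shelf)
    return drives, shelves


for drive_tb in (2, 3.84, 16):
    drives, shelves = drives_and_shelves(192, drive_tb)
    print(f"{drive_tb:>5}TB drives: {drives:>2} SSDs across {shelves} shelf/shelves")
```

Twice the drive capacity means half the slots, shelves, power and rack space for the same raw capacity, and that is exactly what a global efficiency ratio cannot claw back.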
If you own Pure arrays, or are considering them, you should absolutely be asking for a comprehensive and accurate roadmap of their plans to address this.
For XtremIO it's even worse. They suffer from the same issue as Pure and have only just announced support for 1.6TB SSDs. Again, when you look toward the future, you should definitely ask them for an accurate roadmap of when new capacities will be adopted.
But (and it's a HUGE, gob-smacking, oh-my-word kind of but), if you buy a couple of EMC XtremIO X-Bricks today with 1.6TB drives and decide to add more X-Bricks in the future, they have to be identical. Yes, that's right: identical. Even if they introduce a slightly higher-capacity drive in the near future, you cannot use it without starting a new cluster.
When I was discussing this with some colleagues I had them repeat it several times as I couldn’t quite believe what I was hearing.
Because of our AFF design choices, NetApp can adapt rapidly. 3.84TB SSDs? You got it. A higher capacity next year? No problem.
We are now seeing that the narrow architectural scope of Pure and XtremIO is rapidly becoming a very significant limitation.
I'm sure some of you reading this are thinking, "Well, he would say that, he's from NetApp, right?" Yes, I am, so you know the source. And don't get me wrong: there is some very clever technology in these arrays. But this is one of those significant limitations that gets glossed over in the flurry of the sale, and you should know about it up front rather than finding out 12 to 18 months after you've made your AFA purchase.
Talk to us to find out how an architectural vision that spans flash, disk and cloud delivers the incredible performance, efficiency, adaptability and huge range of enterprise features available in our AFF platform. Don't let your next AFA purchase leave you exposed to the impending 'butterfly effect'.