Hybrid has evolved. When we consider hybrid in terms of storage, it's very often in the context of mixing different media types inside the same array. That's fine for many traditional workloads, but in a world of scale-out and cloud scaling, does it still make sense? Some would say that to truly make use of Flash you need a dedicated architecture, a dedicated platform, a new silo of storage in your environment that doesn't integrate with anything you already have and comes with a whole host of compromises in terms of functionality. But why should this be the case? After all, Flash is just another medium.
The industry has spent years bringing technologies together, using advanced virtualisation to create a seamless delivery platform for compute resources across hardware with very different physical characteristics. Workload mobility ensures Quality of Service and lets workloads be relocated across the underlying physical infrastructure to take advantage of different hardware characteristics. So why treat storage differently?
Of course there are many cases where simply installing a hybrid array makes sense, but does it hit the extremes? The All Flash Array (AFA) vendors will tell you that the only way to get great performance is an AFA that was 'built from the ground up' for Flash; the hybrid vendors will tell you they have an array that's good enough for every workload, from high performance to high capacity and low cost. But why should you have to choose?
This is where I believe Hybrid has evolved: we need to think of it 'outside the box' (see what I did there) and consider it as a completely integrated environment that can deliver everything from extreme-performance, Flash-only nodes to low-cost, high-capacity nodes. This is exactly what our clustered Data ONTAP operating system enables; think of it as the virtualisation layer that spans the controllers. Rather than just mixing media types inside the array (which we also do extremely well), why not have controllers optimised to deliver the extremes, all brought together as a single cohesive storage platform?
For example, if you want extreme performance, our AFF8080 (All Flash) system delivers it in abundance. But maybe you're also looking for a high-capacity, lower-cost environment; that can be delivered by our FAS8020 platform using disk. Yes, that's right, I'm suggesting two different platforms. But here's the key part: what if they could exist as a single hybrid cluster, where workloads seamlessly move across them to match the performance required at the time, with quality of service, full application integration, one common and consistent set of APIs, one mechanism for data replication, and one way to manage and monitor them?
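As a rough sketch of what that mobility looks like in practice, the clustered Data ONTAP CLI can apply a QoS ceiling to a workload and then move its volume non-disruptively between aggregates on different controllers. The storage VM, volume, policy-group and aggregate names below (vs1, projects, dev_pg, aggr_sata) are hypothetical examples, not anything from a real system:

```
# Hypothetical names throughout: vs1, projects, dev_pg, aggr_sata.
# Cap the workload's throughput with a QoS policy group.
qos policy-group create -policy-group dev_pg -vserver vs1 -max-throughput 5000iops
volume modify -vserver vs1 -volume projects -qos-policy-group dev_pg

# Non-disruptively relocate the volume, e.g. from a Flash aggregate on
# an AFF node to a SATA aggregate on a FAS node in the same cluster.
volume move start -vserver vs1 -volume projects -destination-aggregate aggr_sata
volume move show -vserver vs1 -volume projects
```

The point of the sketch is that the application keeps running throughout; the move is an administrative operation inside one cluster, not a migration between silos.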
Now what if you could also extend this scalability to hyperscalers such as Amazon Web Services? Well, you can. Using NetApp Private Storage (NPS), or by installing our software-based Cloud ONTAP into an AWS instance, you can simply replicate any application, any virtual machine and any data from your storage environment straight into it for test or for development. Why not try it for yourself and see how simple it is to use?
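Because Cloud ONTAP runs the same operating system, the replication mechanism is the same SnapMirror you use on premises. A minimal sketch, assuming the on-premises cluster and the Cloud ONTAP instance are already peered; the SVM and volume names (vs1, vs_aws, projects, projects_dr) are hypothetical:

```
# Hypothetical: on-premises SVM 'vs1', Cloud ONTAP SVM 'vs_aws' in AWS.
# Create and initialise a data-protection mirror into the cloud instance.
snapmirror create -source-path vs1:projects -destination-path vs_aws:projects_dr -type DP
snapmirror initialize -destination-path vs_aws:projects_dr
snapmirror show -destination-path vs_aws:projects_dr
```

Once the baseline transfer completes, the cloud copy can be broken off and mounted for test or development work without touching production.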
A customer recently said to me, "Your All Flash FAS is fast, incredibly fast, but being able to use Cloud ONTAP with Amazon Web Services has revolutionised the speed at which we can develop." There's a great post from our Cloud Czar Val Bercovici on this topic.
This is the new Hybrid: no more silos, no more compromises, from Flash to disk to cloud. And it's just one aspect of our vision for a 'Data Fabric'.
Have you registered for NetApp Insight yet? This technical conference gives customers and partners a chance to dive deep into the trends and technology shaping the future of storage and data management, starting with how to build a Data Fabric. Learn how to meet your data challenges from NetApp engineers and thousands of the brightest IT innovators.