Today Nutanix provides inline dedupe across RAM and flash to get the most out of your performance tier. To see how much space you're saving, go to the Analysis section in the Prism UI.
Add an Entity Chart for the cluster
Before being hired at Nutanix, I discussed Medusa and Curator as two of the components that allow the Nutanix Complete Cluster to scale. After a week and a half of technical training, I can see why I didn't even attempt to explain Stargate until now. In training, the Stargate section took over two and a half hours just to nail down its inner workings.
Narayan Venkat, former VP of VMware's Cloud Business, is the newest person to come over to Violin Memory. Narayan joins a notable list including Garry Veale, former VP of HP's EMEA StorageWorks Division, and Jonathan Goldick, former CTO of OnStor. These new individuals, plus the talent from Fusion-io that came when Donald Basile, CEO of Violin Memory, took over in 2009, are forming a mercenary team of flash performance.
Narayan's mission as the new VP of Product Management at Violin will be to bring feature-rich parity to the hypervisor world. It's obvious that vSphere is at the top of the list for Violin, but when I asked about Hyper-V, both Narayan and Matt Barletta, VP of Product Marketing, were quick to answer with a yes. Both Matt and Narayan were very excited when talking about Violin's grass roots in database performance and their plans to tackle Tier I applications that were left sitting on physical hardware.
Toshiba was the inventor of NAND flash and has made a significant investment in Violin Memory. This gives Violin access to a supply chain and the tools to make sure their flash can work at scale. Adding extra capacity to your arrays to handle wear levelling will not work at this scale. It's also worth noting that Toshiba hadn't invested in an American company in eight years; I think that says a lot about what Violin is accomplishing.
Violin is leaving two empty x86 sockets on their storage arrays that allow features to be added to the arrays without affecting performance. Today you only get raw speed, but snapshots, VAAI, and replication are coming down the pipe. It also brings the opportunity to bring the application to the data. Lately all efforts have been about bringing the IO to the compute, so I like the change in direction. The empty sockets will also allow for dedupe, though I'm not sure whether it will be inline or not. All said, it will be interesting to see what develops.
Violin isn't the only one doing flash today, but strong leadership and financial backing can go a long way. If Violin can deliver on the hypervisor, they might have a chance to unseat the old boys' club at the top of the storage stack. I look forward to seeing what Narayan can deliver at his new post. Narayan had VMware's EUC vision all but memorised when we talked, so I am hoping that Project Horizon and Octopus have a small home on some flash array somewhere.
Pure Storage's main message is that they want to replace traditional storage arrays with an all-flash array. Pure Storage is able to deliver roughly the same cost per GB as traditional HDD-based arrays by using:
• Commodity MLC flash
• Inline Dedupe
• Inline Compression
• Thin provisioning
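The arithmetic behind that claim is worth spelling out. Here is a back-of-the-envelope sketch; all of the dollar figures and reduction ratios below are my own illustrative assumptions, not Pure Storage's numbers:

```python
# Illustrative sketch (assumed numbers, not Pure Storage's): how inline
# data reduction can bring raw flash cost per GB near HDD cost per GB.
raw_flash_cost_per_gb = 3.00   # assumed commodity MLC $/GB
raw_hdd_cost_per_gb = 0.60     # assumed 15K/10K HDD $/GB

dedupe_ratio = 3.0             # assumed inline dedupe savings
compression_ratio = 2.0        # assumed inline compression savings

# Reductions multiply: each logical GB needs 1/6 of a physical GB here.
reduction = dedupe_ratio * compression_ratio
effective_flash_cost = raw_flash_cost_per_gb / reduction

print(f"Effective flash $/GB: {effective_flash_cost:.2f}")  # 0.50
print(f"Raw HDD $/GB:         {raw_hdd_cost_per_gb:.2f}")   # 0.60
```

With those assumed ratios, deduped and compressed flash actually lands slightly below the HDD price per usable GB, which is the whole pitch. Thin provisioning adds further savings on top by not burning physical space on unallocated volumes.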
Pure Storage is able to deliver over 100,000 IOPS per controller pair. If you want to add more capacity, you can add up to 4 trays of disks. The trays are connected via 6 Gb/s SAS, with everything dual-pathed. If you need to add performance, you have to add controllers. The controllers are connected via 40 Gb/s InfiniBand and you can connect up to 8 of them. When the controllers are connected, they're clustered and provide access across all ports on all controllers at any time. Each controller has 24 CPU cores and 48 GB of working DRAM. Interestingly, the NVRAM is stored on each controller in a redundant pair, so the controllers can go away and you would still be able to recover.
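To put the scale-out numbers in perspective, here is a naive sketch of aggregate IOPS at the 8-controller maximum. The quoted figure is 100,000+ IOPS per pair; the assumption that performance scales near-linearly across pairs is mine, not something Pure Storage claimed:

```python
# Back-of-the-envelope scale-out sketch. IOPS_PER_PAIR comes from the
# quoted spec; the linear-scaling model is an illustrative assumption.
IOPS_PER_PAIR = 100_000
MAX_CONTROLLERS = 8          # clustered over 40 Gb/s InfiniBand

def aggregate_iops(controllers: int, scaling_efficiency: float = 1.0) -> int:
    """Aggregate IOPS if each controller pair scaled with the given efficiency."""
    pairs = controllers // 2
    return int(IOPS_PER_PAIR * pairs * scaling_efficiency)

print(aggregate_iops(MAX_CONTROLLERS))        # 400000 at perfect scaling
print(aggregate_iops(MAX_CONTROLLERS, 0.8))   # 320000 at 80% efficiency
```

Even at a pessimistic 80% clustering efficiency, a full 8-controller cluster would land well above what a single pair delivers, which is the point of separating the capacity knob (trays) from the performance knob (controllers).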
Can you turn Dedupe off?
Nope. I was thinking that if you could turn dedupe off, you could maybe increase performance. Some of the other all-flash arrays offer higher IOPS, but they're also not trying to replace my spinning hard drives. Keep in mind, their goal is to replace my 15K and 10K drives in the datacenter, not the massive SATA drives. I tip my hat to Pure Storage, because dedupe has been the failing of more than one all-SSD array vendor. They might not get the 200,000 IOPS that some of their competitors are tossing down on the RFP papers, but they do have dedupe fully working.
Dedupe and compression are necessary for their architecture to work; they:
• Keep more of the data in DRAM & NVRAM
• Help with the longevity of the drives, since less data is written and moved around on the SSDs
• Reduce the data flowing on the interconnects, which could otherwise become the bottleneck
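The core idea of inline dedupe can be sketched in a few lines: hash each incoming block and only store data the array hasn't seen before. This is a minimal illustration of the general technique; Purity's actual block size, hash choice, and metadata layout aren't covered in this post, so treat every detail below as an assumption:

```python
# Minimal content-addressed dedupe sketch (illustrative, not Purity's
# actual implementation): unique blocks are stored once, keyed by hash.
import hashlib

class DedupeStore:
    def __init__(self):
        self.blocks = {}   # digest -> block data, stored exactly once
        self.refs = []     # logical layout: one digest per written block

    def write(self, block: bytes) -> None:
        digest = hashlib.sha256(block).hexdigest()
        # Only new, unique content costs physical capacity (and flash wear).
        self.blocks.setdefault(digest, block)
        self.refs.append(digest)

    def reduction_ratio(self) -> float:
        logical, physical = len(self.refs), len(self.blocks)
        return logical / physical if physical else 0.0

store = DedupeStore()
for block in [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]:
    store.write(block)
print(store.reduction_ratio())   # 2.0 -- four logical blocks, two physical
```

The bullets above fall out of this directly: duplicate writes never hit the SSDs (longevity), never cross the interconnect (bandwidth), and a smaller working set fits in DRAM and NVRAM.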
Impact for VDI
Having everything on SSD makes the architecture design pretty easy. While Pure Storage is not on the VMware HCL, they are close to dotting the i's and crossing the t's. When all is said and done, they will also have support for vStorage APIs for Array Integration (VAAI) – Atomic Test & Set (ATS). I would still create separate volumes to tier my data to be future-proof, but at least the operational impact will be non-existent if someone else makes a mistake creating a pool.
Having everything on SSD also allows you to set the max on the View Composer without worry. Check this article out for more information.
I would think you would have to redo your network design to take full advantage. Streaming your ThinApp packages could now take advantage of the fast IO on both the repository and the desktops. The 2-3 Gb/s you have on your blade interconnects for LAN access might be the bottleneck. I also see a huge upside when you go to use Project Horizon: your application repository will definitely not be the bottleneck.
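A quick unit conversion shows why the blade interconnect becomes the weak link: 2-3 gigabits per second is a lot less headroom than it sounds once expressed in megabytes per second (the comparison throughput below is an assumed figure for illustration):

```python
# Unit-conversion sanity check on the interconnect bottleneck claim.
def gbits_to_mbytes(gbits: float) -> float:
    # Decimal units, 8 bits per byte; protocol overhead ignored.
    return gbits * 1000 / 8

print(gbits_to_mbytes(2))   # 250.0 MB/s
print(gbits_to_mbytes(3))   # 375.0 MB/s

# An all-flash array can plausibly stream far more than that
# (assumed figure for illustration), so the LAN saturates first.
assumed_array_throughput_mbps = 1000.0
print(assumed_array_throughput_mbps > gbits_to_mbytes(3))   # True
```

In other words, a single desktop pool streaming ThinApp packages could saturate the blade uplinks long before the array breaks a sweat.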
It's obvious, but VDI was meant to be a green technology. There are lots of power and cooling costs to be saved in the datacenter.
Purity, the operating system behind Pure Storage, is what gives the array its secret sauce. It's worth pointing out that it fixes disk alignment issues. This is particularly interesting since all linked clones are misaligned. Good practice says to refresh or recompose often, but at least this would fix the issue instead of applying a band-aid. I thought the misaligned linked clone issue was fixed with View 5, but I don't see it in the release notes.
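For anyone who hasn't dug into why misalignment hurts: if a guest partition starts at an offset that isn't a multiple of the backend block size, each guest I/O can straddle two backend blocks, roughly doubling the work. A sketch of the arithmetic, with an assumed 4 KB backend block and the classic 63-sector (32,256-byte) legacy partition offset:

```python
# Why misaligned linked clones cost extra I/O: an unaligned request can
# straddle two backend blocks. Block size and offsets are illustrative.
BACKEND_BLOCK = 4096

def backend_blocks_touched(offset: int, length: int) -> int:
    """Count backend blocks a read/write at (offset, length) spans."""
    first = offset // BACKEND_BLOCK
    last = (offset + length - 1) // BACKEND_BLOCK
    return last - first + 1

# Aligned 4 KB read touches exactly one backend block...
print(backend_blocks_touched(0, 4096))       # 1
# ...but at the legacy 63-sector offset (63 * 512 = 32256 bytes),
# the same 4 KB read straddles two backend blocks.
print(backend_blocks_touched(32256, 4096))   # 2
```

An array that remaps or absorbs that misalignment internally turns the two-block case back into one, which is why fixing it in Purity beats recomposing desktops as a band-aid.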
The bad news is they still don't have replication, so you will have to figure out how to get your golden images and user profile information to your DR site. Replication is on the near-term road map, but this also leads to my next point: most people already have lots and lots of traditional storage arrays with replication. If I am a customer with replication on my old arrays, why not buy one of the claimed-faster SSD arrays that offer more speed and do the tiering myself? Customers will have to make a cost/operational-impact trade-off.
It's also fair to mention that Pure Storage is not GA yet. This young company looks to have some smart people at the helm, and it will be interesting to revisit them in the next couple of months. I will try to follow them on their journey and report back when Pure Storage + VDI hits the news.
I will post their video once the Tech Field Day crew has it ready to go.
Other thoughts around Pure Storage:
Please feel free to comment.
My travel, accommodations, and meals were paid for by the Tech Field Day sponsors. Often there is swag involved as well. Like me, all TFD delegates are independents. We tweet, write, and say whatever we wish at all times, including during the sessions.