Today Nutanix provides inline dedupe over RAM and flash to get the most out of your performance tier. To see how much space you're saving, go to the Analysis section in the Prism UI.
Add an Entity Chart for the cluster
Today Nutanix, with the help of our brilliant engineering team, released dedupe for RAM and flash, and a future release will allow dedupe for the capacity tier. This new feature is part of the 3.5 code release and is available to all Nutanix customers. The release has over 75 features, including a new REST API, a new UI, and a Site Recovery Adapter for VMware's SRM.
Dedupe will allow for even lower latencies, lower TCO for flash, and keep Nutanix in the lead on scale-out technologies. I want to talk about the importance of this feature in terms of the history of Nutanix, and the impact on the future of the data center.
Pure Storage’s main message is that they want to replace traditional storage arrays with an all-flash array. Pure Storage is able to deliver roughly the same cost per GB as traditional HDD-based arrays by using:
• Commodity MLC flash
• Inline Dedupe
• Inline Compression
• Thin provisioning
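The cost-parity claim behind that list is simple arithmetic: data reduction divides the raw cost per GB. Here is a back-of-the-envelope sketch; the dollar figures and the 5:1 reduction ratio are hypothetical numbers for illustration, not Pure Storage's published pricing.

```python
def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    """Effective $/GB once dedupe + compression shrink the stored data.

    A reduction_ratio of 5.0 means 5 logical GB fit in 1 physical GB.
    """
    return raw_cost_per_gb / reduction_ratio


# Hypothetical raw media prices, for illustration only.
flash_raw = 10.0  # $/GB for commodity MLC flash
hdd_raw = 2.0     # $/GB for 15K HDD

# With a combined 5:1 dedupe + compression ratio, flash lands at the
# same effective $/GB as raw spinning disk.
print(effective_cost_per_gb(flash_raw, 5.0))  # -> 2.0
```

The takeaway is that the whole pitch stands or falls on the reduction ratio, which is why dedupe can't be optional in their design.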
Pure Storage is able to deliver over 100,000 IOPS per controller pair. If you want to add more capacity, you can add up to 4 trays of disks. The trays are connected via 6 Gb/s SAS, with everything dual-pathed. If you need to add performance, you have to add controllers. The controllers are connected via 40 Gb/s InfiniBand, and you can connect up to 8 of them. When the controllers are connected, they're clustered, and all ports on all controllers are available for access at any time. Each controller has 24 CPU cores and 48 GB of working DRAM. Interestingly, the NVRAM is stored on each controller in a redundant pair, so the controllers can go away and you would still be able to recover.
Can you turn Dedupe off?
Nope. I was thinking that if you could turn dedupe off, you could maybe increase performance. Some other all-flash arrays offer higher IOPS, but they're also not trying to replace my spinning hard drives. Keep in mind, Pure Storage's goal is to replace my 15K and 10K drives in the datacenter, not the massive SATA drives. I tip my hat to Pure Storage, because dedupe has been the failing of more than one all-SSD array vendor. They might not get the 200,000 IOPS that some of their competitors are tossing down on the RFP papers, but they do have dedupe fully working.
Dedupe and compression are necessary for their architecture to work; they:
• Keep more of the data in DRAM & NVRAM
• Help with the longevity of the drives, since less data is written and moved around on the SSDs
• Reduce the data flowing on the interconnects, which could otherwise become the bottleneck
Impact for VDI
Having everything on SSD makes the architecture design pretty easy. While Pure Storage is not on the VMware HCL, they are close to dotting the i's and crossing the t's. When all is said and done, they will also have support for the vStorage APIs for Array Integration (VAAI) Atomic Test & Set (ATS) primitive. I would still create separate volumes to tier my data to be future-proof, but at least the operational impact will be non-existent if someone else makes a mistake creating a pool.
Having everything on SSD also allows you to set the max on View Composer without worry. Check this article out for more information.
I would think you would have to redo your network design to take full advantage. Streaming your ThinApp packages could now take advantage of the fast I/O on both the repository and the desktops. The 2-3 Gb/s you have on your blade interconnects for LAN access might be the bottleneck. I also see a huge upside when you go to use Project Horizon. Your application repository will definitely not be the bottleneck.
It’s obvious, but VDI was meant to be a green technology. There are lots of power and cooling costs to be saved in the datacenter.
Purity, the operating system behind Pure Storage, is what gives the array its secret sauce. It's worth pointing out that it fixes disk alignment issues. This is particularly interesting since all linked clones are misaligned. Good practice says to refresh or recompose often, but at least this would fix the issue instead of applying a band-aid. I thought the misaligned linked clone issue was fixed with View 5, but I don't see it in the release notes.
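For readers who haven't hit the alignment problem: it comes down to whether a guest partition's starting offset lands on a storage block boundary. When it doesn't, each guest I/O can straddle two backend blocks, doubling the work. A minimal check, assuming a 4 KB backend block size and 512-byte sectors:

```python
def is_aligned(partition_offset_bytes: int, block_size: int = 4096) -> bool:
    """True when the partition start falls on a backend block boundary.

    Misaligned partitions force each guest I/O to straddle two backend
    blocks, which is the penalty an array-side fix papers over.
    """
    return partition_offset_bytes % block_size == 0


SECTOR = 512
# Classic Windows XP-era default: partitions start at sector 63.
print(is_aligned(63 * SECTOR))    # -> False (misaligned)
# Modern default: start at 1 MiB (sector 2048).
print(is_aligned(2048 * SECTOR))  # -> True
```

An array that fixes this internally removes the penalty, but the guest partition table is still technically misaligned, which is why I'd call it a fix for the symptom rather than the band-aid of frequent recomposes.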
The bad news is they still don’t have replication, so you will have to figure out how to get your golden images and user profile information to your DR site. Replication is on the near-term roadmap, but this also leads to my next point. Most people already have lots and lots of traditional storage arrays with replication. If I am a customer with replication on my old arrays, why not buy one of the claimed-faster SSD arrays that offer more speed and do the tiering myself? Customers will have to make a cost/operational-impact trade-off.
It’s also fair to mention that Pure Storage is not GA yet. This young company looks to have some smart people at the helm, and it will be interesting to revisit them in the next couple of months. I will try to follow them on their journey and report back when Pure Storage + VDI hits the news.
I will post their video once the Tech Field Day crew has it ready to go.
Other thoughts around Pure Storage:
Please feel free to comment.
My travel, accommodations, and meals were paid for by the Tech Field Day sponsors. Often there is swag involved as well. Like me, all TFD delegates are independents. We tweet, write, and say whatever we wish at all times, including during the sessions.