Archives for August 2014


VMware EUC & Nutanix Relationship via @theCUBE

Courtney Burry & Sachin Chheda on theCUBE

* Talks about VMware and Nutanix partnership.

* Partnership on the Horizon 6 RA

* Workspace is about applications


Nutanix CEO, Going past Storage on the road to IPO via @theCUBE

* The beginning of the end of the SAN

* Nutanix Bible

* Convergence of Networks

* Docker


EVO RAIL: Status Quo for Nutanix

Some will make a big splash about the launch of EVO RAIL, but the reality is that things remain status quo. While I do work for Nutanix and am admittedly biased, the fact is that Nutanix was formed in 2009 and has been selling since 2011. VSAN, and now EVO RAIL, is a validation of what Nutanix has been doing over the last five years. In this case, a high tide lifts all boats.

Nutanix will continue to partner with VMware across all solutions: VDI, RDS, SRM, server virtualization, big-data applications like Splunk, and private cloud. Yes, we will compete with VSAN, but I think the products are worlds apart, mostly due to architectural decisions. Nutanix helps sell vSphere and enables all the solutions VMware provides today. Nutanix has models that serve everything from Tier 1 SQL/Oracle down to the remote branch office where you might want only a handful of VMs. Today EVO RAIL is positioned to serve only Tier 2, test/dev, and VDI; the presentation I sat in on as a vExpert confirmed that Tier 1 is not a current use case. I feel this is a mistake for EVO RAIL. By not addressing Tier 1, in which I would include VDI, you end up creating silos in the data center, which is everything the SDDC should be trying to eliminate.

Nutanix Use Cases

Some of the available Nutanix use cases


Nutanix is still the king of scale, but I am interested to hear more about EVO RACK, which is still in tech preview. EVO RAIL 1.0 will only scale to 16 nodes (servers), or 4 appliances. Nutanix doesn't really have a limit but tends to follow hypervisor limits; most Nutanix RAs are around 48 nodes from a failure-domain perspective.

Some important differences between Nutanix and EVO RAIL:

* Nutanix starts at 3 nodes, EVO RAIL starts at 4 nodes.

* Nutanix uses hot-optimized tiering based on data analysis and caching from RAM, which can be deduped; EVO RAIL uses caching from SSD (70% of all SSD is used for cache).

* You can buy one Nutanix node at a time; EVO RAIL is only sold four nodes at a time. I think this has to do with trying to keep a single SKU, but SMBs in the market will find it hard to make that jump. On the enterprise side you need to be able to mix node types when your compute and capacity requirements don't match up.

* Nutanix can scale with different node types offering different levels of storage and compute; EVO RAIL today is a hard-locked configuration. You are unable to change even the amount of RAM from the OEM vendor. CPUs are only 6-core, which leads to needing more nodes, and more nodes mean more licenses.

* EVO RAIL is only spec'd for 250 desktops or 100 general-server VMs per appliance. Nutanix can deliver 440 desktops per 2U appliance with a medium Login VSI workload, and 200 general-server VMs when enabling features like inline dedupe on the 3460 series. In short, there are no hard limits as long as you don't have CPU or RAM contention.


* Nutanix has one storage controller (VM) per host that takes care of VM Caliber snapshots, inline compression, inline dedupe, MapReduce dedupe, MapReduce compression, analytics, cluster health, replication, and hardware support. EVO RAIL will have the EVO management software (a web server), a vCenter VM, a Log Insight VM, a VM from the OEM vendor for hardware support, and a vSphere Replication VM if needed.

* Nutanix is able to separate compute and storage clusters; EVO RAIL is one large compute cluster with only one storage container. With separation you can have smaller compute clusters and still enjoy one giant volume. This is really a matter of design flexibility.

* Nutanix can run with any license of vSphere; the EVO RAIL license is Enterprise Plus. I am not sure how that will affect pricing. I suspect the OEMs will be made to keep it at normal prices, because discounting would affect the rest of their business.

* Nutanix can manage multiple large and small clusters with Prism Central. EVO RAIL has no multi-cluster management.

* With Nutanix you get to use all of the available hard drives for all of the data out of the box. With EVO RAIL you have to increase the stripe width to take advantage of all the available disks when data is moved from cache to hard disk.

* Nutanix offers both analysis and built-in troubleshooting tools in the virtual storage controller. You don't have to add another VM to provide these services.
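As a rough sketch of what the per-appliance densities quoted above mean at scale, here is some simple sizing arithmetic. The densities are the vendor-quoted figures from this post (250 desktops per EVO RAIL appliance, 440 per Nutanix 3460), used as assumptions rather than independent benchmarks:

```python
import math

# Hypothetical sizing helper: how many appliances a desktop count requires,
# given a per-appliance density (desktops per appliance).
def appliances_needed(desktops, desktops_per_appliance):
    return math.ceil(desktops / desktops_per_appliance)

# Sizing 1,000 desktops at the densities quoted above:
evo_rail = appliances_needed(1000, 250)  # EVO RAIL spec: 250 desktops/appliance
nutanix = appliances_needed(1000, 440)   # Nutanix 3460: 440 desktops/appliance
print(evo_rail, nutanix)  # 4 3
```

The gap widens with scale, which is why density per appliance matters more than the sticker price of a single unit.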

Chad Sakac mentioned in one of his articles that if "my application stack has these rich protection/replication and availability SLAs – because it's how it was built before we started thinking about CI models," you might not pick EVO RAIL and go to a Vblock instead. I disagree on the CI part. Nutanix has the highest levels of data protection today: synchronous writes, bit-rot prevention, checksums on all data, continuous scrubbing during quiet periods, and Nutanix-based snapshots for backup and DR.

It's a shame that EVO RAIL went with the form factor they did. VSAN can lose up to 3 nodes at any one time, which is good, but in the current design it will need 5 copies of data to ensure that a block going down will not cause data loss when you scale the solution. I think they should have stayed with a one-node, 2U solution. Nutanix has a feature called Availability Domains that allows a whole block to go down while the cluster continues to function. This feature doesn't require any additional storage capacity, just the minimum two copies of data.

More information on Availability Domains can be found in the Nutanix Bible.


* Nutanix can scale past 32 nodes; VSAN is supported for 32 nodes, yet EVO RAIL is only supported for 16 nodes. I don't know why they made this decision.

* Prism Element has no limit on the number of objects it can manage. EVO RAIL is still limited by the number of components; I believe the limited hardware specs are being used to cap the number of components so this does not become an issue in the field.

* With Nutanix, when you add a node you enjoy the performance benefits right away. With EVO RAIL you have to wait until new VMs are created to make use of the new flash and hard drives (or perform a maintenance operation). A lot of this comes down to how Nutanix controls the placement of data; data locality helps here.

I think the launch of EVO RAIL shows how important hardware still is when achieving five 9s of availability. Look out, dual-headed storage architectures: your lunch just got a little smaller again.


VMware Horizon 6.0 with View – 445 desktops Created in 54min on Nutanix

Check out Booth #1535 at VMworld 2014 – Below is a video of using VCAI with VMware Horizon with View. 445 desktops created in 54 minutes using a 3460 in 2U!


Scale-Out Storage In the hypervisor kernel or in a VM?

The paper isn't new, but it provides some thoughts on architecture as people roam the trade-show floor at VMworld this year. It highlights architectural considerations in implementing a converged, scale-out storage fabric that runs across a cluster of nodes, with a focus on high availability and resiliency for virtualizing business-critical applications. The paper covers running storage services both embedded in the hypervisor kernel and as a virtual machine in user space.

Read the Tech Note

Nutanix also holds a patent on delivering distributed storage services as a VM.


Pay-As-You-Grow with Nutanix and VMware Horizon 6

Dwayne Lessner
Technical Marketing Engineer, Nutanix

As an end user of VMware End User Computing (EUC) products since 2008, when the product was called Virtual Desktop Manager 2.1, it's been a great journey to see how VMware Horizon 6 with View has morphed into a full-fledged application delivery platform. The latest version of Horizon offers the ability to deliver virtual desktops and hosted applications with Microsoft Windows Remote Desktop Services, arming businesses with the right deployment method based on use case and cost.

Today customers can take VMware's EUC strategy and deploy with the same speed and approach as public cloud providers, with security and control over SLAs in the comfort of their own datacenters. Nutanix, with the support of VMware, is proud to release our latest reference architecture on VMware Horizon 6 with View, showcasing the benefits of pay-as-you-grow infrastructure.

Some of the highlights include:

Nutanix and VMware EUC customers like Serco and Langs Building Supplies enjoy flexible, fast, and simple deployments without making concessions on reliability or performance. Nutanix customers can embrace the blend of convergence and web-scale technologies to focus on deploying applications on their own terms.

Read the full reference architecture for Horizon 6 with View on Nutanix, and stop by booth #1535 at VMworld to learn about the new features Nutanix and VMware EUC are bringing to the datacenter.

This has been cross-posted.


VMware Horizon View 6 + Nutanix: Full Clones? No Problem!

View Composer was originally designed to save capacity for Horizon View and was later used to fix the IOPS issues of VDI. Nutanix can quickly provision machines without the need for View Composer and provide performance through its global flash pool and smart metadata.

Any Horizon View admin has probably had to deal with a View Composer issue at some point (the same is probably true for MCS). Maybe the database gets out of sync with vCenter, the View Composer credentials get unknowingly changed, or someone deletes or moves the computer account of your golden image. Using Nutanix VMCaliber clones, 400 full-clone desktops can be created in 49 minutes: only 4 minutes more than using View Composer with VCAI.

The machines clone in 8–12 seconds per desktop; the difference in total time is accounted for by the image running Sysprep rather than the QuickPrep that View Composer provides.
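A quick back-of-the-envelope check on those numbers (plain arithmetic, nothing Nutanix-specific):

```python
# 400 full clones in 49 minutes works out to an average end-to-end rate of
# 7.35 seconds per desktop, consistent with the 8-12 s per-clone figure
# once overlap between concurrent clone operations is accounted for.
total_seconds = 49 * 60
desktops = 400
seconds_per_desktop = total_seconds / desktops
print(seconds_per_desktop)  # 7.35
```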

Test Results – 8-node cluster – 2 * 3460

Sysprep causes a ~20% increase in IOPS versus QuickPrep. The reads are mostly served from cache, so it's not a big deal. Also, most people who use full clones use them as persistent desktops and combine them with inline dedupe, so they can leverage existing application deployment tools instead of maintaining separate processes for physical and virtual desktops.

Keep it Simple Stupid!



VMware Horizon View 6 – Impact of VCAI

View Composer Array Integration with Native NFS Snapshot Technology (VAAI) started off as a tech preview in View 5.1 but is now fully supported. Below is the impact of not having VCAI support if you're using View Composer in your environment to deploy desktops.

Nutanix supports VCAI

Everything on the left-hand side of the line is a result of not having VCAI support. Your golden image has to be copied over to recreate the new replica image as the base for the new desktops. Over 11,000 IOPS are used in this example, and over 700 MBps of bandwidth is consumed. Then multiply this by how many golden images your team supports, plus the extra time it takes to copy the image over. There is also an impact on users who have to work during the maintenance period.

If you're using VCAI, your deployment journey begins to the right of the line. Nutanix fully supports VCAI and can also deploy full clones without View Composer.

45 minutes to deploy 400 desktops with VCAI
Time is saved by not having to do the full copy, and VCAI also provides better caching of reads on Nutanix. Without VCAI it would have been north of 50 minutes, and the performance tier would have been used instead of being kept free to deliver a great user experience.

Want to see this in action at VMworld? Stop by booth #1535 for a demo.


Nutanix + VMware Horizon 6 with View: 888 Desktops from Off to On in 6 min (8 nodes)

The clock started at 2:25:34 in vCenter, and the watch stopped when all the agents reported back to the Horizon View Connection Server at 2:31:32. Boot storms can be avoided or planned for, but bad things happen; when you face maintenance windows, or shift changes like in health care, this helps.
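The elapsed time comes straight from those two timestamps, which a trivial check confirms rounds to the 6 minutes in the title:

```python
from datetime import datetime

# Start: vCenter power-on at 2:25:34; stop: last View agent check-in at 2:31:32.
fmt = "%H:%M:%S"
start = datetime.strptime("2:25:34", fmt)
end = datetime.strptime("2:31:32", fmt)
elapsed = end - start
print(elapsed)  # 0:05:58
```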

Cluster IOPS: over 50,000 IOPS

Over 50,000 IOPS to boot the desktops. Most of the IOPS coming from local cache.


CPU: Brief peak at 100% during the boot storm for the entire environment

Cluster CPU did peak with these settings, but that's to be expected; regardless of storage, PCs need CPU. Blue/green line is CPU, pink is memory.


Storage latency: Max 6ms, average 3ms during storm

Boot Storm for 888 desktops on 2 * 3460. 4U of space plus an Arista switch


The settings used for View Composer. I would only recommend these settings for 8+ nodes.
