VMware Horizon View 6 + Nutanix: Full Clones? No Problem!

    View Composer was originally designed to save capacity for Horizon View and was later used to fix the IOPS issues of VDI. Nutanix can quickly provision machines without the need for View Composer and provide performance with its global flash pool and smart metadata.

    Any Horizon View admin has probably had to deal with a View Composer issue at some point (the same is probably true for MCS). Maybe the database gets out of sync with vCenter, the View Composer credentials get unknowingly changed, or someone deletes\moves the computer account of your golden image. Using Nutanix VMCaliber Clones, 400 full-clone desktops can be created in 49 minutes! That's only 4 more minutes than using View Composer with VCAI.

    The machines clone in 8–12 seconds per desktop; the difference in time comes from the image running Sysprep versus the QuickPrep that View Composer provides.

    Test Results – 8 node cluster – 2 * 3460

    Sysprep causes a ~20% increase in IOPS versus QuickPrep. The reads will mostly be served from cache, so it's not a big deal. Also, most people who use full clones use them as persistent desktops and combine them with inline dedupe, so they can leverage existing application deployment tools instead of having separate ways to manage physical and virtual desktops.

    Keep it Simple Stupid!



    VMware Horizon View 6 – Impact of VCAI

    View Composer Array Integration (VCAI), built on native NFS snapshot technology (VAAI), started off as a tech preview in View 5.1 but is now fully supported. Below highlights the impact of not having VCAI support if you're using View Composer in your environment to deploy desktops.

    Nutanix supports VCAI

    Everything on the left-hand side of the line is the result of not having VCAI support. Your golden image has to be copied over to recreate the new replica image as the base for the new desktops. Over 11,000 IOPS are used in this example and over 700 MBps of bandwidth is consumed. Then multiply this by how many golden images your team supports, plus the extra time it takes to copy the image over. There is also an impact on users who have to work during the maintenance period.

    If you're using VCAI, your deployment journey begins to the right of the line. Nutanix fully supports VCAI and can also deploy full clones without View Composer.

    45 minutes to deploy 400 desktops with VCAI
    Time is saved by not having to do the full copy, and VCAI also provides better caching of reads on Nutanix. Without VCAI it would have been north of 50 minutes, and the performance tier would have been used instead of being kept free to deliver a great user experience.

    Want to see this in action at VMworld? Stop by booth 1535 for a demo.


    Nutanix + VMware Horizon 6 with View: 888 Desktops Off to On in 6 Min (8 nodes)

    The clock started at 2:25:34 in vCenter, and the watch stopped when all the agents reported back into the Horizon View Connection broker at 2:31:32. Boot storms can be avoided\planned for, but bad things happen, maintenance windows arrive, and if you're dealing with shift changes like in health care, a fast boot helps.

    Cluster IOPS: Over 50,000 IOPS

    Over 50,000 IOPS to boot the desktops. Most of the IOPS coming from local cache.


    CPU: Brief peak at 100% during the boot storm for the entire environment

    Cluster CPU did peak with these settings, but that's to be expected. Regardless of storage, PCs need CPU. The blue\green line is CPU, the pink line is memory.


    Storage latency: Max 6ms, average 3ms during storm

    Boot Storm for 888 desktops on 2 * 3460. 4U of space plus an Arista switch


    The settings used for View Composer. I would only recommend these settings for 8+ nodes.



    June IE Patch Blows Up IE for Optimized Golden Images

    After updating my golden image I was treated to IE not being able to launch.

    The registry key in question comes from the optimization script from VMware for Windows 7/8.

    reg ADD "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters" /v EnableSuperfetch /t REG_DWORD /d 0x0 /f

    I was able to find the fix on TechNet:

    "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management", value MoveImages

    If you set the key to 1 instead of 0 and then reboot the machine, you'll be all fixed up.
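    For reference, the fix can be applied with the same reg syntax the optimization script uses. A minimal sketch (run from an elevated prompt on the golden image, then reboot):

```shell
:: Set MoveImages to 1 (the TechNet fix), reversing the 0 that broke IE.
:: Reboot the machine afterward for the change to take effect.
reg ADD "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v MoveImages /t REG_DWORD /d 0x1 /f
```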

    Of course, I found this after uninstalling\installing IE two times first.


    VMware Horizon 6 with View – Hosted Shared Virtual Desktops with Nutanix

    With Horizon 6 adding support for RDS, application pools are getting a lot of buzz. With application pools, you can deliver a single application to many users; the application runs on a farm of RDS hosts. However, you can also use a farm of hosts to deliver hosted shared virtual desktops (HSVD). I suspect this will shift a lot of workload over, at least for the task-worker use case.

    To get HSVD set up with Horizon 6 you have to create a farm and add your RDS hosts to it.

    Once the farm is created you can go to desktop pools and pick the appropriate option.

    Nutanix Value

    1) Quick Clones\VMCaliber Clones – Horizon 6 does not support View Composer for RDS, so there is potential for lots of storage to be gobbled up. VMCaliber Clones have no negative impact on performance, allow for fast deployment, and are available in every Nutanix software edition.

    Check out the space savings from quick clones: 84 GB down to 12 KB.


    2) Data Locality & Fair Share – Fair Share in Windows Server 2012 R2 gives a predictable user experience, so one user does not negatively impact the performance of another user's session. Combined with data locality, IO performance will stay consistent as the cluster scales, and users can't steal or bleed performance from users on other nodes.

    3) Tunable Redundancy Factor – Starting with the Pro Software Edition and up, you can give VMs greater resiliency by creating additional copies of data. Since VMCaliber Clones (per-VM snapshots\clones) reduce the footprint, the added capacity cost of a higher replication factor is mitigated. Now you can lose up to 2 nodes on a 5-node cluster, as an example, without having to buy additional HDDs for capacity. Additionally, block awareness can let you lose an entire block (4 nodes/servers) at the same time without downtime and without requiring any extra space! This all adds up to more capacity for other server workloads in your environment.
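    As a back-of-the-envelope sketch of the capacity trade-off above (the 80 TB raw figure is an assumption for illustration, not from this post), usable capacity is roughly raw capacity divided by the replication factor:

```shell
#!/bin/sh
# Hypothetical example: usable capacity shrinks as the replication factor (RF) grows.
# raw_tb is an assumed raw cluster capacity; clone/dedupe savings offset the RF3 cost.
raw_tb=80
for rf in 2 3; do
  echo "RF${rf} usable: $((raw_tb / rf)) TB"
done
```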

    Hope to see you at VMworld 2014 and talk more on this topic.



    Nutanix One-Click Upgrade: Easy As Picture Pages


    Dell & Nutanix – Building A Better Foundation for the SDDC

    Lots has been written about the business side of the deal and some of the long-term strategy on why an OEM deal with Dell is so big for Nutanix. I want to focus my thoughts on the technical side. For me, the partnership with Dell is a signal that the "start-up" phase is over when it comes to shipping enterprise software. The Dell sales force is massive and would have had the potential to sink Engineering efforts on support and lower the rate of new features if it weren't for the effort put into the Nutanix software deployment tool called Foundation. Foundation is the automation tool that deploys and configures the Nutanix software along with your hypervisor of choice (ESXi, Hyper-V, KVM). Foundation gives a common starting point for all deployments: performance, capacity, support, consistency and cluster initialization. Within 45 to 60 minutes, you can have a platform to deploy virtual workloads without worrying about missed steps or stealing resources from other projects. This story is the same whether it's a Dell box or a Nutanix model. Without this tool in place, the Dell agreement could never have happened.

    In the Supermicro-only world, Engineering could get away with hardcoding what type of SSD drive or RAID controller it was looking for. With Dell coming onboard, this really wasn't going to scale. Foundation was born out of the need to support multiple hypervisors, and the Dell OEM agreement only increased that need.

    Foundation and Nutanix have moved to being data-driven instead of hard-coded. Foundation has a built-in HCL to qualify all components when the software is being laid down: hypervisor version, NICs, RAID controllers (Nutanix doesn't use RAID), motherboards. Components are checked against a JSON file and life goes on. It doesn't matter if it's IPMI from Supermicro or iDRAC from Dell. All alerts can be fired through SNMP or seen visually through the Prism UI. The customer is the one who wins here.

    Engineering had been hard at work with Dell for months before the announcement. The beauty of convergence is that you don't have to worry about building the solution and finding the breaking points: we do that for you. It's also where software can provide a lot of value, when you understand the hardware you are working with. One HBA that was slated to be used in some of the newer models had an issue of not turning on the LEDs when a drive went bad. I am sure it worked fine when RAID was involved, but since we don't use RAID, it probably was never engineered for that case. Nutanix Engineering was able to write the software to provide the serviceability needed. Imagine all the other nasty things that can happen with that one component alone. The adaptability of NOS gives customers peace of mind as they scale out their environments.

    If you believe in the Software-Defined Data Center (SDDC), choice of hypervisor, hardware vendor and networking should allow for VM mobility. While the SDDC might just be getting past 1.0, Nutanix is not sitting still to see what 2.0 will bring. The Dell OEM agreement is a great slingshot towards the future.


    Why #Webscale Reason 5: Brain Drain, Training Budgets & Turnkey Solutions

    Companies like Google, Amazon and Facebook had to invent (code) new technologies and approaches to doing IT because no alternative to traditional IT existed. Many of the technologies surrounding this can be complicated and do take a highly trained team to forge ahead. Web-scale is not an all-or-nothing proposition. Today we've reached a point where the principles and patterns are well understood, and turnkey enterprise-class solutions are emerging to bring web-scale capabilities to the enterprise. These don't require PhDs to operate. Even some of the industry storage giants like EMC are trying to deploy similar approaches to provide true scale-out technology. Nutanix has been building upon these technologies since 2009 so people can do more with less. An IT admin has the option of never leaving the Prism UI if they want.

    Like it or not, enterprise IT is fighting with the cloud for relevance. Enterprise IT is not that way by choice; the politics and finger-pointing are what traditional infrastructure constraints and complexity have created. Budget constraints are all the more reason why you need an alternative. If you have the opportunity to learn one skill that saves countless hours down the road, is that not a fair price to pay? I remember an old boss questioning my VMware 3.0 training over the same things. Do I need it? Is it valuable? Many of the skills that were considered niche 5 years ago are now mainstream. Companies like Nutanix are eliminating the need for specialized talent by delivering turnkey solutions that are web-scale inside but provide enterprise capabilities, offering the best of both worlds.

    The reason VMware SRM was invented was so people could get out of the weeds of scripting and engineering their DR plans. When people changed jobs or left the company, you wouldn't have to worry about the next person stepping in to fill their shoes and figuring out the failover process if a disaster were to occur.

    With any new technology or paradigm shift there needs to be a way to bridge the gap between the two worlds. The difference between public and private cloud in this case is learning a UI and hiding the complexity. Virtualization is a key aspect of Nutanix, so a lot of skills will carry over between the old and the new land of the datacenter.


    Why #Webscale Reason #4: Machine Data & Analytics #Nutanix #Linkedin

    When you open up your infrastructure to APIs and have a platform to automate all aspects, it allows for a common management and analytics platform. Silos of infrastructure put additional strain not only on storage performance with the IO blender effect, but also on managing the wealth of data that is generated. Google's ability to collect and analyze machine data has changed the game for them. With different hardware, different data centers and different use cases to contend with, it's all about managing the whole story and seeing problems before they end up on your CIO's dashboard. This can really only be done with a shared-nothing architecture.

    Look at how LinkedIn is doing it. There are similar aspects to the Nutanix design.

    Want to learn more? Great live info is coming here.


    Why #Webscale Reason #3: It’s about the people – #Twitter #APIgee #DataStax #Nutanix

    It's not all about wing dings and nuts & bolts. It's easy to get lost in the weeds of technology and forget the greater purpose of why an IT department exists. When technology religion starts to dictate what is right for the business, it can easily turn into a dead-end street. People and process are the hardest things inside of tech, and that is where web-scale plays a part. Web-scale is about launching first and optimizing later. Focusing on what you're good at and getting to the last 10% can be an iterative process. It's not about speeds and feeds; it's about getting your teams to focus on the business and work together. It's breaking down traditional silos and helping move the needle. I believe the general sysadmin will have a long life ahead of them versus people who are totally focused on one area.

    At Nutanix we have no religion on hardware. Today we OEM through Supermicro; tomorrow we could switch if the economics, performance and form factor made sense.

    Launch-first has allowed Nutanix to get to MapReduce Dedupe (post-process) in probably one of the quickest fashions. It started with inline dedupe for performance, which was put into production and built upon work from our Medusa/Cassandra team. Then MapReduce Dedupe came, focusing on OS and application data. Over time more algorithms will be added to MapReduce Dedupe, which will potentially lead to more features.

    From a customer perspective, launch-first gives you more options to make a better decision. This is another reason why hybrid cloud will succeed.

    “If all you have is a hammer, everything looks like a nail”

    Catch a live tech panel on Wednesday June 25th, 2014 – 10:00AM–10:45AM PDT

    Designing and Building Web-scale Systems

    Panel line-up:

    Dmitriy Ryaboy (Engineering lead at Twitter)
    Karthik Ranganathan (Engineer at Nutanix)
    Anant Jhingran (CTO of APIgee, IBM Fellow)
    Darshan Rawal (Director of Product Management, DataStax)