Nutanix Acropolis File Services – Two Networks Required

When configuring Acropolis File Services you may be prompted with the following message:

“File server creation requires two unique networks to be configured beforehand.”

The reason is that you need two managed networks for AFS. I’ve seen this come up a lot lately, so I thought I would explain why. While it may change over time, this is the current design.


The above diagram shows one file server VM running on a node, but you can put multiple file server VMs on a node for multitenancy.

The file server VM has two network interfaces. The first interface has a static address used by the local file server VM service to talk to the Minerva CVM service running on the Controller VM. The Minerva CVM service uses this information to manage deployment and failover; it also allows control over one-click upgrades and maintenance. Having local awareness from the CVM enables the file server VM to determine whether a storage fault has occurred and, if so, whether action should be taken to rectify it. The local address also lets the file server VM claim vDisks for failover and failback. The file server VM service sends a heartbeat to its local Minerva CVM service each second, indicating its state and that it’s alive.
The second network interface on the file server VM, also referred to as the public interface, is used to service client SMB requests. Based on the resource being accessed, the file server VM determines whether to service the request locally or to use DFS to refer the request to the file server VM that owns the resource. This second network can be dynamically reassigned to other file server VMs for high availability.

If you need help setting up the two managed networks, there is a KB article on portal.nutanix.com -> KB3406


Backing up AFS with Commvault

This is by no means a best practice guide for AFS and Commvault, but I wanted to make sure that Commvault could be used to back up Acropolis File Services (AFS). If you want more details on AFS, I suggest reading this great post on the Nutanix website.

Once I applied the file server license to CommServe I was off to the races. I had 400 users spread out across 3 file server VMs making up the file server called eucfs.tenanta.com. The file server had two shares, but I was focused on backing up the user share.



I found performance could be increased by adding more readers for the backup job. My media agent was configured with 8 vCPUs, and it seemed to be the bottleneck. If I had given the media agent more CPU, I am sure I would have had an even faster backup time.


I was able to get almost 600 GB/hour, which I am told is a good number for file backup. It looks like there is lots of room to improve, though. The end goal will be to try to back up a million files and see what happens over the course of time.


Like all good backup stories, it’s all about the restores, and drilling down into the backed-up data works really nicely.



Just In Time Desktops (Instant Clones) on Nutanix

JIT desktops are supported on Nutanix. One current limitation of JIT is that it doesn’t support VAAI for NFS hardware clones. The great part for Nutanix customers is that where VAAI clones stop, Shadow Clones kick into effect! So if you want to keep a lower amount of RAM configured for the View Storage Accelerator, you’re perfectly OK in doing that.

The Nutanix Distributed Filesystem has a feature called ‘Shadow Clones’ which allows for distributed caching of particular vDisks or VM data in a ‘multi-reader’ scenario. A great example of this is a VDI deployment where many ‘linked clones’ forward read requests to a central master or ‘Base VM’. In the case of VMware View this is called the replica disk and is read by all linked clones. This will also work in any other multi-reader scenario (e.g. deployment servers, repositories, App Volumes, etc.)

You can read more about Shadow Clones in this Tech Note -> HERE

An Introduction to Instant Clones -> HERE


    Chad Sakac talks about EMC selling Nutanix with Dell Technologies

    What will happen with Dell XC when EMC and Dell come together? Chad Sakac talks about it at the 18:40 mark from the ThinkAhead IT conference.

    From NextConf 2016
    Nutanix and Dell OEM relationship: Dell’s Alan Atkinson spoke to attendees about extending the OEM relationship and continuing to help our joint customers (including Williams) on their journeys to Enterprise Cloud with confidence.


    Commvault Best Practices on Nutanix

    I first remember seeing Commvault in 2007 in the pages of Network World and thought it looked pretty interesting then. At the time I was a CA ARCserve junkie and prayed every day I didn’t have to restore anything. Almost 10 years later, tape is still around, virtualization has spawned countless backup vendors, and the cloud now makes an easy backup target. Today Commvault is still relevant and plays in all of the aforementioned spaces, and like most tech companies we have our own overlap with them to some degree. For me, Commvault just has so many options that it’s almost a problem of what to use where and when.

    The newly released Best Practice Guide with Commvault talks about some of the many options that should be used with Nutanix. If I were new to Nutanix and then read the guide, the big things that would stand out are the use of a proxy on every host and some of the caveats around IntelliSnap.

    Proxy On Every Host

    What weighs more: a pound of feathers or a pound of bricks? The point here is that you need proxy capacity regardless, and proxies are sized on how much data you will be backing up. So instead of having one giant proxy, you now have smaller proxies distributed across the cluster. Smaller proxies can read from the local hot SSD tier and limit network traffic, so they help to limit bottlenecks in your infrastructure.

    IntelliSnap is probably one of the most talked about Commvault features. IntelliSnap allows you to create a point-in-time application-consistent snapshot of backup data on the DSF. The backup administrator doesn’t need to log on to Prism to provide this functionality. A Nutanix-based snapshot is created on the storage array as soon as the VMware snapshot is completed; the system then immediately removes the VMware snapshot. This approach minimizes the size of the redo log and shortens the reconciliation process to reduce the impact on the virtual machine being backed up and minimize the storage requirement for the temporary file. It also allows near-instantaneous snapshot mounts for data access.

    With IntelliSnap it’s important to realize that it was invented at a time when LUNs ruled the storage workload. IntelliSnap in some sense treats the giant Nutanix volumes/containers the hypervisor sees as a giant LUN. Behind the scenes, when IntelliSnap is used it snaps the whole container, regardless of whether the VMs in it are being backed up or not. So you should do a little planning when using IntelliSnap. This is OK, since IntelliSnap should be used for highly transactional VMs and not every VM in the data center. I’ll just point out that streaming backups with CBT are still a great choice.

    With that being said, you can check out the full guide at the Nutanix website: Commvault Best Practices


    Quickly Pin Your Virtual Hard Drive To Flash #vExpert #NTC

    If you need to ensure performance with Flash Mode here is a quick way to get your job done.

    Find the disk UUID
    ncli virtual-disk ls | grep <disk-name> -B 3 -A 6


    ncli virtual-disk ls | grep m1_8 -B 3 -A 6

    Virtual Disk Id : 00052faf-34c2-58fc-64dd-0cc47a673b8c::313a49:6000C29b-93c9-bfe1-58d9-e718993e5a06
    Virtual Disk Uuid : 1dc11a7f-63ac-422a-ac27-442d5fcfc91a
    Virtual Disk Path : /hdfs/cdh-m1/cdh-m1_8.vmdk
    Attached VM Name : cdh-m1
    Cluster Uuid : 00052faf-34c2-58fc-64dd-0cc47a673b8c
    Virtual Disk Capacity : 268435456000
    Pinning Enabled : False

    Pin 25 GB of the vDisk to flash
    ncli virtual-disk update-pinning id=00052faf-34c2-58fc-64dd-0cc47a673b8c::313a49:6000C29b-93c9-bfe1-58d9-e718993e5a06 pinned-space=25 tier-name=SSD-SATA

    Pinned Space is in GB.
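    The two steps above can be strung together so you don’t have to copy the long vDisk Id by hand. Below is a minimal sketch that runs the parsing against the sample ncli output shown above; the commented-out lines show how it might look against a live cluster (the <disk-name> placeholder is yours to fill in).

```shell
# Parse the "Virtual Disk Id" field out of ncli-style output so it can be fed
# straight into "ncli virtual-disk update-pinning". Sample output from above:
sample='Virtual Disk Id : 00052faf-34c2-58fc-64dd-0cc47a673b8c::313a49:6000C29b-93c9-bfe1-58d9-e718993e5a06
Virtual Disk Uuid : 1dc11a7f-63ac-422a-ac27-442d5fcfc91a
Attached VM Name : cdh-m1'

# Split on " : " and print the value when the key is "Virtual Disk Id".
disk_id=$(printf '%s\n' "$sample" | awk -F' : ' '$1 ~ /Virtual Disk Id/ {print $2}')
echo "$disk_id"

# Against a live cluster you would do something like:
#   disk_id=$(ncli virtual-disk ls | grep <disk-name> -B 3 -A 6 | awk -F' : ' '$1 ~ /Virtual Disk Id/ {print $2}')
#   ncli virtual-disk update-pinning id="$disk_id" pinned-space=25 tier-name=SSD-SATA
```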

    In this case I was pinning Hadoop NameNode directories to flash because I wanted to include their physical node in the cluster to help with replication traffic.


    App Volumes 2.10 Best Practices

    I know App Volumes 3.0 is around the corner but I had to track this information down for 2.10 for a customer.

    App Volume Manager Best Practices
    2 App Volume Managers minimum; 3 for resiliency at 2,000 users
    Load Balancer in Production
    Cluster SQL Server

    AppStack Best Practices
    1 AppStack per 1000 attachments
    Up to 15 AppStack volumes per VM
    2,000 users per Manager
    Timeout is 3 minutes each per writable volume, and then per AppStack


    Commvault IntelliSnap & Metro Availability with Nutanix

    I was asked if Commvault IntelliSnap works with Metro Availability on Nutanix, and I wasn’t 100% certain due to the snapshots that the Metro production domain would have to take. So after giving it a quick test, it turns out it works just fine.

    Setting your retention policy


    I would just take into account that if you have separate jobs running, each one will take a snapshot of the container, and then your retention policy will come into effect for the production domain.

    Below is a quick video of the process in action.


    Cluster Health & One-Click Help Win an Award

    Nutanix received the Omega NorthFace Scoreboard Award for the third consecutive year. This industry-leading award demonstrates Nutanix’s on-going commitment to building sustainable, long-term customer loyalty. According to Omega Group, Nutanix’s Net Promoter Score improved to 92 from last year’s score of 88. The gold standard in customer experience management, NPS measures the willingness of customers to recommend a company. NPS scores can range from -100 to 100.

    OK, but what does that have to do with Cluster Health and One-Click Upgrades? Simply put, when you call into Nutanix support you get someone right away most times, regardless of your SLAs. Nutanix support is not stuck doing mundane upgrades and trivial support. Cluster Health and One-Click free up customers’ time, but also the support staff’s, so they can work on real problems. This also allows Nutanix to pay and retain the best support staff in the industry. It’s not odd to get a CCIE storage or networking support member on the phone.

    Unsolicited Customer Feedback

    With the release of 4.6, Nutanix can provide one-click upgrades for:

    Our Acropolis software
    Disk firmware

    With Cluster Health in 4.6 there are over 200 health checks that automatically run in combination with Nutanix Cluster Check! There were so many checks I had to copy them into a spreadsheet to count them all. There is also the added benefit of knowing the hardware, so we can get very granular with the checks.

    Cluster Health is warning about space usage.


    Congrats to the Nutanix Support Team for another great win.


    Docker UCP and Cloud-init with Nutanix (Video)

    In Acropolis 4.6 Nutanix added guest customization for the Acropolis Hypervisor.
    Cloud-init + Docker

    In an Acropolis cluster, you can use Cloud-init to customize Linux VMs and the System Preparation (Sysprep) tool to customize Windows VMs. I used Cloud-init to clone a Nutanix VM and then have it automatically join the Docker UCP swarm cluster.

    About Cloud-Init

    Cloud-init is a utility that is used to customize Linux VMs during first-boot initialization. The utility must be pre-installed in the operating system image used to create VMs. Cloud-init runs early in the boot process and configures the operating system on the basis of data that you provide (user data). You can use Cloud-init to automate tasks such as setting a host name and locale, creating users and groups, generating and adding SSH keys so that users can log in, installing packages, copying files, and bootstrapping other configuration management tools such as Chef, Puppet, and Salt. For more information about Cloud-init, see https://cloudinit.readthedocs.org/.

    Customization Process

    You can use Cloud-init or Sysprep both when creating and when cloning VMs in a Nutanix cluster. For unattended provisioning, you can specify a user data file for Cloud-init and an answer file for Sysprep. All Cloud-init user-data formats are supported. For example, you can use the Cloud Config format, which is written in YAML, or you can provide a multi-part archive. To enable Cloud-init or Sysprep to access the script, the Acropolis base software creates a temporary ISO image that includes the script and attaches the ISO image to the VM when you power on the VM.
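    For reference, here is a minimal Cloud Config user-data sketch of the kind you would supply when creating or cloning a VM. Every value in it (hostname, user, key, package) is an illustrative placeholder, not from this post.

```shell
# Write a minimal, illustrative #cloud-config user-data file. All values here
# (hostname, user, SSH key, package name) are placeholders for illustration.
cat > /tmp/user-data <<'EOF'
#cloud-config
hostname: docker-node01
users:
  - name: nutanix
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh_authorized_keys:
      - ssh-rsa AAAA...your-public-key
packages:
  - docker
runcmd:
  - [ sh, -xc, "echo first-boot-done > /var/tmp/cloudinit-marker" ]
EOF
head -n 1 /tmp/user-data   # a Cloud Config file must start with "#cloud-config"
```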

    You can also specify source paths to the files or directories that you want to copy to the VM, and you can specify the target directories for those files. This is particularly useful if you need to copy software that is needed at start time, such as software libraries and device drivers. For Linux VMs, the Acropolis base software can copy files to the VM.

    Docker UCP
    Universal Control Plane comes with the capabilities an enterprise needs for Docker: LDAP/AD integration, the ability to deploy on-premises, high availability, and integrated management of your networks and volumes – the controls that any enterprise IT operations team needs. I think UCP can bring developers and IT operations teams together: Universal Control Plane provides a quick and easy way to build, ship, and run distributed apps from a single Docker framework. Docker UCP can be a great place to land on Nutanix with Prism because they are both scale-out control planes that are easy to use and automate.

    UCP has built-in security, integration with existing LDAP/AD for authentication and role-based access control, and native integration with Docker Trusted Registry. The integration with Docker Trusted Registry allows enterprises to leverage Docker Content Trust (Notary in the open-source world), a built-in security tool for signing images. UCP is the only tool on the market that comes with Docker Content Trust directly out of the box. With these integrations, Universal Control Plane gives enterprise IT security teams the necessary control over their environment and application content. When you combine UCP with Nutanix security, it makes for a compelling story.

    Getting Cloud-init to Automatically Add VMs to UCP
    Outside of some firewall issues, most of my time was spent getting my YAML commands to work since it was brand new to me. I have to thank Abhishek Arora and Steve Poitras for helping me out as well.

    Here are the basic commands for getting the job done.


    - [ sh, -xc, "IP=$(ip addr | grep 'state UP' -A2 | tail -n1 | awk '{print $2}' | cut -f1 -d'/'); docker run --rm -i --name ucp -e UCP_ADMIN_USER=admin -e UCP_ADMIN_PASSWORD=nutanix -v /var/run/docker.sock:/var/run/docker.sock docker/ucp join --url --san $IP --host-address $IP --fingerprint=7E:8F:EC:A2:F3:E5:9D:46:AC:8E:2B:22:8D:81:3A:C7:5B:1C:92:48 --fresh-install" ]

    This grabs the new IP of the VM and then uses it to join the Docker UCP cluster. Note that cloud-init runs each runcmd entry in its own shell, so the IP lookup and the join have to live in a single entry for the $IP variable to carry over.
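    If you want to sanity-check the IP-extraction pipeline before baking it into user data, you can run it against canned "ip addr" output. The interface name and addresses below are sample values, not from my lab.

```shell
# Run the same pipeline used in the runcmd entry against captured "ip addr"
# output. The interface, MAC, and IP below are made-up sample values.
sample='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 50:6b:8d:aa:bb:cc brd ff:ff:ff:ff:ff:ff
    inet 10.21.64.15/24 brd 10.21.64.255 scope global eth0'

# grep finds the interface in state UP plus the next two lines; tail takes the
# "inet" line; awk grabs the CIDR address; cut strips the prefix length.
IP=$(printf '%s\n' "$sample" | grep 'state UP' -A2 | tail -n1 | awk '{print $2}' | cut -f1 -d'/')
echo "$IP"
```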

    Docker UCP and Cloud-init in Action

    Let me know if you have any thoughts or questions.


    Docker Container Best Practices on Nutanix