AHV – Configuring VM to run a nested hypervisor

Nutanix now provides limited support for nested virtualization, specifically nested KVM VMs on an AHV cluster, as of AOS releases bundled with AHV 20170830.58. Enabling nested virtualization disables live migration and high availability for the nested VM, so you must power off nested VMs during maintenance events that require live migration.

To enable nested virtualization for a VM, run the following command on a CVM.

$ acli vm.update "VM NAME" cpu_passthrough=true
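
A minimal sketch of the surrounding workflow ("VM NAME" and the guest-side checks are illustrative assumptions): the flag generally takes effect on the next power-on, so power the VM off first, and once it is back up you can confirm from inside a Linux guest that the virtualization extensions are exposed.

$ acli vm.off "VM NAME"
$ acli vm.update "VM NAME" cpu_passthrough=true
$ acli vm.on "VM NAME"

Inside the guest:

$ grep -Ec '(vmx|svm)' /proc/cpuinfo   # non-zero means Intel VT-x or AMD-V is visible
$ ls /dev/kvm                          # appears once the kvm modules are loaded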


Changing the Timezone on the AHV Hosts

NOTE: AOS 5.5 now sets AHV time to UTC, so use the procedure below only on older releases.

The time zone on an AHV host is set to Pacific Standard Time (PST) by default, specifically the America/Los_Angeles configuration. You can change the time zone on all the AHV hosts in the cluster to your preferred time zone. Changing the time zone is safe on a production AHV host because it does not touch the system clock; it only changes the symbolic link through which the system clock is interpreted as a time zone.

Perform the following procedure to change the time zone on an AHV host:

Log on to a Controller VM with SSH by using your user credentials.
Locate the template file for your time zone by using one of the following commands:
nutanix@cvm$ ssh root@192.168.5.1 ls -F /usr/share/zoneinfo/

nutanix@cvm$ ssh root@192.168.5.1 ls -F /usr/share/zoneinfo/<region>/

Your time zone might be stored at the root level of zoneinfo or within a subdirectory. For example, UTC sits at the root (/usr/share/zoneinfo/UTC), while America/Los_Angeles is inside the America/ subdirectory. (192.168.5.1 is the AHV host's internal address as seen from its local CVM.)

Create a backup of /etc/localtime, and then create a symbolic link to the appropriate template file. For example, if you want to set the time zone to UTC, run the following command:
nutanix@cvm$ allssh 'ssh root@192.168.5.1 "date; \
ls -l /etc/localtime; \
mv /etc/localtime /etc/localtime.org; \
ln -s /usr/share/zoneinfo/UTC /etc/localtime; \
date; ls -l /etc/localtime"'
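
To confirm the change landed on every host, a quick follow-up check (a sketch, reusing the same internal host address) is to print the abbreviated time zone cluster-wide:

nutanix@cvm$ allssh 'ssh root@192.168.5.1 date +%Z'

Every host should report the same zone, UTC in the example above.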


NFS v4 to Enable Acropolis File Services

Acropolis File Services (AFS) is a software-defined, scale-out file storage solution that provides a repository for unstructured data, such as home directories, user profiles, departmental shares, application logs, backups, and archives. Flexible and responsive to workload requirements, AFS is a fully integrated, core component of the Nutanix Enterprise Cloud Platform. At both of our .Next user conferences, in Washington, D.C. and Nice, France, NFS support for AFS was highlighted as a new feature to be added alongside the current SMB support in an upcoming release.

NFS has been around almost as long as I have been breathing air as an eighties baby. Being an open standard, NFS has evolved over the years and now has several versions available. In most cases, the version used is driven by the clients that will be accessing the server. To this end, Nutanix is entering the NFS space with support for version 4 to go along with the current SMB support. NFS v4 is stable and has been through several iterations since the early 2000s. Most recent releases of platforms such as Linux (CentOS, Ubuntu), Solaris, and AIX use NFS v4 as the default client protocol, and its additional attention to security makes it an easy choice.
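
For a sense of what the client side looks like, here is a minimal sketch of mounting an NFS v4 export from a Linux client; the server name afs-server and the export /home are assumptions for illustration:

$ sudo mkdir -p /mnt/home
$ sudo mount -t nfs -o vers=4 afs-server:/home /mnt/home
$ df -h /mnt/home   # confirm the export is mounted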

More at the link below.

Full article posted on the Next Community Site


Demo: ESXi Backup with Nutanix Snapshots with HYCU

In addition to supporting Nutanix clusters that run the native AHV hypervisor, HYCU introduces support for Nutanix environments that use the VMware ESXi hypervisor. By using the native Nutanix storage-layer snapshot technology, VMware snapshot stuns are avoided.


The Wish List for Nutanix-Centric Enterprise Backups?

I was asked today what I would look for in a backup solution that is Nutanix focused. Just quickly spitballing, I came up with the following list:

Backup vendor must use ABS (Acropolis Block Services) to eliminate the need for multiple proxies, or solve this using a similar method.

Backup vendor must support, or show a road map for supporting, AFS changed file tracking; at a minimum, it must support NAS backup.

Backup vendor must support backing up volume groups.

Backup vendor must support backing up Nutanix snapshots that have been replicated from a remote site to a primary DC.

Backup vendor must support Changed Region Tracking for AHV; ESXi support is a plus.

Backup vendor must support synthetic full backups.

Backup vendor must have native support for application X (insert your app here, such as SQL).

Backup vendor must have an open API.

Got any others to add?


Prism Central with Self Service Portal – Cheat Notes

The Prism Self Service feature represents a special view within Prism Central. While Prism Central enables infrastructure management across clusters, Prism Self Service allows end-users to consume that infrastructure in a self-service manner. Prism Self Service uses the resources provided by a single AHV cluster. (ESXi and Hyper-V are not supported platforms for Prism Self Service.)

    Nutanix recommends using the Chrome or Firefox browsers to deploy or install Prism Central (PC). Nutanix support has a KB if IE is the only allowed browser.
    In Prism Central 5.5, users that are part of nested groups cannot log on to the Prism Central web console.
    Always upgrade PC before Prism Element (your clusters).
    If you want longer retention, go with a bigger PC instance; the larger disk size allows more data to be retained.
    Prism Central and its managed clusters are not supported in environments deploying Network Address Translation (NAT).
    It is a best practice to keep the NCC version the same on all managed clusters.
    As of Prism Central 5.5, only User Principal Name (UPN) credentials (for example, user@domain.com) are accepted for logon. The admin user must log on and specify a service account for the directory service in the Authentication Configuration dialog box before authentication for other users can start working.
    Name servers are computers that host a network service that answers queries against a directory service, such as a DNS server. Changes in name server configuration may take up to 5 minutes to take effect, and functions that rely on DNS may not work properly during that time. If Prism Central is running on Hyper-V, you must specify the IP address of the Active Directory domain controller, not its hostname; do not use DNS hostnames or external NTP servers.
    Three primary roles when configuring Prism Self Service

      Prism Central administrator
      Self-service administrator
      Project user

    Prism Central administrator. The Prism Central administrator enables Prism Self Service and creates one or more self-service administrators. Prism Central administrators also create VMs, images, and network configurations that may be consumed by self-service users.

    Self-service administrator. The self-service administrator performs the following tasks:
    Creates a project for each team that needs self-service and adds Active Directory users and groups to the projects.
    Configures roles for project members.
    Publishes VM templates and images to the catalog.
    Monitors resource usage by the various projects, their VMs, and their members, and then adjusts resource quotas as necessary.
    A Prism Central administrator can also perform any of these tasks, but they are normally delegated to a self-service administrator.
    Self-service administrators have full access to all VMs running on the Nutanix cluster, including infrastructure VMs not tied to a project. Self-service administrators can assign infrastructure VMs to project members, add them to the catalog, and delete them even if they do not have administrative access to Prism Central.

Setting Up AD with SSP

    Users with the “User must change password at next logon” attribute enabled will not be able to authenticate to Prism Central. Ensure users with this attribute first log on to a domain workstation and change their password before accessing Prism Central. Also, if SSL is enabled on the Active Directory server, make sure that Nutanix has access to that port (open in the firewall).
    Port 389 (LDAP). Use this port number when the configuration is single domain, single forest, and not using SSL (see the example URL forms after this list).
    Port 636 (LDAPS). Use this port number when the configuration is single domain, single forest, and using SSL. This requires that all Active Directory domain controllers have properly installed SSL certificates.
    Port 3268 (LDAP – GC). Use this port number when the configuration is multiple domain, single forest, and not using SSL.
    Port 3269 (LDAPS – GC). Use this port number when the configuration is multiple domain, single forest, and using SSL.
    Within a project, Allow collaboration: check the box to allow any group member to see the VMs, applications, and other objects created by other members of the group. If this box is not checked, group members can see only the objects they create. The role assigned to a group member determines the permissions that user has on objects created by other group members.
    Role mapping: Prism matches AD group names using case-sensitive checks, so if the group name defined under the role mapping in Prism differs in upper/lower case from how it is defined in AD, Prism will fail to perform the name mapping for the group.

    Also ensure that users add “@domain_name” to their username when logging on to Prism Central.
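
As a concrete illustration of the URL forms referenced in the port list above (the directory host ad.example.com is an assumption; substitute your own):

ldap://ad.example.com:389
ldaps://ad.example.com:636
ldap://ad.example.com:3268
ldaps://ad.example.com:3269

The 3268/3269 variants query the global catalog, which is what makes lookups across a multi-domain forest work.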


Docker Swarm with Nutanix Calm

Review -> What is Nutanix CALM?

Nutanix Calm provides a set of pre-seeded application blueprints that are available to you for consumption.

Docker Swarm is a clustering and scheduling tool for Docker containers. There is a lot of hype around Kubernetes right now, and rightly so, but Swarm is a great tool and still getting better. One of the blueprints available with Calm is Docker Swarm. With Swarm, IT administrators and developers can establish and manage a cluster of Docker nodes as a single virtual system. Swarm mode also exists natively in Docker Engine, the layer between the OS and container images; it integrates the orchestration capabilities of Docker Swarm into Docker Engine itself. For AHV, the blueprint by default creates 3 master VMs, each with 2 cores, 4 GB RAM, a 10 GB root disk, and 3 x 10 GB data disks. For AWS, the blueprint by default creates 3 slave VMs of type t2.medium, each with 3 x 10 GB data disks.

Installed version: Docker 17.09.0-ce


The blueprint exposes the following variables:

DOCKER_VERSION - (Mandatory) Docker version to install; defaults to the version listed above.
INSTANCE_PUBLIC_KEY - (Mandatory) Instance public key (only for AHV).

To launch the blueprint:

Click the Marketplace tab.
Click the Docker Swarm blueprint application.
Click Launch.
The blueprint application launch page is displayed.

Enter a name for the application in the Name of the Application field. For the application blueprint naming conventions, see Launching an Application Blueprint.
Select the Application profile.
If the application profile is Nutanix, do the following:
(Optional) Change the VM name.
(Optional) Change the number of vCPUs and RAM.
Select the NIC from the drop-down menu.
Download the CentOS 7 image from the repository.
Enter the private key.
If the application profile is AWS, do the following:
(Optional) Change the VM name.
Select the instance type.
Select a CentOS 7 image as per the region and availability zone.
Select the VPC and subnet.
Ensure that the security groups allow ICMP traffic so that the master and slave nodes can ping each other.

Select the SSH keys.
Repeat the above steps for the Docker slave services.
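
Once the application launches, a quick way to sanity-check the cluster is to SSH to one of the master VMs and ask Swarm for its view of the nodes; this is a sketch, and the service name web is only an illustration:

$ docker node ls                                    # all masters and slaves should show Ready
$ docker service create --name web --replicas 3 --publish 80:80 nginx
$ docker service ps web                             # tasks should be spread across the nodes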


    Nutanix Calm Blueprints Overview

    Nutanix Calm Overview

    A blueprint is the framework for every application that you model by using Nutanix Calm. Blueprints are templates that describe all the steps required to provision, configure, and execute tasks on the services and applications being created. You can create a blueprint to represent the architecture of your application and then run the blueprint repeatedly to create an instance, provision it, and launch your applications. A blueprint also defines the lifecycle of an application and its underlying infrastructure, from the creation of the application, through the actions carried out on it, to its termination.

    You can use blueprints to model applications of varying complexity, from provisioning a single virtual machine to provisioning and managing a multi-node, multi-tier application.

    The blueprint editor provides a graphical representation of the various components, enabling you to visualize and configure those components and their dependencies in your environment.

    The result is repeatable and auditable automation.


    What is Nutanix CALM?

    Nutanix Calm allows you to seamlessly select, provision, and manage your business applications across your infrastructure, for both private and public clouds. Nutanix Calm provides application lifecycle management, monitoring, and remediation to manage your heterogeneous infrastructure, for example, VMs or bare-metal servers. Nutanix Calm supports multiple platforms so that you can use a single self-service and automation interface to manage all your infrastructure, and it provides an interactive and user-friendly graphical user interface (GUI).

    Features of Nutanix Calm

    Application Lifecycle Management: Automates the provisioning and deletion of both traditional multi-tiered applications and modern distributed services by using pre-integrated blueprints that make management of applications simple in both private (AHV) and public (AWS) clouds.

    Customizable Blueprints: Simplifies the setup and management of custom enterprise applications by incorporating the elements of each app, including relevant VMs, configurations and related binaries into an easy-to-use blueprint that can be managed by the infrastructure team. More Info on Blueprints.

    Nutanix Marketplace: Publishes application blueprints directly to end users through the Marketplace.

    Governance: Maintains control with role-based governance, limiting user operations based on permissions.

    Hybrid Cloud Management: Automates the provisioning of a hybrid cloud architecture, scaling both multi-tiered and distributed applications across cloud environments, including AWS.


    Enabling AHV Turbo on AOS 5.5

    Nutanix KB 4987

    From AOS 5.5, AHV Turbo replaces the QEMU SCSI data path in the AHV architecture for improved storage performance.

    For maximum performance, ensure the following on your Linux guest VMs:

    Enable the SCSI MQ feature by using the kernel command line; a sketch of one way to set it is included below:
    scsi_mod.use_blk_mq=y (I put this in /etc/udev/rules.d/)

    Kernels older than 3.17 do not support SCSI MQ.
    Kernels 4.14 or later have SCSI MQ enabled by default.
    For Windows VMs, AHV VirtIO drivers will support SCSI MQ in an upcoming release.

    AHV Turbo improves the storage data path performance even without the guest SCSI MQ support.
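
    As referenced above, here is a minimal sketch of setting the parameter on a grub2-based guest such as CentOS/RHEL, assuming the grubby tool is available and the kernel is new enough per the note above:

    $ sudo grubby --update-kernel=ALL --args="scsi_mod.use_blk_mq=y"
    $ sudo reboot
    $ cat /sys/module/scsi_mod/parameters/use_blk_mq   # should print Y after the reboot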


    Perform the following to enable AHV Turbo on AOS 5.5.

    Upgrade to AOS 5.5.
    Upgrade to the AHV version bundled with AOS 5.5.
    Ensure your VMs have SCSI MQ enabled for maximum performance.
    Power cycle your VMs to enable AHV Turbo.

    Note that you do not have to perform this procedure if you are upgrading from AOS 5.5 to a later release. AHV Turbo will be enabled by default on your VMs in that case.
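
    For the power cycle step, note that a reboot initiated inside the guest is not a full power cycle; the VM's underlying process must be restarted. A sketch using acli, with "VM NAME" as a placeholder:

    $ acli vm.off "VM NAME"
    $ acli vm.on "VM NAME"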