AFS Cluster Network Change (Re-IP AFS) : Because Things Change

You can now change the managed/unmanaged network settings of the file server FSVM cluster to support moving the cluster from one datacenter to another, updating VLAN settings, or other cases where you need to change network information. For unmanaged networks, you can change the FSVM cluster virtual IP address or the IP address of each FSVM. Now that it's in the GUI, there's less room for human error and more time for coffee.


AFS 3.0 Brings NFS for the Busy Developer

AFS 3.0 brings NFS v4 support! This is the rocket ship your software builds needed! No longer are your build pipelines stuck with the lonely power of a single NAS head servicing your workloads.

AFS with NFS support enables you to manage a collection of NFS exports distributed across multiple file server VMs (FSVMs, think NAS head). With NFS, users can now use Linux and UNIX clients with AFS. This feature is also hypervisor agnostic.

AFS supports two types of NFS exports:

Distributed. A distributed export (“sharded”) means the data is spread across all FSVMs to help improve performance and resiliency. A distributed export can be used for any application. It is distributed at the top-level directories and does not have files at the root of the export. If you give a developer one share and each build lands in the share as a top-level directory, watch out: you might not have time for coffee.

1 Share, multiple top-level directories, multiple NAS heads.

Non-distributed. A non-distributed export (“non-sharded”) means all data is contained in a single FSVM. A non-distributed export is used for any purpose that does not require a distributed structure. If you have tens or thousands of exports, they will be distributed among all of the FSVMs/NAS heads!
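From the client side, both export types mount the same way. As a minimal sketch (the file server name and export path below are hypothetical placeholders, not from the original article):

```shell
# Mount an AFS NFS v4 export on a Linux client.
# "afs01.example.com" and "/builds" are placeholder names.
sudo mkdir -p /mnt/builds
sudo mount -t nfs -o vers=4 afs01.example.com:/builds /mnt/builds

# For a distributed export, each build would go in its own
# top-level directory, which AFS can place on different FSVMs:
sudo mkdir /mnt/builds/build-1234
```

The sharding is transparent to the client; it just sees one namespace.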

Best of all, one-click upgrades and the Nutanix ease of use make this a slam dunk to deploy and maintain.


Docker Datacenter beta and Nutanix Kubernetes Volume Plugin

I got to test Persistent Volume Claims with the DDC 3.0 beta from Docker and the Nutanix Kubernetes Volume Plugin. The automatic cleanup was pretty interesting, and slightly scary if you didn't know about it!



AHV – Configuring VM to run a nested hypervisor

Nutanix now provides limited support for nested virtualization, specifically nested KVM VMs in an AHV cluster as of AOS with AHV-20170830.58. Enabling nested virtualization will disable live migration and high availability features for the nested VM. You must power off nested VMs during maintenance events that require live migration.

To enable nested virtualization for a VM, run the following command on a CVM.

$ acli vm.update "VM NAME" cpu_passthrough=true
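Once the VM is powered back on, a quick sanity check from inside a Linux guest is to look for the hardware virtualization CPU flags (vmx for Intel, svm for AMD); which flag appears depends on the underlying hardware, so this is a hedged check rather than an official verification step:

```shell
# Inside the guest VM with CPU passthrough enabled:
# count virtualization CPU flags; a nonzero count means the
# guest can see hardware virtualization extensions.
grep -c -E 'vmx|svm' /proc/cpuinfo
```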


Changing the Timezone on the AHV Hosts

NOTE: AOS 5.5 now sets AHV time to UTC, so use the procedure below only on older releases.

The timezone on an AHV host is set to Pacific Standard Time (PST) by default, specifically the America/Los_Angeles configuration. You can change the time zone on all the AHV hosts in the cluster to your preferred time zone. Changing the timezone is safe on a production AHV host because it does not touch the system clock; /etc/localtime is only a symbolic link to a timezone definition that is referenced when the system clock is read.

Perform the following procedure to change the time zone on an AHV host:

Log on to a Controller VM with SSH by using your user credentials.
Locate the template file for your time zone by using one of the following commands:
nutanix@cvm$ ssh root@ ls -F \


nutanix@cvm$ ssh root@ ls -F \

Your time zone might be stored at the root level of zoneinfo or within a subdirectory.
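As an illustrative sketch (assuming the standard /usr/share/zoneinfo layout on the AHV host, run locally here for clarity rather than through the ssh wrapper above):

```shell
# List timezone templates (standard glibc tzdata layout assumed).
ls -F /usr/share/zoneinfo/          # e.g. UTC, Etc/, America/, Europe/, ...
ls -F /usr/share/zoneinfo/America/  # e.g. Los_Angeles, New_York, Chicago, ...
```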

Create a backup of /etc/localtime, and then create a symbolic link to the appropriate template file. For example, if you want to set the time zone to UTC, run the following command:
nutanix@cvm$ allssh 'ssh root@ "date; \
ls -l /etc/localtime; \
mv /etc/localtime /etc/localtime.org; \
ln -s /usr/share/zoneinfo/UTC /etc/localtime; \
date; ls -l /etc/localtime"'


NFS v4 to Enable Acropolis File Services

Acropolis File Services (AFS) is a software-defined, scale-out file storage solution that provides a repository for unstructured data, such as home directories, user profiles, departmental shares, application logs, backups, and archives. Flexible and responsive to workload requirements, AFS is a fully integrated, core component of the Nutanix Enterprise Cloud Platform. At both of our .Next User conferences, in Washington, D.C. and Nice, France, NFS support for AFS was highlighted as a new feature to be added along with the current SMB support in an upcoming release.

NFS has been around almost as long as I have been breathing air as an eighties baby. Being an open standard, NFS has evolved over the years and now has several versions available. In most cases the version used is driven by the client that will be accessing the server. To this end, Nutanix is entering the NFS space first with support for version 4 to go along with the current SMB support. NFS v4 is stable and has been going through iterations since the 2000s. Most recent distributions of platforms like Linux (CentOS, Ubuntu), Solaris, and AIX use NFS v4 as the default client protocol, and the additional attention to security made it an easy choice.

More at the link below.

Full article posted on the Next Community Site


Demo: ESXi Backup with Nutanix Snapshots with HYCU

In addition to supporting Nutanix clusters that use the native AHV hypervisor, HYCU introduces support for Nutanix environments that use the VMware ESXi hypervisor. By using native Nutanix storage-layer snapshot technology, VMware snapshot stuns are avoided.


HYCU – Backing up virtual machines from Replicated Nutanix Snapshots

In remote office/branch office (ROBO) environments, HYCU allows you to back up virtual machines from their replicas on the central site Nutanix cluster. Backing up data from replicas without having to transfer the virtual machine data twice frees up WAN bandwidth for other business processes.

To be able to back up virtual machines from their replicas, make sure that the replication retention on the Nutanix cluster is adjusted to the backup policy's RPO. This allows HYCU to use the Changed Region Tracking (CRT) feature to get a list of changed data since the last snapshot and perform an incremental backup. For example, if the Nutanix schedule interval is two hours and the RPO of the HYCU backup policy is eight hours, the retention policy for the remote site must be set to four or more snapshots (that is, at least the last four snapshots must be kept).
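The retention sizing above reduces to simple division. A minimal sketch of that rule (snapshot count = backup RPO divided by replication interval, rounded up) with the article's numbers:

```shell
# Snapshots to retain so an incremental backup always has a
# reference snapshot: ceil(backup RPO / replication interval).
rpo_hours=8
interval_hours=2
snapshots=$(( (rpo_hours + interval_hours - 1) / interval_hours ))
echo "$snapshots"   # -> 4
```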


    On-Prem or Cloud Hosted Kubernetes?

    Should I host my Kubernetes cluster in the cloud or keep it within my own data center? Nutanix Calm can deploy on-prem or to a cloud provider, so what should I do?

    Some questions I was thinking about when talking to peers. I would love for this list to be built over time.

    Do you require a connection back to the corporate network?
    Where will your logging and management be located? Sending data back to a corp site needs to be factored into the cost.

    Data Dependency
    What data sources are required for your app?
    How much latency can you tolerate?

    How much control do you need for operations?
    Can you find an on-prem solution that provides the updates and upgrades? Is the solution, and the infrastructure it sits on, painless to upgrade and patch?
    Stackdriver is tied to an account on GCP. You can’t share data across accounts.
    Do you have to integrate other on-prem monitoring or logging solutions?

    Last updated January 28, 2018


      The Wish List for Nutanix-Centric Enterprise Backups?

      I was asked today what I would look for in a solution that was Nutanix-focused for backup. Just quickly spitballing, I came up with the following list:

      Backup vendor must use ABS to eliminate the needs for multiple proxies or solve this using a similar method.

      Backup vendor must support, or show a roadmap for supporting, AFS changed file tracking, with a minimum of supporting NAS backup.

      Backup vendor must support backing up volume groups.

      Backup vendor must support backing up Nutanix snapshots that have been replicated from a remote site to a primary DC.

      Backup vendor must support Change Region Tracking for AHV. ESXi is a plus.

      Backup vendor must support synthetic full backups.

      Backup vendor must have native support for your key applications (list your app, like SQL).

      Backup vendor must have an open API.

      Got any others to add?