Mar
    12

    Docker Datacenter beta and Nutanix Kubernetes Volume Plugin

    I got to test Persistent Volume Claims with DDC 3.0 beta from Docker and the Nutanix Kubernetes Volume Plugin. The automatic cleanup was pretty interesting and slightly scary if you didn’t know about it!

    https://next.nutanix.com/blog-40/nutanix-kubernetes-volume-plugin-for-on-demand-choice-27598
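    As a rough illustration of the kind of claim involved (a sketch only; the storage class name nutanix-volume and the claim name demo-claim are placeholders for illustration, not necessarily what the plugin registers), creating and deleting a PVC looks like this. The automatic cleanup described above is consistent with a Delete reclaim policy in Kubernetes, which removes the backing volume once the claim goes away.

    # Sketch: create a PersistentVolumeClaim against a hypothetical Nutanix-backed storage class.
    kubectl apply -f - <<'EOF'
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: demo-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: nutanix-volume
    EOF

    # Deleting the claim triggers the cleanup: with a Delete reclaim policy the
    # backing volume is removed automatically along with the claim.
    kubectl delete pvc demo-claim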

    Feb
    21

    AHV – Configuring VM to run a nested hypervisor

    Nutanix now provides limited support for nested virtualization, specifically nested KVM VMs in an AHV cluster as of AOS 5.5.0.4 with AHV-20170830.58. Enabling nested virtualization will disable live migration and high availability features for the nested VM. You must power off nested VMs during maintenance events that require live migration.

    To enable nested virtualization for a VM, run the following command on a CVM.

    $ acli vm.update "VM NAME" cpu_passthrough=true
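    After the VM has been power cycled so the new CPU configuration takes effect, a quick sanity check inside the guest (a sketch, assuming a Linux guest; the flag is vmx on Intel hosts and svm on AMD) is:

    # Inside the nested VM: a non-zero count means the virtualization
    # extensions are exposed to the guest.
    $ grep -c -E 'vmx|svm' /proc/cpuinfo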

    Feb
    20

    Changing the Timezone on the AHV Hosts

    NOTE: AOS 5.5 now sets AHV host time to UTC, so use the steps below to change the timezone on older releases.

    The timezone on an AHV host is set to Pacific Standard Time (PST) by default, specifically the America/Los_Angeles configuration. You can change the time zone on all the AHV hosts in the cluster to your preferred time zone. Changing the timezone is safe on a production AHV host because it does not touch the system clock; it only changes the symbolic link that tells the system which timezone to apply to the clock.

    Perform the following procedure to change the time zone on an AHV host:

    Log on to a Controller VM with SSH by using your user credentials.
    Locate the template file for your time zone by using one of the following commands:
    nutanix@cvm$ ssh root@192.168.5.1 ls -F \
    /usr/share/zoneinfo

    OR

    nutanix@cvm$ ssh root@192.168.5.1 ls -F \
    /usr/share/zoneinfo/*

    Your time zone might be stored at the root level of zoneinfo or within a subdirectory. For example:
    Japan
    Europe/London
    America/Los_Angeles

    Create a backup of /etc/localtime, and then create a symbolic link to the appropriate template file. For example, if you want to set the time zone to UTC, run the following command:
    nutanix@cvm$ allssh 'ssh root@192.168.5.1 "date; \
    ls -l /etc/localtime; \
    mv /etc/localtime /etc/localtime.org; \
    ln -s /usr/share/zoneinfo/UTC /etc/localtime; \
    date; ls -l /etc/localtime"'
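    To confirm the change on every host, the same allssh pattern from above can be reused as a read-only check:

    nutanix@cvm$ allssh 'ssh root@192.168.5.1 "date; ls -l /etc/localtime"'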

    Feb
    09

    NFS v4 to Enable Acropolis File Services

    Acropolis File Services (AFS) is a software-defined, scale-out file storage solution that provides a repository for unstructured data, such as home directories, user profiles, departmental shares, application logs, backups, and archives. Flexible and responsive to workload requirements, AFS is a fully integrated, core component of the Nutanix Enterprise Cloud Platform. At both of our .Next user conferences, in Washington, D.C. and Nice, France, NFS support for AFS was highlighted as a new feature to be added alongside the current SMB support in an upcoming release.

    NFS has been around almost as long as I have been breathing air as an eighties baby. Being an open standard, NFS has evolved over the years and now has different versions available. In most cases the version used is driven by the client that will be accessing the server. To this end, Nutanix is entering the NFS space first with support for version 4 to go along with the current SMB support. NFS v4 is stable and has been going through iterations since the early 2000s. Most recent distributions of the various platforms, such as Linux (CentOS, Ubuntu), Solaris, and AIX, use NFS v4 as the default client protocol, and the additional attention to security makes it an easy choice.
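    As a rough illustration of the client side (a sketch only; the file server name and export path below are placeholders, not AFS defaults), mounting an NFS v4 export from a recent Linux client is a one-liner:

    $ sudo mkdir -p /mnt/share01
    $ sudo mount -t nfs4 afs-server.mycompany.com:/share01 /mnt/share01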

    More at the link below.

    Full article posted on the Next Community Site

    Feb
    05

    Demo: ESXi Backup with Nutanix Snapshots with HYCU

    In addition to supporting Nutanix clusters that use the native AHV hypervisor, HYCU introduces support for Nutanix environments that use the VMware ESXi hypervisor. By using the native Nutanix storage layer snapshot technology, VMware snapshot stuns are avoided.

    Jan
    31

    HYCU – Backing up virtual machines from Replicated Nutanix Snapshots

    In remote office/branch office (ROBO) environments, HYCU allows you to back up virtual machines from their replicas on the central site Nutanix cluster. Backing up data from replicas without having to transfer the virtual machine data twice frees up WAN bandwidth for other business processes.

    To be able to back up virtual machines from their replicas, make sure that the replication retention on the Nutanix cluster is adjusted to the backup policy’s RPO. This allows HYCU to use the Changed Region Tracking (CRT) feature to get a list of changed data since the last snapshot and perform an incremental backup. For example, if the Nutanix schedule interval is two hours and the RPO of the HYCU backup policy is eight hours, the retention policy for the remote site must be set to 4 or more snapshots (that is, at least the last four snapshots must be kept).
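    The rule of thumb behind that example is simply the backup RPO divided by the snapshot schedule interval, rounded up. A throwaway shell sketch of the math:

    # Minimum snapshots to retain = ceil(backup RPO / snapshot schedule interval)
    $ rpo_hours=8; interval_hours=2
    $ echo $(( (rpo_hours + interval_hours - 1) / interval_hours ))
    4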

      Jan
      28

      On-Prem or Cloud Hosted Kubernetes?

      Should I host my Kubernetes cluster in the cloud or keep it within my own data center? Nutanix Calm can deploy on-prem or to a cloud provider, so what should I do?

      Some questions I was thinking about when talking to peers. I would love for this list to be built up over time.

      Connectivity
      Do you require a connection back to the corporate network?
      Where will your logging and management be located? Sending data back to a corp site needs to be factored into the cost.

      Data Dependency
      What data sources are required for your app?
      How much latency can you tolerate?

      Control
      How much control do you need for operations?
      Can you find an on-prem solution that provides the updates and upgrades? Is the solution, and the infrastructure it sits on, painless to upgrade and patch as well?
      Stackdriver is tied to an account on GCP. You can’t share data across accounts.
      Do you have to integrate with other on-prem monitoring or logging solutions?

      Last updated January 28, 2018

        Jan
        19

        The Wish List for Nutanix-Centric Enterprise Backups?

        I was asked today what I would look for in a solution that was Nutanix-focused for backup. Just quickly spitballing, I came up with the following list:

        Backup vendor must use ABS to eliminate the need for multiple proxies, or solve this using a similar method.

        Backup vendor must support, or show a roadmap for supporting, AFS changed file tracking; at a minimum, it must support NAS backup.

        Backup vendor must support backing up volume groups.

        Backup vendor must support backing up Nutanix snapshots that have been replicated from a remote site to a primary DC.

        Backup vendor must support Changed Region Tracking for AHV. ESXi is a plus.

        Backup vendor must support synthetic full backups.

        Backup vendor must have native support for application X (list your app, such as SQL).

        Backup vendor must have an open API.

        Got any others to add?

        Jan
        15

        Prism Central with Self Service Portal – Cheat Notes

        The Prism Self Service feature represents a special view within Prism Central. While Prism Central enables infrastructure management across clusters, Prism Self Service allows end-users to consume that infrastructure in a self-service manner. Prism Self Service uses the resources provided by a single AHV cluster. (ESXi and Hyper-V are not supported platforms for Prism Self Service.)

          Nutanix recommends using the Chrome or Firefox browsers to deploy or install Prism Central (PC). Nutanix support has a KB if IE is the only allowed browser.
          In Prism Central 5.5, users that are part of nested groups cannot log on to the Prism Central web console.
          Always upgrade PC before Prism Element (your clusters).
          If you want longer retention, go with a bigger PC instance because of its larger disk size.
          Prism Central and its managed clusters are not supported in environments deploying Network Address Translation (NAT).
          Best practice is to keep NCC at the same version on all managed clusters.
          As of Prism Central 5.5, only User Principal Name (UPN) credentials are accepted for logon. The admin user must log on and specify a service account for the directory service in the Authentication Configuration dialog box before authentication for other users can start working.
          Name servers are computers that host a network service for providing responses to queries against a directory service, such as a DNS server. Changes in name server configuration may take up to 5 minutes to take effect. Functions that rely on DNS may not work properly during this time. If Prism Central is running on Hyper-V, you must specify the IP address of the Active Directory Domain Controller server, not the hostname; do not use DNS hostnames or external NTP servers.
          Three primary roles when configuring Prism Self Service

            Prism Central administrator
            Self-service administrator
            Project user

          Prism Central administrator. The Prism Central administrator enables Prism Self Service and creates one or more self-service administrators. Prism Central administrators also create VMs, images, and network configurations that may be consumed by self-service users.

          Self-service administrator. The self-service administrator performs the following tasks:
          Creates a project for each team that needs self-service and adds Active Directory users and groups to the projects.
          Configures roles for project members.
          Publishes VM templates and images to the catalog.
          Monitors resource usage by the various projects and their VMs and members, and then adjusts resource quotas as necessary.
          A Prism Central administrator can also perform any of these tasks, but they are normally delegated to a self-service administrator.
          Self-service administrators have full access to all VMs running on the Nutanix cluster, including infrastructure VMs not tied to a project. Self-service administrators can assign infrastructure VMs to project members, add them to the catalog, and delete them even if they do not have administrative access to Prism Central.


        Setting Up AD with SSP

          Users with the “User must change password at next logon” attribute enabled will not be able to authenticate to Prism Central. Ensure users with this attribute first log in to a domain workstation and change their password before accessing Prism Central. Also, if SSL is enabled on the Active Directory server, make sure that Nutanix has access to that port (open it in the firewall); a quick connectivity check is sketched after this list.
          Port 389 (LDAP). Use this port number (in the following URL form) when the configuration is single domain, single forest, and not using SSL.
          ldap://ad_server.mycompany.com:389
          Port 636 (LDAPS). Use this port number (in the following URL form) when the configuration is single domain, single forest, and using SSL. This requires all Active Directory Domain Controllers have properly installed SSL certificates.
          ldaps://ad_server.mycompany.com:636
          Port 3268 (LDAP – GC). Use this port number (in the following URL form) when the configuration is multiple domain, single forest, and not using SSL.
          ldap://ad_server.mycompany.com:3268
          Port 3269 (LDAPS – GC). Use this port number (in the following URL form) when the configuration is multiple domain, single forest, and using SSL.
          ldaps://ad_server.mycompany.com:3269
          Within a project: Allow collaboration: Check the box to allow any group member to see the VMs, applications, and other objects created by other members of the group. If this box is not checked, group members can see only the objects they create. The role assigned to a group member determines the permissions that user has on objects created by other group members.
          Role Mapping – Prism matches AD group names using case-sensitive checks, so if the group name defined under role mapping in Prism differs in upper/lower case from how it is defined in AD, Prism will fail to perform the name mapping for the group.

          Also ensure that users add “@domain_name” to the username when logging in to Prism Central.
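          As mentioned in the list above, the directory service port has to be reachable from Prism Central. A quick connectivity check (a sketch; it assumes nc is available on the CVM and reuses the example hostname and LDAPS port from above) is:

          nutanix@cvm$ nc -zv ad_server.mycompany.com 636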

        Dec
        20

        Docker Swarm with Nutanix Calm

        Review -> What is Nutanix CALM?

        Nutanix Calm provides a set of pre-seeded application blueprints that are available to you for consumption.

        Docker Swarm is a clustering and scheduling tool for Docker containers. There is a lot of hype around Kubernetes right now, and rightly so, but Swarm is a great tool and still getting better. One of the blueprints available with Calm is Docker Swarm. With Swarm, IT administrators and developers can establish and manage a cluster of Docker nodes as a single virtual system. Swarm mode also exists natively for Docker Engine, the layer between the OS and container images. Swarm mode integrates the orchestration capabilities of Docker Swarm into Docker Engine. For AHV, the default blueprint creates 3 master VMs with 2 cores, 4 GB RAM, a 10 GB root disk, and 3x10 GB data disks. For AWS, the default blueprint creates 3 slave VMs of type t2.medium with 3x10 GB data disks.


        Installed version: Docker 17.09.0.ce

        Variables

        DOCKER_VERSION - (Mandatory) Docker version default.
        INSTANCE_PUBLIC_KEY - (Mandatory) Instance public key (only for AHV).
        Click the Marketplace tab.
        Click the Docker Swarm blueprint application.
        Click Launch.
        The blueprint application launch page is displayed.

        Enter a name for the application in the Name of the Application field. For the application blueprint naming conventions, see Launching an Application Blueprint.
        Select the Application profile.
        If the application profile is Nutanix then do the following.
        (Optional) Change the VM name.
        (Optional) Change the number of vCPUs and RAM.
        Select the NIC from the drop-down menu.
        Download the CentOS 7 image from the repository.
        Enter the private key.
        If the application profile is AWS then do the following.
        (Optional) Change the VM name.
        Select the instance type.
        Select a CentOS 7 image as per the region and AZ.
        Select the VPC and subnet.
        Ensure that the security groups allow ICMP so that the master and slave nodes can ping each other.

        Select the SSH keys.
        Repeat the above steps for the Docker slave service.
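        Once the application shows as running in Calm, a quick sanity check (not part of the blueprint itself) is to list the swarm members from one of the master VMs; every master and slave node should report a Ready status:

        # Run on a Swarm master VM to list all cluster members and their status.
        $ docker node ls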