Archives for December 2017


Docker Swarm with Nutanix Calm

Review -> What is Nutanix CALM?

Nutanix Calm provides a set of pre-seeded application blueprints that are available to you for consumption.

Docker Swarm is a clustering and scheduling tool for Docker containers. There is a lot of hype around Kubernetes right now, and rightly so, but Swarm is a great tool and still getting better. One of the blueprints available with Calm is Docker Swarm. With Swarm, IT administrators and developers can establish and manage a cluster of Docker nodes as a single virtual system. Swarm mode also exists natively in Docker Engine, the layer between the OS and container images; it integrates the orchestration capabilities of Docker Swarm into Docker Engine itself.

For AHV, by default the blueprint creates 3 master VMs with 2 cores, 4 GB RAM, a 10 GB root disk, and 3x10 GB data disks. For AWS, by default the blueprint creates 3 slave VMs of type t2.medium with 3x10 GB data disks.
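The blueprint drives all of this automatically, but the underlying Swarm-mode CLI flow is short enough to sketch by hand. A minimal sketch, assuming a reachable Docker daemon; the advertise address and service name are made-up examples:

```shell
# Sketch of what the blueprint automates: forming a swarm by hand.
# The advertise address and service name below are hypothetical.
# Guarded so the script is harmless on machines without a Docker daemon.
if docker info >/dev/null 2>&1; then
    docker swarm init --advertise-addr 10.0.0.11        # on the first master
    # 'swarm init' prints a join token; the other nodes then run:
    #   docker swarm join --token <token> 10.0.0.11:2377
    docker node ls                                      # list cluster members
    docker service create --replicas 3 --name web nginx # schedule a service
else
    echo "no Docker daemon here; the commands above are for reference"
fi
SWARM_SKETCH_DONE=1
```

On a working swarm, `docker service ls` would then show the service and its replica count.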

Installed version: Docker 17.09.0-ce


DOCKER_VERSION - (Mandatory) Docker version (a default is provided).
INSTANCE_PUBLIC_KEY - (Mandatory) Instance public key (only for AHV).
Click the Marketplace tab.
Click the Docker Swarm blueprint application.
Click Launch.
The blueprint application launch page is displayed.

Enter a name for the application in the Name of the Application field. For the application blueprint naming conventions, see Launching an Application Blueprint.
Select the Application profile.
If the application profile is Nutanix, do the following.
(Optional) Change the VM name.
(Optional) Change the number of vCPUs and RAM.
Select the NIC from the drop-down menu.
Download the CentOS 7 image from the repository.
Enter the private key.
If the application profile is AWS, do the following.
(Optional) Change the VM name.
Select the instance type.
Select a CentOS 7 image as per the region and AZ.
Select the VPC and subnet.
Ensure that the security groups allow ICMP so that the master and slave nodes can ping each other.

Select the SSH keys.
Repeat the above steps for the Docker slave services.


    Nutanix Calm Blueprints Overview

    Nutanix Calm Overview

    A blueprint is the framework for every application that you model by using Nutanix Calm. Blueprints are templates that describe all the steps that are required to provision, configure, and execute tasks on the services and applications that are created. You can create a blueprint to represent the architecture of your application and then run the blueprint repeatedly to create an instance, provision, and launch your applications. A blueprint also defines the lifecycle of an application and its underlying infrastructure starting from the creation of the application to the actions that are carried out on a blueprint until the termination of the application.

    You can use blueprints to model the applications of various complexities; from simply provisioning a single virtual machine to provisioning and managing a multi-node, multi-tier application.

    Blueprint editor provides a graphical representation of various components that enable you to visualize and configure the components and their dependencies in your environment.

    Blueprints give you repeatable and auditable automation.


    What is Nutanix CALM?

    Nutanix Calm allows you to seamlessly select, provision, and manage your business applications across your infrastructure, for both private and public clouds. Nutanix Calm provides application lifecycle management, monitoring, and remediation to manage your heterogeneous infrastructure, for example, VMs or bare-metal servers. Nutanix Calm supports multiple platforms so that you can use a single self-service and automation interface to manage all your infrastructure, and it provides an interactive, user-friendly graphical user interface (GUI).

    Features of Nutanix Calm

    Application Lifecycle Management: Automates the provisioning and deletion of both traditional multi-tiered applications and modern distributed services by using pre-integrated blueprints that make application management simple in both private (AHV) and public (AWS) clouds.

    Customizable Blueprints: Simplifies the setup and management of custom enterprise applications by incorporating the elements of each app, including relevant VMs, configurations and related binaries into an easy-to-use blueprint that can be managed by the infrastructure team. More Info on Blueprints.

    Nutanix Marketplace: Publishes application blueprints directly to end users through the Marketplace.

    Governance: Maintains control with role-based governance, limiting user operations based on permissions.

    Hybrid Cloud Management: Automates the provisioning of a hybrid cloud architecture, scaling both multi-tiered and distributed applications across cloud environments, including AWS.


    Enabling AHV Turbo on AOS 5.5

    Nutanix KB 4987

    From AOS 5.5, AHV Turbo replaces the QEMU SCSI data path in the AHV architecture for improved storage performance.

    For maximum performance, ensure the following on your Linux guest VMs:

    Enable the SCSI MQ feature by using the kernel command line:
    scsi_mod.use_blk_mq=y (I put this in a rule under /etc/udev/rules.d/)

    Kernels older than 3.17 do not support SCSI MQ.
    Kernels 4.14 or later have SCSI MQ enabled by default.
    For Windows VMs, AHV VirtIO drivers will support SCSI MQ in an upcoming release.

    AHV Turbo improves the storage data path performance even without the guest SCSI MQ support.
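    To check where a given guest stands, the current value can be read straight from sysfs. A small sketch (standard sysfs path; the grub steps in the comments assume a grub2-based distro such as CentOS 7):

```shell
# Read whether the multi-queue SCSI path is active in this guest.
# The sysfs path is standard; it is absent on kernels without SCSI MQ.
if [ -r /sys/module/scsi_mod/parameters/use_blk_mq ]; then
    SCSI_MQ=$(cat /sys/module/scsi_mod/parameters/use_blk_mq)   # Y when enabled
else
    SCSI_MQ="unavailable (module not loaded, or kernel older than 3.17)"
fi
echo "scsi_mod.use_blk_mq: ${SCSI_MQ}"

# To set it persistently via GRUB instead of udev (grub2-based distros):
#   1. Add scsi_mod.use_blk_mq=y to GRUB_CMDLINE_LINUX in /etc/default/grub
#   2. grub2-mkconfig -o /boot/grub2/grub.cfg
#   3. Reboot the VM
```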


    Perform the following to enable AHV Turbo on AOS 5.5.

    Upgrade to AOS 5.5.
    Upgrade to the AHV version bundled with AOS 5.5.
    Ensure your VMs have SCSI MQ enabled for maximum performance.
    Power cycle your VMs to enable AHV Turbo.

    Note that you do not have to perform this procedure if you are upgrading from AOS 5.5 to a later release; in that case AHV Turbo will be enabled on your VMs by default.


    Running IT: Docker and Cilium for Enterprise Network Security for Micro-Services

    Well, I think 40 minutes is about as long as I can last watching an IT-related video while running; after that I need music! This time I watched another video from DockerCon: Cilium – Kernel Native Security & DDOS Mitigation for Microservices with BPF.

    Skip to 7:23: the quick overview of the presentation is that managing iptables to lock down micro-services isn't going to scale and will be almost impossible to manage. Cilium is open source software for providing and transparently securing network connectivity and load balancing between application workloads such as application containers or processes. Cilium operates at Layer 3/4 to provide traditional networking and security services, and at Layer 7 to protect and secure the use of modern application protocols such as HTTP, gRPC, and Kafka. BPF is used by a lot of the big web-scale properties like Facebook and Netflix to secure their environments and to help with troubleshooting. Like anything with a lot of options, there are a lot of ways to shoot yourself in the foot, so Cilium provides the wrapper to get it easily deployed and configured.

    The presentation uses the example of locking down a Kafka cluster at Layer 7, instead of leaving the whole API wide open, which is what would happen if you were only using iptables. Kafka is used for building real-time pipelines and streaming apps. Kafka is horizontally scalable and fault-tolerant, so it's a good choice to run in Docker. Kafka is used by a third of Fortune 500 companies.

    Cilium Architecture

    Cilium Integrates with:


    Cilium runs as an agent on every host.
    Cilium can enforce policy between the host and a Docker micro-service, and even between two containers on the same host.

    The demo didn’t pan out, but the second half of the presentation covers Cilium using BPF with XDP. XDP is a further step in this evolution: it runs a specific flavor of BPF program from the network driver, with direct access to the packet’s DMA buffer. This is, by definition, the earliest possible point in the software stack where programs can be attached, allowing for a programmable, high-performance packet processor in the Linux kernel networking data path.

    Since XDP hooks in earlier, at the NIC, than iptables with ipset, it saves CPU, loads rules faster, and keeps latency under load much lower.
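    To make the Kafka example concrete, here is a sketch of the kind of Layer 7 rule described in the talk, based on my reading of Cilium's JSON policy format; the labels and topic name are invented, so treat it as illustrative rather than copy-paste ready:

```shell
# Hypothetical example: only workloads labeled app=billing may produce to
# the "payments" topic on the Kafka brokers; everything else hitting port
# 9092 is denied at Layer 7. All label and topic names are made up.
cat > kafka-policy.json <<'EOF'
[{
  "endpointSelector": {"matchLabels": {"app": "kafka"}},
  "ingress": [{
    "fromEndpoints": [{"matchLabels": {"app": "billing"}}],
    "toPorts": [{
      "ports": [{"port": "9092", "protocol": "TCP"}],
      "rules": {"kafka": [{"role": "produce", "topic": "payments"}]}
    }]
  }]
}]
EOF
# With a Cilium agent running, you would load it with:
#   cilium policy import kafka-policy.json
python3 -m json.tool kafka-policy.json >/dev/null && echo "policy parses as JSON"
```

With the policy loaded, a client without the app=billing label would be blocked from producing, instead of having the whole broker API exposed.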


    Handling Network Partition with Near-Sync

    Near-Sync is GA!!!

    Part 1: Near-Sync Primer on Nutanix
    Part 2: Recovery Points and Schedules with Near-Sync

    Perform the following procedure if a network partition (network isolation) occurs between the primary and remote site.

    The following scenarios may occur after a network partition:

    1. The network between the primary site (site A) and the remote site (site B) is restored and both sites are working.
    The primary site tries to transition back into NearSync between site A and site B automatically. No manual intervention is required.

    2. Site B is not working or was destroyed (for whatever reason), and you create a new site (site C) and want to establish a sub-hourly schedule from A to C.
    Configure the sub-hourly schedule from A to C. The configuration between A and C should succeed; no other manual intervention is required.

    3. Site A is not working or was destroyed (for whatever reason), and you create a new site (site C) and want to configure a sub-hourly schedule from B to C.
    Activate the protection domain on site B and set up the schedule between site B and site C.
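    For scenario 3, the activation step can also be done from a CVM command line rather than Prism. A rough sketch, assuming ncli's protection domain commands; the PD name is hypothetical:

```shell
# Activate the protection domain on site B so a new sub-hourly schedule
# to site C can be configured. "pd-sql01" is a made-up PD name.
if command -v ncli >/dev/null 2>&1; then
    ncli pd activate name=pd-sql01
else
    echo "ncli not found; run this from a Nutanix CVM"
fi
NEARSYNC_SKETCH_DONE=1
```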


    Supported Anti-Virus Offload for Nutanix Native File Services(AFS)

    As the list grows with releases I will try to keep this updated.

    As of AFS 2.2.1, the supported ICAP-based AV vendors are:

    McAfee Virus Scan Enterprise for Storage 1.2.0

    Symantec Protection Engine 7.9.0

    Kaspersky Security 10

    Sophos Antivirus

    Nutanix recommends that the following file extensions for user profiles be added to the exclusion list when using AFS antivirus scanning:

    Symantec Pre-Req

    Each Symantec ICAP server needs the hot fix ( installed from

    Kaspersky Pre-Req
    When running the Database Update task with a network folder as the update source, you might encounter an error after entering credentials.


    To resolve this, download and install critical fix 13017 provided by Kaspersky.

    Download Link: