Jul
13

First HCI Vendor in the Gartner Magic Quadrant with Native Local Key Manager, New in AOS 5.8

With the release of AOS 5.8, Nutanix brings to market the first native key manager for an HCI that goes beyond using local server management tools. To help reduce cost and complexity, Nutanix added a native Local Key Manager (LKM) for all clusters of three nodes and above. The LKM runs as a service distributed among all of the nodes. It is easily activated from within Prism Element, so all customers can enable encryption without yet another silo to manage. Customers looking to simplify their infrastructure operations now have one-click infrastructure for their key manager as well.

External Key Managers (EKMs) usually must be purchased separately, with both software and hardware costs. Since the Nutanix LKM runs natively within the Controller Virtual Machine (CVM), it is highly available and there is no variable add-on pricing based on the number of nodes; every time you add a node, you know the final cost. There is also peace of mind when you upgrade your cluster, because the key management services are upgraded along with it. With the infrastructure and key management services upgraded in lockstep, you are assured of your security posture and availability by staying within the support matrix.

The native LKM service uses a FIPS 140-2 crypto module to keep all of the data encryption keys safe. Data is encrypted using a data encryption key (DEK), and there is a DEK for every storage container. The DEK is typically then encrypted by a Key Encryption Key (KEK) that is sent to an EKM. Now that Nutanix supports its own native LKM, we take the KEK and wrap it with a 256-bit encryption key called the Machine Encryption Key (MEK). The MEK is distributed among all of the CVMs in the cluster using a splitting algorithm. No separate virtual machines are needed to support the native LKM.

Since the MEK is shared, each node can read what the others have written. To reconstruct the keys, a majority of the nodes must be present. We use K = Ceiling(N/2) to determine the majority, where N is the number of nodes. So in an 11-node cluster (N = 11), we would need six nodes present to decrypt the data.
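A quick sanity check of that threshold for a few cluster sizes (a trivial shell sketch; for whole N, Ceiling(N/2) equals (N+1)/2 in integer arithmetic):

$ for N in 3 5 11; do echo "N=$N -> K=$(( (N + 1) / 2 ))"; done
N=3 -> K=2
N=5 -> K=3
N=11 -> K=6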

EKM and LKM workflows

Nutanix also provides an easy way to back up your Data Encryption Keys (DEK) from Prism. There will be DEK for each storage container. If a new storage container is created an alert will be generated encouraging administrators to take a backup. The backup is password protected and should be securely stored. With the backup in hand, if catastrophic event happened in your data centre you could replicate the data back and reimport the backup keys to get your environment up and running.

Nutanix Backup for DEK

The Nutanix Local Key Manager is another step toward enabling security for everyone. Stay safe, and if you have questions please drop them in the comments.

Jun
25

RHEL 7 STIG Implementation in Nutanix CVM #Security

Nutanix leverages SaltStack and SCMA to self-heal any deviation from the security baseline configuration of the operating system and hypervisor so that the system remains in compliance. If any component is found to be non-compliant, it is set back to the supported security settings without any intervention. To achieve this objective, Nutanix has implemented the Controller VM to support compliance with the RHEL 7 STIG as published by DISA. Acropolis Operating System (AOS) 5.1 was the last version for which we published our own STIGs; AOS 5.5.3+ and 5.6+ are aligned to the RHEL 7 STIG.

The Nutanix platform and all products leverage the Security Configuration Management Automation (SCMA) framework to ensure that services are constantly inspected for variance from the security policy. Nutanix has implemented SCMA to check multiple security entities for both Nutanix storage and AHV. Nutanix automatically reports inconsistencies and reverts them to the baseline. With SCMA, you can schedule the STIG checks to run hourly, daily, weekly, or monthly. The STIG checks have the lowest system priority within the virtual storage controller, ensuring that security checks do not interfere with platform performance.
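As a sketch of how that schedule is set (assuming the ncli security-params interface from the AOS security guides of this generation; parameter names may vary by release), from any CVM:

nutanix@cvm$ ncli cluster get-cvm-security-config
nutanix@cvm$ ncli cluster edit-cvm-security-params schedule=daily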

The STIG rules are capable of securing the boot loader, packages, file system, booting and service control, file ownership, authentication, kernel, and logging.

Example: STIG rules for Authentication
Prohibit direct root login, lock system accounts other than root, enforce several password maintenance details, cautiously configure SSH, enable screen-locking, configure user shell defaults, and display warning banners.
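As an illustration only, and not the literal Nutanix baseline, the SSH portion of that hardening maps to sshd_config directives like the following (values taken from the published RHEL 7 STIG):

# Hypothetical excerpt of a STIG-aligned /etc/ssh/sshd_config
PermitRootLogin no         # prohibit direct root login over SSH
Banner /etc/issue          # display a warning banner before authentication
ClientAliveInterval 600    # with the setting below, idle sessions end after 10 minutes
ClientAliveCountMax 0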

Nutanix has taken on this ownership so that the burden of securing the product does not fall on our customers. You’re only as secure as the last time you checked.

Feb
21

AHV – Configuring VM to run a nested hypervisor

Nutanix now provides limited support for nested virtualization, specifically nested KVM VMs in an AHV cluster, as of AOS 5.5.0.4 with AHV-20170830.58. Enabling nested virtualization disables live migration and high availability features for the nested VM, so you must power off nested VMs during maintenance events that require live migration.

To enable nested virtualization for a VM, run the following command on a CVM.

nutanix@cvm$ acli vm.update "VM NAME" cpu_passthrough=true
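The passthrough setting typically takes effect at the next power-on, so a minimal sequence (a sketch assuming the standard acli power subcommands) looks like:

nutanix@cvm$ acli vm.off "VM NAME"
nutanix@cvm$ acli vm.update "VM NAME" cpu_passthrough=true
nutanix@cvm$ acli vm.on "VM NAME"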

Feb
20

Changing the Timezone on the AHV Hosts

NOTE: AOS 5.5 now sets AHV time to UTC, so use the procedure below only on older releases.

By default, the time zone on an AHV host is set to Pacific Standard Time (PST), specifically the America/Los_Angeles configuration. You can change the time zone on all the AHV hosts in the cluster to your preferred time zone. Changing the time zone is safe on a production AHV host because it does not touch the system clock; it only changes how the system clock is interpreted, via the symbolically linked time zone file.

Perform the following procedure to change the time zone on an AHV host:

Log on to a Controller VM with SSH by using your user credentials.
Locate the template file for your time zone by using one of the following commands:
nutanix@cvm$ ssh root@192.168.5.1 ls -F \
/usr/share/zoneinfo

OR

nutanix@cvm$ ssh root@192.168.5.1 ls -F \
/usr/share/zoneinfo/*

Your time zone might be stored at the root level of zoneinfo or within a subdirectory. For example:
Japan
Europe/London
America/Los_Angeles

Create a backup of /etc/localtime, and then create a symbolic link to the appropriate template file. For example, if you want to set the time zone to UTC, run the following command:
nutanix@cvm$ allssh 'ssh root@192.168.5.1 "date; \
ls -l /etc/localtime; \
mv /etc/localtime /etc/localtime.org; \
ln -s /usr/share/zoneinfo/UTC /etc/localtime; \
date; ls -l /etc/localtime"'
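Optionally, confirm the change on every host by checking the date across the cluster:

nutanix@cvm$ allssh 'ssh root@192.168.5.1 date'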

Feb
09

NFS v4 to Enable Acropolis File Services

Acropolis File Services (AFS) is a software-defined, scale-out file storage solution that provides a repository for unstructured data, such as home directories, user profiles, departmental shares, application logs, backups, and archives. Flexible and responsive to workload requirements, AFS is a fully integrated, core component of the Nutanix Enterprise Cloud Platform. At both of our .Next user conferences, in Washington, D.C. and Nice, France, NFS support for AFS was highlighted as a new feature to be added alongside the current SMB support in an upcoming release.

NFS has been around almost as long as I have been breathing air as an eighties baby. Being an open standard, NFS has evolved over the years and now has different versions available. In most cases the version used is driven by the client that will be accessing the server. To this end, Nutanix is entering the NFS space first with support for version 4 to go along with the current SMB support. NFS v4 is stable and has been going through iterations since the early 2000s. Recent distributions of various platforms like Linux (CentOS, Ubuntu), Solaris, and AIX use NFS v4 as the default client protocol, and its additional attention to security made it an easy choice.

More at the link below.

Full article posted on the Next Community Site

Feb
05

Demo: ESXi Backup with Nutanix Snapshots and HYCU

In addition to supporting Nutanix clusters that use the native AHV hypervisor, HYCU now supports Nutanix environments that use the VMware ESXi hypervisor. By using the native Nutanix storage-layer snapshot technology, VMware snapshot stuns are avoided.

Jan
19

The Wish List for Nutanix-Centric Enterprise Backups?

I was asked today what I would look for in a solution that was Nutanix-focused for backup. Just quickly spitballing, I came up with the following list:

Backup vendor must use ABS to eliminate the need for multiple proxies, or solve this using a similar method.

Backup vendor must support, or show a roadmap for supporting, AFS change file tracking; at a minimum, NAS backup.

Backup vendor must support backing up volume groups.

Backup vendor must support backing up Nutanix snapshots that have been replicated from a remote site to a primary DC.

Backup vendor must support Changed Regions Tracking for AHV; ESXi is a plus.

Backup vendor must support synthetic full backups.

Backup vendor must have native support for application X (name your app, such as SQL).

Backup vendor must have an open API.

Got any others to add?

Jan
15

Prism Central with Self Service Portal – Cheat Notes

The Prism Self Service feature represents a special view within Prism Central. While Prism Central enables infrastructure management across clusters, Prism Self Service allows end-users to consume that infrastructure in a self-service manner. Prism Self Service uses the resources provided by a single AHV cluster. (ESXi and Hyper-V are not supported platforms for Prism Self Service.)

    Nutanix recommends using the Chrome or Firefox browsers to deploy or install Prism Central (PC). Nutanix support has a KB if IE is the only allowed browser.
    In Prism Central 5.5, users that are part of nested groups cannot log on to the Prism Central web console.
    Always upgrade PC before Prism Element (your clusters).
    If you want longer retention, go with a bigger PC instance, which has a larger disk.
    Prism Central and its managed clusters are not supported in environments deploying Network Address Translation (NAT).
    Best practice is to keep the NCC version the same on all managed clusters.
    As of Prism Central 5.5, only User Principal Name (UPN) credentials are accepted for logon (for example, user@mycompany.com rather than MYCOMPANY\user). The admin user must log on and specify a service account for the directory service in the Authentication Configuration dialog box before authentication for other users can start working.
    Name servers are computers that host a network service for providing responses to queries against a directory service, such as a DNS server. Changes in name server configuration may take up to 5 minutes to take effect, and functions that rely on DNS may not work properly during this time. If Prism Central is running on Hyper-V, you must specify the IP address of the Active Directory Domain Controller server, not the hostname; do not use DNS hostnames or external NTP servers.
    There are three primary roles when configuring Prism Self Service:

      Prism Central administrator
      Self-service administrator
      Project user

    Prism Central administrator. The Prism Central administrator enables Prism Self Service and creates one or more self-service administrators. Prism Central administrators also create VMs, images, and network configurations that may be consumed by self-service users.

    Self-service administrator. The self-service administrator performs the following tasks:
    Creates a project for each team that needs self-service and adds Active Directory users and groups to the projects.
    Configures roles for project members.
    Publishes VM templates and images to the catalog.
    Monitors resource usage by the various projects and their VMs and members, and then adjusts resource quotas as necessary.
    A Prism Central administrator can also perform any of these tasks, but they are normally delegated to a self-service administrator.
    Self-service administrators have full access to all VMs running on the Nutanix cluster, including infrastructure VMs not tied to a project. Self-service administrators can assign infrastructure VMs to project members, add them to the catalog, and delete them even if they do not have administrative access to Prism Central.


Setting Up AD with SSP

    Users with the “User must change password at next logon” attribute enabled will not be able to authenticate to Prism Central. Ensure users with this attribute first log on to a domain workstation and change their password before accessing Prism Central. Also, if SSL is enabled on the Active Directory server, make sure that Nutanix has access to that port (open in the firewall).
    Port 389 (LDAP). Use this port number (in the following URL form) when the configuration is single domain, single forest, and not using SSL.
    ldap://ad_server.mycompany.com:389
    Port 636 (LDAPS). Use this port number (in the following URL form) when the configuration is single domain, single forest, and using SSL. This requires all Active Directory Domain Controllers have properly installed SSL certificates.
    ldaps://ad_server.mycompany.com:636
    Port 3268 (LDAP – GC). Use this port number when the configuration is multiple domain, single forest, and not using SSL.
    Port 3269 (LDAPS – GC). Use this port number when the configuration is multiple domain, single forest, and using SSL.
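    The same URL form applies to the Global Catalog ports, for example ldap://ad_server.mycompany.com:3268 and ldaps://ad_server.mycompany.com:3269.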
    Within a project: Allow collaboration: Check the box to allow any group member to see the VMs, applications, and other objects created by other members of the group. If this box is not checked, group members can see only the objects they create. The role assigned to a group member determines the permissions that user has on objects created by other group members.
    Role Mapping – Prism matches AD group names using case-sensitive checks, so if the group name defined under the role mapping in Prism differs in upper/lower case from how it is defined in AD, Prism will fail to perform the name mapping for the group.

    Also ensure that users append “@domain_name” to the username when logging on to Prism Central.

Dec
20

Docker Swarm with Nutanix Calm

Review -> What is Nutanix CALM?

Nutanix Calm provides a set of pre-seeded application blueprints that are available to you for consumption.

Docker Swarm is a clustering and scheduling tool for Docker containers. There is a lot of hype around Kubernetes right now, and rightly so, but Swarm is a great tool and still getting better. One of the blueprints available with Calm is Docker Swarm. With Swarm, IT administrators and developers can establish and manage a cluster of Docker nodes as a single virtual system. Swarm mode also exists natively for Docker Engine, the layer between the OS and container images; it integrates the orchestration capabilities of Docker Swarm into Docker Engine. For AHV, the default blueprint creates 3 master VMs, each with 2 cores, 4 GB RAM, a 10 GB root disk, and 3x10 GB data disks. For AWS, the default blueprint creates 3 slave VMs of type t2.medium with 3x10 GB data disks.
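For orientation, Swarm mode is driven by a handful of engine-native commands (generic Docker commands, not specific to what the Calm blueprint automates for you):

# On the first manager node: initialize the swarm
$ docker swarm init --advertise-addr <MANAGER-IP>
# The init output prints a join token; run the join on each additional node:
$ docker swarm join --token <TOKEN> <MANAGER-IP>:2377
# From a manager, verify cluster membership:
$ docker node ls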


Installed version: Docker 17.09.0.ce

Variables

DOCKER_VERSION - (Mandatory) Docker version; defaults to the version listed above.
INSTANCE_PUBLIC_KEY - (Mandatory) Instance public key (only for AHV).
Click the Marketplace tab.
Click the Docker Swarm blueprint application.
Click Launch.
The blueprint application launch page is displayed.

Enter a name for the application in the Name of the Application field. For the application blueprint naming conventions, see Launching an Application Blueprint.
Select the Application profile.
If the application profile is Nutanix, then do the following.
(Optional) Change the VM name.
(Optional) Change the number of vCPUs and RAM.
Select the NIC from the drop-down menu.
Download the CentOS 7 image from the repository.
Enter the private key.
If the application profile is AWS, then do the following.
(Optional) Change the VM name.
Select the instance type.
Select a CentOS 7 image as per the region and AZ.
Select the VPC and subnet.
Ensure that the security groups allow ICMP so that the master and slave nodes can ping each other.

Select the SSH keys.
Repeat the above steps for the Docker slave services.

Dec
15

Nutanix Calm Blueprints Overview

Nutanix Calm Overview

A blueprint is the framework for every application that you model by using Nutanix Calm. Blueprints are templates that describe all the steps required to provision, configure, and execute tasks on the services and applications that are created. You can create a blueprint to represent the architecture of your application and then run the blueprint repeatedly to create an instance, provision, and launch your applications. A blueprint also defines the lifecycle of an application and its underlying infrastructure, from the creation of the application through the actions carried out on it until its termination.

You can use blueprints to model applications of various complexities, from simply provisioning a single virtual machine to provisioning and managing a multi-node, multi-tier application.

The blueprint editor provides a graphical representation of the various components, enabling you to visualize and configure the components and their dependencies in your environment. The result is repeatable and auditable automation.