Sep
16

Serve Files with Enterprise Cloud Agility, Security, and Availability with Acropolis File Services


Nutanix continues on its Enterprise Cloud journey at the .NEXT On-Tour event in Bangkok, Thailand. Today, we are proud to announce that we are planning to support Acropolis File Services (AFS) on our storage-only nodes, the NX-6035C-G5. Acropolis File Services provides a simple and scalable solution for hosting user and shared department files across a centralized location with a single namespace. With Acropolis File Services, administrators no longer waste time with manual configuration or need Active Directory and load balancing expertise. If and when released, this will make NX-6035C-G5 nodes even more versatile, adding to the current capabilities of serving as a backup or replication target and running Acropolis Block Services.


Aug
07

Battle Royale: View Composer VS Instant-Clones – Deploy

Horizon 7 added Instant Clones, with the ability to clone a full desktop in 4-5 seconds. What's the catch? Not really a catch, but it's rarely explained that it takes a bit of time to prep the desktops. For testing purposes, I decided to clone 100 desktops with View Composer and 100 desktops with Instant Clones.

For these tests I used an NX-3460-G4 and Windows 10 desktops with 2 vCPU and 2 GB of RAM.

Impact of cloning 100 desktops with View Composer

[Graph: hypervisor and disk IOPS while cloning 100 desktops with View Composer]

You can see the hypervisor IOPS and disk IOPS. The real impact shows up in what is happening on the backend and in the CPU used to create the desktops: roughly 16,000 IOPS to create the desktops with Composer.

Impact of cloning 100 desktops with Instant-Clones

[Graph: hypervisor and disk IOPS while cloning 100 desktops with Instant Clones]
You can see an initial bump in IOPS due to the replica that has to be copied without VAAI. The replica also has to get fingerprinted, which does take some time; in my testing it took about eight minutes. The reduction in IOPS is amazing. While you still need performance for running the desktops, you don't have to worry about provisioning destroying your performance. Disk IOPS peaked at only ~1,200.

Summary VC vs Instant Clone

Deploy 100 Desktops
View Composer: 5 min
Instant Clone: 14 min
– virtual disk digest: 8.22 min
– clone 100 desktops: 1.4 min

While the overall process took longer, the impact is a lot better with Instant Clones. With hundreds of desktops, Instant Clones are a powerful tool to have in your back pocket. Once Instant Clones get GPU support, I think they will really take off as the default choice. If you have questions related to performance, I encourage you to talk to your Nutanix SE, who can put you in touch with the Solution and Performance Team at Nutanix.


Jul
19

Securing the Supply Chain with Nutanix and Docker #dockercon2016

I was watching the below video from DockerCon 2016, and there were a lot of striking similarities between what Nutanix and Docker are doing to secure working environments for the Enterprise Cloud. There is no sense turning the alarm on for your house and then not locking the doors. You need to close all the gaps for your infrastructure and the applications that live on top of it.

The most interesting part of the session for me was the section on security scanning and gating. Docker has Security Scanning which is available as an add-on to Docker hosted private repositories on both Docker Cloud and Docker Hub. Scans run each time a build pushes a new image to your private repository. They also run when you add a new image or tag. Most scans complete within an hour, however large repositories may take up to 24 hours to scan. The scan traverses each layer of the image, identifies the software components in each layer, and indexes the SHA of each component.
[Image: Docker Security Scanning workflow]
The scan compares the SHA of each component against the Common Vulnerabilities and Exposures (CVE) database. The CVE is a “dictionary” of known information security vulnerabilities. When the CVE database is updated, the service reviews the indexed components for any that match the new vulnerability. If the new vulnerability is detected in an image, the service sends an email alert to the maintainers of the image.

A single component can contain multiple vulnerabilities or exposures and Docker Security Scanning reports on each one. You can click an individual vulnerability report from the scan results and navigate to the specific CVE report data to learn more about it.
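
As a rough illustration of that workflow (the organization, repository, and tag names below are made up for the example), building an image and pushing it to a scanning-enabled private repository is what kicks off a scan:

docker build -t myorg/myapp:1.0 .   # build the image locally (hypothetical repo and tag)
docker login                        # authenticate to Docker Hub / Docker Cloud
docker push myorg/myapp:1.0         # pushing to the scanning-enabled private repo triggers a scan

The scan results then show up alongside that tag in the repository view, which is where the per-CVE reports described above can be drilled into.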

On the Nutanix side of the fence, all code is scanned with two different vulnerability scanners at every step of the development life cycle. To top that off, Nutanix already applies an intrinsic security baseline, and we monitor and self-heal that baseline with SCMA (Security Configuration Management Automation), leveraging the SaltStack framework so that your production systems can self-heal from any deviation and are always in compliance. Features like two-factor authentication (2FA) and cluster lockdown further enhance the security posture. A cluster-wide setting can forward all logs to a central host as well. All CVEs related to the product are tracked, with an internal turnaround time of 72 hours for critical patches! There is some added time in getting a release cut, but it is fast, and everything is tested as a whole instead of a one-off change that could have a domino effect.

When evaluating infrastructure and development environments for a security-conscious organization, it's imperative to choose ones built with a security-first approach that continually iterates on patching new threats, thereby reducing the attack surface. Docker is doing some great work on this front.

    Jul
    14

    Nutanix Acropolis File Services – Required 2 Networks

    When configuring Acropolis File Services you may be prompted with the following message:

    “File server creation requires two unique networks to be configured beforehand.”

    The reason is that you need two managed networks for AFS. I've seen this come up a lot lately, so I thought I would explain why. While it may change over time, this is the current design.

    [Diagram: file server VM and CVM networking on a Nutanix node]

    The above diagram shows one file server VM running on a node, but you can put multiple file server VMs on a node for multitenancy.

    The file server VM has two network interfaces. The first interface is a static address used for the local file server VM service that talks to the Minerva CVM service running on the Controller VM. The Minerva CVM service uses this information to manage deployment and failover; it also allows control over one-click upgrades and maintenance. Having local awareness from the CVM enables the file server VM to determine if a storage fault has occurred and, if so, if action should be taken to rectify it. The local address also lets the file server VM claim vDisks for failover and failback. The file server VM service sends a heartbeat to its local Minerva CVM service each second, indicating its state and that it’s alive.
    The second network interface on the file server VM, also referred to as the public interface, is used to service SMB requests from clients. Based on the resource called, the file server VM determines whether to service the request locally or to use DFS to refer the request to the appropriate file server VM that owns the resource. This second network can be dynamically reassigned to other file server VMs for high availability.

    If you need help setting up the two managed networks, there is a KB article on portal.nutanix.com -> KB3406
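
    As a rough sketch of what creating those two managed networks could look like from a CVM on AHV (the network names, VLAN IDs, and address ranges here are invented for illustration, and exact acli syntax can vary by AOS version; KB3406 is the supported procedure):

    acli net.create fs-internal vlan=10 ip_config=10.10.10.1/24          # managed network for the file server VM's internal interface
    acli net.add_dhcp_pool fs-internal start=10.10.10.50 end=10.10.10.100
    acli net.create fs-client vlan=20 ip_config=10.10.20.1/24            # managed network for the client-facing (public) interface
    acli net.add_dhcp_pool fs-client start=10.10.20.50 end=10.10.20.100

    With both managed networks in place, the file server creation wizard should no longer throw the error above.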

    Jul
    13

    Backing up AFS with Commvault

    This is by no means a best practice guide for AFS and Commvault, but I wanted to make sure that Commvault could be used to back up Acropolis File Services (AFS). If you want more details on AFS, I suggest reading this great post on the Nutanix website.

    Once I applied the file server license to CommServe I was off to the races. I had 400 users spread out on 3 file server VMs making up the file server called eucfs.tenanta.com. The file server had two shares but I was focused on backing up the user share.



    I found performance could be increased by adding more readers to the backup job. My media agent was configured with 8 vCPU, and it seemed to be the bottleneck. If I were to give the media agent more CPU, I am sure I would have had an even faster backup time.


    I was able to get almost 600 GB/hour, which I am told is a good number for file backup. It looks like there is lots of room to improve, though. The end goal will be to try to back up a million files and see what happens over the course of time.


    Like all good backup stories, it's all about the restores, and the restore browse drills down really nicely.


    Jul
    12

    Just In Time Desktops (Instant Clones) on Nutanix

    JIT desktops are supported on Nutanix. One current limitation of JIT is that it doesn't support VAAI for NFS hardware clones. The great part for Nutanix customers is that where VAAI clones stop, shadow clones kick into effect! So if you want to keep a lower amount of RAM configured for the View Storage Accelerator, you're perfectly OK in doing that.

    The Nutanix Distributed Filesystem has a feature called ‘Shadow Clones’, which allows for distributed caching of particular vDisks or VM data in a ‘multi-reader’ scenario. A great example is a VDI deployment, where many ‘linked clones’ forward read requests to a central master or ‘Base VM’. In the case of VMware View this is called the replica disk and is read by all linked clones. This will also work in any other multi-reader scenario (e.g. deployment servers, repositories, App Volumes, etc.).
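
    If you want to sanity-check that shadow clones are turned on for your cluster, a quick way from the CVM command line (a sketch; output field names can vary by NOS/AOS version) is:

    ncli cluster get-params                              # review cluster parameters, including the shadow clones status
    ncli cluster edit-params enable-shadow-clones=true   # enable shadow clones if they are currently disabled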

    You can read more about Shadow Clones in this Tech Note -> HERE

    An Introduction to Instant Clones -> HERE

      Jul
      06

      Chad Sakac talks about EMC selling Nutanix with Dell Technologies

      What will happen with Dell XC when EMC and Dell come together? Chad Sakac talks about it at the 18:40 mark from the ThinkAhead IT conference.

      From NextConf 2016
      Nutanix and Dell OEM relationship: Dell's Alan Atkinson spoke to attendees about extending the OEM relationship and continuing to help our joint customers (including Williams) on their journeys to Enterprise Cloud with confidence.

      May
      15

      Commvault Best Practices on Nutanix

      I first remember seeing Commvault in 2007 in the pages of Network World and thought it looked pretty interesting then. At the time I was a CA ARCserve junkie and prayed every day I didn't have to restore anything. Almost 10 years later, tape is still around, virtualization spawned countless backup vendors, and the cloud now makes an easy backup target. Today Commvault is still relevant and plays in all of the aforementioned spaces, and like most tech companies we have our own overlap with them to some degree. For me, Commvault just has so many options that it's almost a problem of what to use where and when.

      The newly released Best Practice Guide with Commvault talks about some of the many options that should be used with Nutanix. The big things that would stand out in my mind, if I were new to Nutanix and then read the guide, would be the use of a proxy on every host and some of the caveats around IntelliSnap.

      Proxy On Every Host

      What weighs more: a pound of feathers or a pound of bricks? The point here is that you need proxy capacity regardless, and the proxies are sized on how much data you will be backing up. So instead of having one giant proxy, you now have smaller proxies that are distributed across the cluster. Smaller proxies can read from the local hot SSD tier and limit network traffic, so they help to limit bottlenecks in your infrastructure.

      IntelliSnap is probably one of the most talked-about Commvault features. IntelliSnap allows you to create a point-in-time, application-consistent snapshot of backup data on the DSF. The backup administrator doesn't need to log on to Prism to provide this functionality. A Nutanix-based snapshot is created on the storage array as soon as the VMware snapshot is completed; the system then immediately removes the VMware snapshot. This approach minimizes the size of the redo log and shortens the reconciliation process to reduce the impact on the virtual machine being backed up and minimize the storage requirement for the temporary file. It also allows near-instantaneous snapshot mounts for data access.

      With IntelliSnap, it's important to realize that it was invented at a time when LUNs ruled the storage world. IntelliSnap in some sense treats the giant Nutanix volumes/containers the hypervisor sees as one giant LUN. Behind the scenes, when IntelliSnap is used it snapshots the whole container regardless of whether the VMs in it are being backed up or not, so you should do a little planning when using IntelliSnap. This is OK since IntelliSnap should be used for highly transactional VMs and not every VM in the data center. I just like to point out that streaming backups with CBT are still a great choice.

      With that being said, you can check out the full guide on the Nutanix website: Commvault Best Practices

      Apr
      17

      Quickly Pin Your Virtual Hard Drive To Flash #vExpert #NTC

      If you need to ensure performance with Flash Mode, here is a quick way to get the job done.

      Find the virtual disk ID
      ncli virtual-disk ls | grep <vm-or-vmdk-name> -B 3 -A 6


      Example
      ncli virtual-disk ls | grep m1_8 -B 3 -A 6

      Virtual Disk Id : 00052faf-34c2-58fc-64dd-0cc47a673b8c::313a49:6000C29b-93c9-bfe1-58d9-e718993e5a06
      Virtual Disk Uuid : 1dc11a7f-63ac-422a-ac27-442d5fcfc91a
      Virtual Disk Path : /hdfs/cdh-m1/cdh-m1_8.vmdk
      Attached VM Name : cdh-m1
      Cluster Uuid : 00052faf-34c2-58fc-64dd-0cc47a673b8c
      Virtual Disk Capacity : 268435456000
      Pinning Enabled : False

      Pin 25 GB of the vDisk to flash
      ncli virtual-disk update-pinning id=00052faf-34c2-58fc-64dd-0cc47a673b8c::313a49:6000C29b-93c9-bfe1-58d9-e718993e5a06 pinned-space=25 tier-name=SSD-SATA

      Pinned Space is in GB.
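
      To confirm the change took effect, you can re-run the same listing and check that Pinning Enabled now reads true and the pinned space matches what you set (a quick sanity check; exact output fields can vary by AOS version):

      ncli virtual-disk ls | grep m1_8 -B 3 -A 6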

      In this case I was pinning the Hadoop NameNode directories to flash because I wanted to include their physical node in the cluster to help with replication traffic.

      Mar
      09

      App Volumes 2.10 Best Practices

      I know App Volumes 3.0 is around the corner but I had to track this information down for 2.10 for a customer.

      App Volume Manager Best Practices
      2 App Volume Managers minimum; 3 for resiliency with 2,000 users
      Load Balancer in Production
      Cluster SQL Server

      AppStack Best Practices
      1 AppStack per 1000 attachments
      Up to 15 AppStack volumes per VM
      2,000 users per Manager
      Timeout is 3 minutes each, per writable volume and then per AppStack