Jun
    24

    Podcast Lollapalooza – EUC Podcast and Frontline Chatter

    Last week I was able to sneak onto two podcasts: the End User Computing Podcast and Frontline Chatter. I have to report with deep sadness that the 2016 .NEXT conference is NOT in Sydney; my love of rugby must have gotten the better of me. The 2016 .NEXT conference is in Las Vegas.

    The EUC Podcast was mostly around Synergy and cloud management with a sprinkle of layering, and Frontline Chatter was a great review of the announcements from .NEXT, including what Acropolis is as a whole.

    Listen to both today and let me know what you think.

    http://eucpodcaFrontLine-Chatter-Logo

    EUC_Podcast_iTunes_Large

    Jun
    23

    Journey to a Hybrid Cloud: Up and Down and All Around

    Coming off the heels of .NEXT and the announcement of the Application Mobility Fabric (AMF), I thought I should really start to take a look at vCloud Air since it wasn’t a part of our announcement. Part of AMF is moving workloads to cloud providers, and it will abstract a lot of the mess that is involved in moving between different virtual machine formats. Going from vSphere to vCloud Air, VMware has done a pretty good job at extending out the management. In reality, Nutanix fully supports going from on-prem vSphere to vCloud Air.

    I wanted to move a template from my vSphere management cluster to vCloud Air and see how the life cycle management would pan out. So I set off and installed the vCloud Connector Server & Node appliances on my private cluster. That was pretty easy as it was just deploying some OVAs. I even just let the appliances run with DHCP and everything was fine.

    One stumbling block I had was setting up the connection to vCloud Air from the vCloud Connector Server. It took me a bit to find my organization ID, as I thought I only needed to add in the name of my virtual private data center.

    The name of the virtual private cloud was my red herring.


    Once I figured out that I actually had to go into the “Manage with vCloud Director” portion, I found the proper ID I should have been entering.


    Here I was able to find the organization name for the vCloud Connector Server configuration.

    Maintaining different cloud environments may take some work without proper tooling for dev/test.


    So I moved my template over, deployed a VM, and then went looking at how I could reconfigure the workload. It was pretty easy and I didn’t have to re-deploy from my private environment. I then started to look at the AWS process, and it wasn’t as smooth for Windows-based systems. Installing the AWS CLI and Python to convert images so you could create an AMI (Amazon Machine Image) wasn’t great, but doable. The exporting was also CLI based. I guess you have to have a love for the CLI with AWS, which is probably fine for people really into dev/test, but something to consider. Depending on how you operate, you’ll possibly have to recreate your AMIs if you need to make changes. It might end up being a game of snakes and ladders if you’re comparing the two public clouds coming from a vSphere environment. There is a pile of information about the vCloud Air service here: pricing, user reviews and use cases.
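
    To give a flavor of the AWS side of this, here is a minimal sketch of the image-import step using Python and boto3 instead of the raw CLI. It assumes the VMDK exported from vSphere has already been uploaded to S3; the region, bucket name and key are placeholders, not values from my environment.

        import boto3

        # Assumes the exported VMDK is already sitting in S3.
        # Region, bucket and key below are placeholders.
        ec2 = boto3.client("ec2", region_name="us-east-1")

        response = ec2.import_image(
            Description="Windows template imported from vSphere",
            DiskContainers=[{
                "Description": "exported vSphere disk",
                "Format": "vmdk",
                "UserBucket": {
                    "S3Bucket": "my-import-bucket",      # placeholder bucket
                    "S3Key": "templates/win2012.vmdk",   # placeholder key
                },
            }],
        )

        # The import runs asynchronously; poll the task until the AMI shows up.
        task_id = response["ImportTaskId"]
        print(ec2.describe_import_image_tasks(ImportTaskIds=[task_id]))

    Compare that to vCloud Air, where the template move stays inside the vSphere tooling you already know.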


    Free $300 vCloud Air Credits

    VMware’s vCloud Air allows you to work in a hybrid sandbox; no need to bring down the latest production version of applications, configurations, or workloads. Simply operate in the vCloud Air sandbox and use the latest updates. Nutanix fully supports vCloud Air, and AMF will be used to flatten some of the nuances of moving between clouds over time. I would encourage you to take the free trial available on vCloud Air and see if it fits your workflow.

    Jun
    22

    Nutanix Volume API and Containers #dockercon

    When I first heard that Nutanix was creating a Volume API to give access to virtual machines via iSCSI, I thought it was just arming our customers with another option for running MS Exchange. I was sitting in Acropolis training last week and it dawned on me how great this will be for containers and OpenStack. So what is the Nutanix Volume API all about?

    The Volumes API exposes back-end NDFS storage to guest operating systems, physical hosts, and containers through iSCSI. iSCSI support allows any operating system to use the storage capabilities of NDFS. In this deployment scenario, the operating system works directly with Nutanix storage, bypassing the hypervisor.

    Volumes API consists of the following entities:

    * Volume group – iSCSI target and group of disk devices.
    * Disks – Storage devices in the volume group (displayed as LUNs for the iSCSI target).
    * Attachment – Allowing a specified initiator IQN access to the volume group.
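
    To make the relationships concrete, here is a minimal Python sketch that simply models the three entities above and how an initiator gets access. The class and field names are illustrative only; they are not the actual Nutanix API objects.

        from dataclasses import dataclass, field
        from typing import List

        # Illustrative model only; names do not come from the Nutanix API.

        @dataclass
        class Disk:
            size_gb: int  # exposed as a LUN behind the volume group's iSCSI target

        @dataclass
        class VolumeGroup:
            name: str                                                # becomes the iSCSI target
            disks: List[Disk] = field(default_factory=list)          # LUN 0, LUN 1, ...
            attached_iqns: List[str] = field(default_factory=list)   # initiators allowed to log in

            def attach(self, initiator_iqn: str) -> None:
                """An attachment simply whitelists an initiator IQN for this volume group."""
                if initiator_iqn not in self.attached_iqns:
                    self.attached_iqns.append(initiator_iqn)

        # Example: a VM or container host mounts two persistent disks directly over iSCSI.
        vg = VolumeGroup(name="docker-persistent-data")
        vg.disks.append(Disk(size_gb=100))
        vg.disks.append(Disk(size_gb=200))
        vg.attach("iqn.1994-05.com.redhat:guest01")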

    The following image shows an example of a VM running on Nutanix with its operating system hosted on the Nutanix storage, mounting the volumes directly.


    Now your OpenStack and container instances can be blown away and your data will persist! I think this is a big plus for Nutanix and for containers running on the Acropolis hypervisor. Future integration with Docker plugins should now be easier.

    Jun
    19

    Tech Preview of Nutanix File Services

    A quick run-through of what is cooking for Nutanix File Services. On Acropolis you will be able to hide the file services VMs in Prism.

    Jun
    16

    Nutanix PowerShell CMDLETS Reference Poster

    Head on over to nutanix.com to get the new Nutanix PowerShell Reference Poster.


    Jun
    15

    Nutanix Tech Preview: Prism Management for Containers

    Below is a video of Prism managing containers, whether virtual or based in AWS. Watch as containers are deployed on both Acropolis and AWS.

    Jun
    15

    Consistent Performance and Availability with Node Based Architecture


    Everybody wants to talk about performance because it’s the sexy topic, but it’s not the only decision point. You can have all the performance in the world, but if you’re not highly available it doesn’t really matter. If performance was the only factor we would all just run PCI-based flash in a server and call it a day. With traditional storage, active / passive architectures are one decision to ensure performance. Yes, they waste available performance most of the time, but that doesn’t make it wrong; that was an architectural choice. In a node-based architecture, consistent performance and availability have to be put above drag-racing numbers. In a node-based architecture the likelihood of a node going down is simply mathematically higher. This is why Nutanix designed around distributing everything up front. It’s the ability to fail hard, fail fast, and live to see the next failure. (Side note: this is also why people talking about 64-node all-flash clusters with a fault tolerance of 1 make me chuckle.)

    Some design decision points around Nutanix:

    * Nutanix decided to always write the minimum required two copies of data. Lots of other node-based architectures will only write one copy if the destination node for the remote copy is down or being upgraded. The trade-off with Nutanix always auto-leveling and spreading the load probably costs more in terms of CPU, but the performance is consistent and available. Big bulk sync operations don’t take place. You don’t have to manually migrate data around the countryside.

    * Hot data in the flash tier always has two copies, so in case a node goes down you don’t have to warm up the flash tier. The trade-off is with space, but inline compression can help in this area. Consistency was chosen over performance.

    * Secondary copies are not sent to static nodes. Spreading the load gives better consistent performance and better rebuild times. Nutanix also chose to spread data at the vdisk level versus the VM level.

    * Data Locality – The local copy of data helps with network congestion and fast read performance.

    * Automatic DRS is fully supported. Maintenance operations are going to happen. You don’t have to figure out the home node of the VM.

    * Nutanix rebuilds active data to the SSD tier and cold data to the HDD tier. Active data is quickly rebuilt, and cold data does not impact performance during a rebuild.

    * Any VM can get access to all of the SSD resources in the cluster if the local SSD tier is full. We have some CPU cost in managing this with Apache Cassandra, but it’s highly optimized. The benefit is that the working set can be larger than the flash\controllers of two nodes. Performance is not tied to dedupe or compression having to yield large savings.

    * Self-healing – As long as you have the capacity and enough nodes, you can continually heal yourself. For example, with an 8-node cluster you can lose a node, heal, lose a node, heal, lose a node, heal, until you get down to three nodes (a rough capacity check for this is sketched after this list). This is one reason why the 6035C storage KVM node, with the ability to be attached to ESXi and Hyper-V clusters, is just awesome.

    * Access to all of the local resources. We allow multiple SSDs and HDDs to live in one storage pool. If data is going from SSD to HDD you have access to all 20 HDDs, even though the host may have multiple SSDs. Also, the down-tiering is not affected by a RAID write penalty.

    * HCL is worry free. With pure software you have to worry about the hypervisor and then the manufacturer to see what both are going to support. Both sides can change, and then you can be left scratching your head on what to do next instead of fixing whatever the real problem is. So while you might not see NVMe supported day 1, you will have a highly available system with combat-tested components.

    * Checksums – Every time, baby, no exceptions. Consistency is always ensured.

    * Scale – Nutanix always operates the same way, so you know what you’re getting, which leads to consistency as you scale. We don’t flip any bits after X nodes that change resource consumption, which may affect performance.
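
    On the self-healing point above, here is a back-of-the-envelope sketch of the capacity check. It assumes data (including its replicas) is spread evenly across nodes and ignores CVM and metadata overheads; the node count, per-node capacity and usage numbers are hypothetical.

        def survivable_failures(nodes, node_capacity_tb, used_tb, min_nodes=3):
            """Roughly estimate how many sequential node losses a cluster can
            self-heal from, assuming evenly spread data and no other overheads."""
            failures = 0
            while nodes > min_nodes:
                nodes -= 1
                # After re-protecting, all data must still fit on the surviving nodes.
                if used_tb > nodes * node_capacity_tb:
                    break
                failures += 1
            return failures

        # Hypothetical 8-node cluster, 20 TB usable per node, 60 TB consumed including replicas:
        # it can heal through 5 sequential node losses, landing at the 3-node minimum.
        print(survivable_failures(nodes=8, node_capacity_tb=20, used_tb=60))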

    There are always trade-offs to consider with your workloads. Management of secondary copies in a node-based architecture is extremely important and, in my opinion, should take precedence over performance.

    Jun
    12

    Nutanix Acropolis – Hypervisor STIG – Don’t make security a point in time.

    See Nutanix’s self-healing hypervisor STIG using SaltStack Automation behind the scenes.

    Jun
    11

    Nutanix Acropolis – 100 Servers cloned in seconds

    Below is a demo of cloning 100 servers using the REST API on the Acropolis hypervisor. You also get a glimpse of how easy it is to use the Acropolis Command Line as all 100 servers are powered on at the same time.
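
    As a rough idea of what the demo is doing, here is a minimal Python sketch that loops a clone call against a Prism-style REST endpoint. The URL path, payload fields, credentials and source VM UUID are assumptions for illustration, not taken from the demo or the GitHub examples.

        import requests

        # All of the values below are placeholders / assumptions for illustration.
        PRISM = "https://prism.example.local:9440"
        AUTH = ("admin", "password")
        SOURCE_VM_UUID = "00000000-0000-0000-0000-000000000000"  # template VM to clone

        session = requests.Session()
        session.auth = AUTH
        session.verify = False  # lab cluster with a self-signed certificate

        # Fire off 100 clone requests; each returns a task the cluster processes asynchronously.
        for i in range(1, 101):
            body = {"spec_list": [{"name": "web-server-{:03d}".format(i)}]}
            resp = session.post(
                "{}/PrismGateway/services/rest/v2.0/vms/{}/clone".format(PRISM, SOURCE_VM_UUID),
                json=body,
            )
            resp.raise_for_status()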

    Check out GitHub for more coding examples.

    Jun
    11

    Nutanix RF3 Cluster losing 2 out of 5 nodes

    Nutanix takes a licking but keeps on ticking. Below is a video of the impact of losing 2 out of 5 nodes on a Nutanix cluster with some load running on the box.