Jun
    28

    HYCU for You: Icing on the cake for AHV


    HYCU is a purpose-built application data protection solution for Nutanix. HYCU is coming out of the gate with support for AHV and some key value propositions in mind:
    a. 100% application focus
    b. Backup to NAS and/or cloud
    c. Built to be hypervisor-agnostic. Today it uses the changed region tracking APIs available from AOS. Over time HYCU will use those same APIs for other hypervisors.
    d. Recover in <2 minutes, deploy in <3 minutes, and learn in <4 minutes.

    HYCU is developed by Comtrade Software, a Boston-based company. They also develop monitoring solutions for Nutanix, like SCOM management packs and Microsoft OMS solutions. Comtrade really became a part of Nutanix during the development phase. The Slack channel between the two companies was great for tracking progress, not to mention the software met its release date ahead of schedule!

    Pick Your Backup Destination?

    HYCU provides classic backup and restore through simple and intuitive workflows. You can pick from a variety of targets to store your data.
    • Backup data within the datacenter and/or to the cloud
    o Nutanix storage
    o Third-party storage – if you got it, use it.
    • Cloud storage – efficient backup to AWS and Azure that does not require a cloud-based VM. In most cases the running VM is more costly than the storage, so this is a great feature.

    Other use cases
    • Application discovery
    o Compliance
    • Enabling self-service for VM & App/DB Administrators
    o Power to protect against impact of patches / upgrades
    o Protects SQL out of the box
    o Rapid, context sensitive restores
    • Restore to alternative location for test / debug / reporting / verification
    • Full automation / orchestration through REST API integration
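
    Since everything can be driven through the REST API, backup jobs can be wired into existing automation. Below is a rough curl sketch of what that might look like; the host name, port, endpoint path, and JSON body are placeholders made up for illustration and are not taken from HYCU's documented API, so check the product's REST reference for the real resource names.

    # Hypothetical sketch only: host, port, endpoint, and payload are illustrative placeholders.
    HYCU_HOST="hycu-controller.example.com"
    curl -k -u admin:password \
      -H "Content-Type: application/json" \
      -X POST "https://${HYCU_HOST}:8443/rest/v1.0/vms/<vm-uuid>/backup" \
      -d '{"policy": "gold"}'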

    I believe what Veeam did for VMware early on can happen again with HYCU for Nutanix. As more and more backup options hit the market for AHV, it will be interesting to follow. If you want to take it for a spin on your Nutanix CE cluster, sign up here: https://www.comtradesoftware.com/free-trial/

    Jun
    28

    Rubrik and AHV: Say No to Proxies

    For the last couple of years I have been a huge fan of backup software that removes the need for proxies. Rubrik provides a proxy-less backup solution by using the Nutanix Data Services Virtual IP address to talk directly to each individual virtual disk that it needs to back up.
    Rubrik and Nutanix have some key advantages with this solution:
    • AOS 5.1+ with the v3 APIs provides changed region tracking, allowing quick and efficient backups with no hypervisor-based snapshots.
    • With AHV and data locality, Rubrik can grab the most recently changed data without flooding the network, which can happen when the backup copy and the VM do not live on the same host. With Nutanix the reads happen locally.
    • Rubrik has access to every virtual disk by making an iSCSI connection, bypassing the need for proxies (see the sketch after this list).
    • AOS can redirect the second RF copy away from a node with its advanced data placement if the backup load becomes too great during a backup window, protecting your mission-critical apps that run 24/7.
    • Did I mention no proxies? 🙂
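
    Rubrik handles the iSCSI connections itself, but to picture what happens under the hood, a manual equivalent from any Linux host with open-iscsi would look something like this sketch. The Data Services IP shown is a made-up example value.

    # Illustrative only: Rubrik does this internally; the IP below is a placeholder.
    DATA_SERVICES_IP="10.0.0.50"
    # Discover the virtual disk targets exposed through the cluster's Data Services IP
    iscsiadm -m discovery -t sendtargets -p ${DATA_SERVICES_IP}:3260
    # Log in to a discovered target to read the vdisk directly, no backup proxy VM required
    iscsiadm -m node -T <target-iqn-from-discovery> -p ${DATA_SERVICES_IP}:3260 --login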

    Stop by the Rubrik booth and catch their session if you're at .Next this week.

    Jun
    20

    Backing Up AFS Home Shares with Commvault

    You cannot back up Acropolis File Services (AFS) home shares with Commvault software until you change a setting on AFS. You need to let Commvault access the home share without the use of reparse points. A home share is the repository for users' personal files, and it distributes the top-level directories across all of the file server VMs for performance and ease of management. The home share contains reparse point attributes in its top-level directories to help with referrals. Since Commvault automatically skips these directories for backup because of the reparse points, we make the change below.

    AFS can disable reparse points for registered clients while leaving them enabled for clients that are not registered. I would list all of your proxies and MediaAgents with this command.

    Run this command on any file server VM. The host list shown is a placeholder; substitute your own Commvault proxy and MediaAgent names.

    $ scli smbcli "backup hosts" "proxy1,mediaagent1"

    May
    01

    Acropolis Container Services on #Nutanix

    This is the first release of a turnkey solution for deploying Docker containers on a Nutanix cluster. Instead of swiping your credit card for AWS EC2, you can deploy your containers through the built-in Self-Service Portal. Now it's not all totally new, because Nutanix previously released a volume plug-in for Docker. What is new is:

    * Acropolis Container Services (ACS) provisions multiple VMs as container machines to run Docker containers on them.
    * Containers are deployed as part of projects. In projects, users can deploy VMs or containers. You can assign quotas to the projects for storage, CPU, and memory.
    * The public Docker registry is used by default, but if you have a separate Docker registry you want to use, you can configure access to that registry as well (see the example after this list).
    * One-click upgrades for the container machines.
    * Basic monitoring: a containers view in the Self-Service Portal lets you see summary information about containers connected to the portal and access detailed information about each container.
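
    From the Docker side, using a separate registry just means authenticating against it and using fully qualified image names; the registry access itself is configured in the Self-Service Portal. A small sketch with standard Docker CLI commands, where registry.example.com and the image name are made-up placeholders.

    # Placeholders: registry.example.com and myteam/webapp are illustrative, not real resources.
    docker login registry.example.com
    # Pull and run an image from the private registry instead of the public Docker Hub
    docker pull registry.example.com/myteam/webapp:1.0
    docker run -d --name webapp registry.example.com/myteam/webapp:1.0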

      Apr
      20

      Moby Project Summit Notes

      The Moby Project was born out of the containerd / Docker Internals Summit

      For components to be successful they need to be successful everywhere, which led to SwarmKit being mentioned as not successful because no other ecosystem was using it. There seems to be a strong commitment to make everything into a component out in the open.

      Docker wants to be seen as an open-source leader through doing the hard work to support components.

      All open-source development will be under the Moby project.

      Upstream = components
      Moby = Staging area for products to move on, like containerd is in the CNCF.
      – Heart of open-source activities, a place to integrate components
      – Docker remains docker
      – Docker is built with Moby
      – You use Moby to build things like Docker
      – Solomon mentions "1000s of smart people could disagree on what to do"; Docker represents its opinion. It's a lot easier to agree on low-level functions because there are few ways to do them.
      – Moby will end up as Go libraries in Docker, but that will go away.

      Moby is connected to Docker but it's not Docker. The name was inspired by the Fedora project.

      Moby is a trade-off to get it out in the open early versus completeness.

      GitHub should be used as a support forum.

      InfraKit is a toolkit for creating and managing declarative, self-healing infrastructure. It breaks infrastructure automation down into simple, pluggable components. These components work together to actively ensure the infrastructure state matches the user's specifications. Although InfraKit emphasizes primitives for building self-healing infrastructure, it can also be used passively like conventional tools.

      LinuxKit, a toolkit for building custom minimal, immutable Linux distributions.

      – Secure defaults without compromising usability
      – Everything is replaceable and customisable
      – Immutable infrastructure applied to building Linux distributions
      – Completely stateless, but persistent storage can be attached
      – Easy tooling, with easy iteration
      – Built with containers, for running containers
      – Designed for building and running clustered applications, including but not limited to container orchestration such as Docker or Kubernetes
      – Designed from the experience of building Docker Editions, but redesigned as a general-purpose toolkit
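
      As a rough idea of the workflow, a LinuxKit image is described in a YAML file and built with the linuxkit CLI. The file name below is an assumption and flags have changed across versions, so treat this as a sketch rather than exact syntax.

      # Sketch only: mynode.yml is a placeholder YAML definition listing the kernel, init, and service containers.
      linuxkit build mynode.yml    # produce bootable artifacts from the YAML definition
      linuxkit run mynode          # boot the resulting image locally, e.g. under qemu or hyperkit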

      No master plans to move away from Go.

      Breaking out the monolithic engine API will most likely be done with gRPC. gRPC is a modern open-source, high-performance RPC framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking, and authentication. It is also applicable in the last mile of distributed computing to connect devices, mobile applications, and browsers to backend services.

      SwarmKit Update
      SwarmKit is a toolkit for orchestrating distributed systems at any scale. It includes primitives for node discovery, raft-based consensus, task scheduling, and more.

      New Features

      – Topology-Aware Scheduling
      – Secrets
      – Service Rollbacks
      – Service Logs
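
      These features surface in the Docker CLI when the engine runs in swarm mode. A minimal sketch, assuming a service named web already exists; the service name and image tag are placeholders.

      docker service logs web                        # stream logs from every task in the service
      docker service update --image myapp:2.0 web    # roll out a new version
      docker service update --rollback web           # roll back to the previously deployed spec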
      Improvements
      – HA scheduling
      – Encrypted Raft Store
      – Health-Aware Orchestration
      – Synchronous CLI
      What is Next?
      – Direct integration of containerd into SwarmKit, bypassing the need for Docker Engine
      – Config management to attach configuration to services
      – Swarm events to watch for state changes, plus a gRPC Watch API
      – A generic runtime to support new runtimes without changing SwarmKit
      – Instrumentation

      LibNetwork Update
      – Quality: more visibility, monitoring, and troubleshooting
      – Local-scoped network plugins in Swarm-mode
      – Integration with containerd

      Feb
      17

      IP Fail-Over with AFS

      A short video showing the client IP address moving around the cluster to quickly restore connectivity for your users running on Acropolis File Services.

      Feb
      16

      Nutanix AFS DR Failing over from vSphere to AHV (video 3:30)

      A quick video showing the fail-over for Acropolis File Services. The deployment sets up a lot of the needed pieces, but you will still have to set a schedule and map the new container (vStore) that is being used by AFS to the remote site.

      Remember, you want the number of FSVMs making up the file server to be the same as or less than the number of nodes at the remote site.

      Feb
      13

      Docker Datacenter: Usability For Better Security.

      With the new release of Docker Datacenter 2.1 it's clear that Docker is very serious about the enterprise and providing tooling that is very easy to use. Docker has made the leap to supporting enterprise applications with its embedded security and ease of use. DDC 2.1 and Docker Engine CS 1.13 give the additional control needed for operations and development teams to control their own experience.

      Docker Datacenter continues to build on containers as a service. The 1.12 release of DDC enabled agility and portability for continuous integration and started on the journey of protecting the development supply chain throughout the whole lifecycle. The new release of DDC focuses on security, specifically secrets management.
      The previous version of DDC already had a wealth of security features:
      • LDAP/AD integration
      • Role based access control for teams
      • SSO and push/pull images with Docker Trusted Registry
      • Image signing – prevent running a container unless the image is signed by a member of a designated team
      • Out of the box TLS with easy setup, including cert rotation.

      With DDC 2.1 the march on security succeeds by giving both operations and developers a usable system without having to lean on the security team for support and help. The native integration with the management plane allows for end-to-end container lifecycle management. You also inherit a model that is infrastructure-independent: no matter what infrastructure you're running on, it will work. It can be made dynamic and ephemeral like the containers it's managing. This is why I feel PaaS is dead. With so much choice and security you don't have to limit where you deploy to, a design decision very similar to Nutanix enabling choice. Choice gives you access to more developers and the freedom to color outside the lines of the guardrails that a PaaS solution may impose.

      Docker Datacenter Secrets Architecture

      1) Everything in the store is encrypted, notably including all of the data that is stored in the orchestration layer. With least privilege, secrets are distributed only to the nodes running containers that need them. Since the management layer is scalable, you get that for your key management as well. Because the management layer is so easy to set up, you don't have developers embedding secrets in GitHub as a quick workaround.
      2) Containers and the filesystem make secrets available only to the designated app. Docker exposes secrets to the application via a filesystem that is stored in memory. The same rotation of certificates for the management layer also happens with the certificates for the application. In the diagram above, the red service's secrets are only available to the red service, and the blue service is isolated by itself even though it's running on the same node as the red service/application.
      3) If you decide that you want to integrate with a third-party application like Twitter, that can be easily done. Your Twitter credentials can be stored in the raft cluster, which is your manager nodes. When you create the Twitter app you give it access to the credentials, and you can even do a "service update" if you need to swap them out, without the need to touch every node in your environment.
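
      The workflow above maps to a handful of standard swarm-mode CLI commands. A minimal sketch; the secret name, service name, and image are made-up placeholders.

      # Placeholders: twitter_token, mybot, and the image name are illustrative.
      echo "s3cr3t-api-token" | docker secret create twitter_token -    # stored encrypted in the managers' raft store
      # Only tasks of this service receive the secret, exposed in-memory at /run/secrets/twitter_token
      docker service create --name mybot --secret twitter_token myorg/twitter-bot:1.0
      # Rotate the credential without touching individual nodes
      echo "new-api-token" | docker secret create twitter_token_v2 -
      docker service update --secret-rm twitter_token --secret-add twitter_token_v2 mybot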

      With a simple interface for both developers and IT operations, both have a pain-free way to do their jobs and provide a secure environment. By not creating roadblocks and slowing down development or operations, teams will get automatic buy-in.

      Feb
      07

      Nutanix AFS – Maximums

      Nutanix AFS Maximums – Tested limits (ver 2.0.2)

      Number of connections per FSVM:
      250 for 12 GB of memory
      500 for 16 GB of memory
      1000 for 24 GB of memory
      1500 for 32 GB of memory
      2000 for 40 GB of memory
      2500 for 60 GB of memory
      4000 for 96 GB of memory
      Number of FSVMs: 16, or equal to the number of CVMs (choose the lower number)
      Max RAM per FSVM: 96 GB (tested)
      Max vCPUs per FSVM: 12
      Data size for home share: 200 TB per FSVM
      Data size for general purpose share: 40 TB
      Share name: 80 characters
      File server name: 15 characters
      Share description: 80 characters
      Windows Previous Versions: 24 (1 per hour), adjustable with support
      Throttle bandwidth limit: 2048 MBps
      Data protection bandwidth limit: 2048 MBps
      Max recovery time objective for Async DR: 60 minutes


      Jan
      24

      App Volumes: Reprovisioning fails with AppStacks set to computer-based assignments

      Symptoms
      Linked-clone virtual machine provisioning tasks fail.
      Recompose fails due to customization failing to join the desktops to the domain.
      Cause
      This issue occurs due to AppStacks being attached during the domain join process.

      On reboot after the domain join, the C:\svroot cache is cleared, losing changes to the VM.

      Resolution
      To resolve this issue, disable the App Volumes Service on the parent virtual machine.
      Open a command prompt as administrator and run the following commands:
      sc config "svservice" start= disabled
      net stop "App Volumes Service"
      ipconfig /release
      Shutdown the virtual machine and take a snapshot.

      Create a script or batch file as below to set the service to automatic and start the service.
      sc config "svservice" start= auto
      net start "App Volumes Service"

      Copy the script to a directory on the parent virtual machine that you can reference later.
      In View Administration portal you will have to reference your post-synchronization script:

      Open up View Administration Portal
      Go to Catalog – Desktop Pools – Select your pool
      Click Edit
      Select Guest Customization Tab
      Enter the file path for the script in the post-synchronization script name field:

      C:\scripts\script.bat

      Recompose the pool
      VMware KB 2147910