Nutanix Acropolis Files Services (SMB/NFS) adds Health Page for Advanced troubleshooting in 3.0.1

With the release of 3.0.1, Nutanix AFS has added a health page for advanced troubleshooting. While it's probably not the sexiest thing in the world, it puts AFS on par with similar pages that we have today, like the Stargate page on port 2009 and the Curator page on port 2010. Since this is more for support, I think it's okay that it's not the same as Prism. It's always a fine line in the UI on what to add and what to leave out. You can find more information about the existing troubleshooting pages in the Nutanix Bible.

The AFS health page is an FSVM-level webpage showing statistics and health-related information about the major AFS components. It runs on the default AFS port, 7502, and can be accessed with the links CLI browser or a web browser at http://<FSVM-IP>:7502. The internal IP is the best choice to use, and I would firewall off the external port.
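If you want to sanity-check the page from an FSVM shell first, something like this works; the IP below is a placeholder for your FSVM's internal address:

```shell
# Fetch the first few lines of the AFS health page over the internal network.
# 10.0.0.10 is a placeholder; substitute your FSVM's internal IP.
curl -s http://10.0.0.10:7502 | head -n 20

# Or browse it interactively with the links text-mode browser
links http://10.0.0.10:7502
```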

Below are some examples of what you will find when you go to the AFS Health Page.


Free CCNA Lab Guide

Free CCNA Lab guide from community member Neil Anderson. You can run all the labs completely for free on your laptop; no additional equipment is necessary. Full instructions and startup files are provided so you can immediately get into the hands-on practice you need to master Cisco networking and pass the exam.

Available at: https://www.flackbox.com/cisco-ccna-lab-guide

350 Pages of Complete Lab Exercises with Full Solutions

The IOS Operating System
The Life of a Packet
The Cisco Troubleshooting Methodology
Cisco Router and Switch Basics
Cisco Device Management
Routing Fundamentals
Dynamic Routing Protocols
Connectivity Troubleshooting
RIP Routing Information Protocol
EIGRP Enhanced Interior Gateway Routing Protocol
OSPF Open Shortest Path First
VLANs and Inter-VLAN Routing
DHCP Dynamic Host Configuration Protocol
HSRP Hot Standby Router Protocol
STP Spanning Tree Protocol
Port Security
ACL Access Control Lists
NAT Network Address Translation
IPv6 Addressing
IPv6 Routing
WAN Wide Area Networks
BGP Border Gateway Protocol
Cisco Device Security
Network Device Management


Nutanix AFS and SMB 3.0

After you upgrade to the AFS (Acropolis File Services) 2.2 bits, you will have to manually change the max allowed protocol. This will be fixed in AFS 2.2.1, but here are the steps to get you going.

The following commands set the max protocol to the proper version:

scli smbcli get --section global --param "server max protocol"
server max protocol = SMB2

scli smbcli set --section global --param "server max protocol" --value SMB3_00
smb.conf update is successful

scli smbcli get --section global --param "server max protocol"
server max protocol = SMB3_00
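The sequence above can also be wrapped in a small script that only applies the change when needed; this is a sketch, assuming it runs from an FSVM shell where the scli utility is on the PATH:

```shell
# Check the current SMB max protocol and bump it to SMB3_00 if needed.
current=$(scli smbcli get --section global --param "server max protocol")
echo "Before: ${current}"

# Only write the setting if it is not already at SMB3_00
if ! echo "${current}" | grep -q "SMB3_00"; then
  scli smbcli set --section global --param "server max protocol" --value SMB3_00
fi

# Confirm the update took effect
scli smbcli get --section global --param "server max protocol"
```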

Also note that if you want to run FSLogix and need an AFS share to have a block size of 512 bytes, this can be done. The default is 1024.


Top 10 Reasons Why Nutanix Is The Best Platform For Horizon View

1) Turn key-solution for desktops and RDSH with vGPU.
2) All roads lead to a non-persistent desktop, which means App Volumes and UEM or a 3rd party (Liquidware). The best home for user and profile data is Acropolis File Services (AFS). VDI is scale-out, so your NAS should be too.
3) Easy restore of user files and settings on AFS.
4) Easy DR for AFS.
5) The AFS home share can spread over multiple VMs, allowing for only 1 Group Policy to manage.
6) Data locality for boot storms if you’re not using instant-clones and protection from noisy neighbour under load.
7) 2nd Copy of all data is placed based on capacity and performance to help with #5.
8) Shadow Clones and inline dedupe for App Volumes for in-memory applications.
9) One Click Upgrades to get features, maintenance and security fixes without having to upgrade ESXi.
10) Over 400 health checks to make sure your desktops run smoothly.


HYCU for You: Icing on the cake for AHV


HYCU is a purpose-built application data protection solution for Nutanix. HYCU is coming out of the gate with support for AHV and some key value propositions in mind:
a. 100% Application-focus
b. Backup to NAS &/or Cloud
c. Built to be hypervisor-agnostic. Today it uses changed region tracking API’s available from AOS. Over time HYCU will use those same API’s for other hypervisors.
d. Recover in <2 minutes, deploy in <3 minutes, and learn in <4 minutes.

HYCU is developed by Comtrade Software, a Boston-based company, which also develops monitoring solutions for Nutanix such as SCOM management packs and Microsoft OMS solutions. Comtrade really became a part of Nutanix during the development phase. The Slack channel between the two companies was great for tracking progress, not to mention the software met its release date ahead of schedule!

Pick Your Backup Destination

HYCU provides classic backup and restore through simple and intuitive workflows. You can pick from a variety of targets to store your data.
• Backup data within datacenter and/or to the cloud
o Nutanix storage
o Third party storage – If you got it, use it.
• Cloud storage – Efficient backup to AWS and Azure that does not require a cloud-based VM. In most cases the running VM is more costly than the storage, so this is a great feature.

Other use cases
• Application discovery
o Compliance
• Enabling self-service for VM & App/DB Administrators
o Power to protect against impact of patches / upgrades
o Protects SQL out of the box
o Rapid, context sensitive restores
• Restore to alternative location for test / debug / reporting / verification
• Full automation / orchestration through REST API integration

I believe what Veeam did for VMware early on can happen again with HYCU for Nutanix. As more and more backup options hit the market for AHV, it will be interesting to follow. If you want to take it for a spin on your Nutanix CE cluster, sign up here: https://www.comtradesoftware.com/free-trial/


Acropolis Container Services on #Nutanix

This is the first release of a turnkey solution for deploying Docker containers in a Nutanix cluster. Instead of swiping your credit card for AWS EC2, you can deploy your containers through the built-in Self-Service Portal. It's not all totally new, because Nutanix previously released a volume plug-in for Docker. What is new:

* Acropolis Container Services (ACS) provisions multiple VMs as container machines to run Docker containers on them.
* Containers are deployed as part of projects. In projects, users can deploy VMs or containers, and you can assign quotas to projects for storage, CPU, and memory.
* The public Docker registry is used by default, but if you have a separate Docker registry you want to use, you can configure access to that registry as well.
* One-Click upgrades for the Container machines.
* Basic monitoring: a containers view in the Self-Service Portal lets you view summary information about containers connected to the portal and drill into detailed information about each container.
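As a rough idea of how the earlier Docker volume plug-in fits into this picture, the flow looks something like the following sketch; the driver name and volume name here are illustrative assumptions, not the exact plugin identifiers:

```shell
# Create a persistent volume backed by a Nutanix volume driver.
# "nutanix" and "pgdata" are placeholder names for illustration.
docker volume create --driver nutanix --name pgdata

# Run a database container with its data directory on that volume,
# so the data survives the container being rescheduled or rebuilt.
docker run -d --name pgsql \
  -v pgdata:/var/lib/postgresql/data \
  postgres:9.6
```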


    Moby Project Summit Notes

    The Moby Project was born out of the containerd / Docker Internals Summit

    For components to be successful they need to be successful everywhere, which led to SwarmKit being mentioned as not successful, because no other ecosystem was using it. There seems to be a strong commitment to making everything into a component out in the open.

    Docker wants to be seen as an open-source leader through doing the hard work to support components.

    All open-source development will be under the Moby project.

    Upstream = components
    Moby = Staging area for products to move on, like containerd in the CNCF project.
    – Heart of open-source activities, a place to integrate components
    – Docker remains docker
    – Docker is built with Moby
    – You use Moby to build things like Docker
    – Solomon mentions "1000s of smart people could disagree on what to do"; Docker represents its opinion. It's a lot easier to agree on low-level functions because there are few ways to do them.
    – Moby will end up as Go libraries in Docker, but that will go away.

    Moby is connected to Docker but it's not Docker. The name was inspired by the Fedora project.

    Moby is a trade-off: getting it out in the open early versus completeness.

    GitHub should be used as a support forum.

    InfraKit is a toolkit for creating and managing declarative, self-healing infrastructure. It breaks infrastructure automation down into simple, pluggable components. These components work together to actively ensure the infrastructure state matches the user's specifications. Although InfraKit emphasizes primitives for building self-healing infrastructure, it can also be used passively like conventional tools.

    LinuxKit, a toolkit for building custom minimal, immutable Linux distributions.

    – Secure defaults without compromising usability
    – Everything is replaceable and customisable
    – Immutable infrastructure applied to building Linux distributions
    – Completely stateless, but persistent storage can be attached
    – Easy tooling, with easy iteration
    – Built with containers, for running containers
    – Designed for building and running clustered applications, including but not limited to container orchestration such as Docker or Kubernetes
    – Designed from the experience of building Docker Editions, but redesigned as a general-purpose toolkit

    No plans to move away from Go.

    Breaking out the monolithic engine API will most likely be done with gRPC. gRPC is a modern open-source, high-performance RPC framework that can run in any environment. It can efficiently connect services in and across data centers, with pluggable support for load balancing, tracing, health checking, and authentication. It is also applicable in the last mile of distributed computing to connect devices, mobile applications, and browsers to backend services.

    SwarmKit Update
    SwarmKit for orchestrating distributed systems at any scale. It includes primitives for node discovery, raft-based consensus, task scheduling and more.

    New Features

    – Topology-Aware Scheduling
    – Secrets
    – Service Rollbacks
    – Service Logs
    – HA scheduling
    – Encrypted Raft Store
    – Health-Aware Orchestration
    – Synchronous CLI
    What is Next?
    – Direct integration of containerd into SwarmKit bypasses the need for the Docker Engine
    – Config Management to attach configuration to services
    – Swarm Events to watch for state changes and gRPC Watch API
    – Create a generic runtime to support new runtimes without changing SwarmKit
    – Instrumentation
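Several of the features above map directly onto the Docker CLI; a minimal sketch, assuming a Docker 1.13+ engine already running in swarm mode (service and image names are illustrative):

```shell
# Create a replicated service managed by SwarmKit's orchestrator
docker service create --name web --replicas 3 nginx:1.12

# Roll out an image update across the replicas
docker service update --image nginx:1.13 web

# Service rollback: return the service to its previous spec
docker service update --rollback web

# Service logs: stream logs aggregated across all tasks of the service
docker service logs web
```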

    LibNetwork Update
    – Quality: more visibility, monitoring, and troubleshooting
    – Local-scoped network plugins in Swarm-mode
    – Integration with containerd


    Docker Datacenter: Usability For Better Security.

    With the new release of Docker Datacenter 2.1 it's clear that Docker is very serious about the enterprise and about providing tooling that is easy to use. Docker has made the leap to supporting enterprise applications with its embedded security and ease of use. DDC 2.1 and Docker Engine CS 1.13 give the additional control needed for operations and development teams to control their own experience.

    Docker Datacenter continues to build on Containers as a Service. The 1.12 release of DDC enabled agility and portability for continuous integration and started on the journey of protecting the development supply chain throughout the whole lifecycle. The new release of DDC focuses on security, specifically secrets management.
    The previous version of DDC already had a wealth of security features:
    • LDAP/AD integration
    • Role based access control for teams
    • SSO and push/pull images with Docker Trusted Registry
    • Image signing – prevent running a container unless the image is signed by a member of a designated team

    With DDC 2.1 the march on security is being made successful by allowing both operations and developers to have a usable system without having to lean on the security team for support and help. The native integration with the management plane allows for end-to-end container lifecycle management. You also inherit a model that is infrastructure-independent: no matter what you're running on, it will work. It can be made as dynamic and ephemeral as the containers it's managing. This is why I feel PaaS is dead. With so much choice and security you don't have to limit where you deploy, a design decision very similar to Nutanix enabling choice. Choice gives you access to more developers and the freedom to color outside the lines of the guardrails that a PaaS solution may impose.

    Docker Datacenter Secrets Architecture

    1) Everything in the store is encrypted, notably including all of the data stored in the orchestration layer. With least privilege, secrets are only distributed to the nodes running containers that need them. Since the management layer is scalable, you get that for your key management as well. And because the management layer is so easy to set up, you don't have developers embedding secrets in GitHub as a quick workaround.
    2) Containers and the filesystem make secrets available only to the designated app. Docker exposes secrets to the application via a filesystem that is stored in memory. The same certificate rotation that happens for the management layer also happens with the certificates for the application. In the diagram above, the red service only talks to the red service, and the blue service is isolated by itself even though it's running on the same node as the red service/application.
    3) If you decide that you want to integrate with a third-party application like Twitter, it can be easily done. Your Twitter credentials can be stored in the Raft cluster, which is your manager nodes. When you go to create the Twitter app you give it access to the credentials, and you can even do a "service update" if you need to swap them out, without the need to touch every node in your environment.
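The Twitter-credentials flow above can be sketched with the standard Docker 1.13+ secrets commands; the secret and service names here are illustrative:

```shell
# Store a credential in the swarm's encrypted Raft store
echo "s3cr3t-api-key" | docker secret create twitter_creds_v1 -

# Grant it to a service; inside the container it appears as an
# in-memory file at /run/secrets/twitter_creds_v1
docker service create --name twitterbot \
  --secret twitter_creds_v1 \
  myorg/twitterbot:latest

# Rotate the credential without touching individual nodes
echo "new-s3cr3t-api-key" | docker secret create twitter_creds_v2 -
docker service update \
  --secret-rm twitter_creds_v1 \
  --secret-add twitter_creds_v2 \
  twitterbot
```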

    With a simple interface for both developers and IT operations, both have a pain-free way to do their jobs and provide a secure environment. By not creating roadblocks and slowing down development or operations, teams will get automatic buy-in.


    Demo Time – Nutanix CE and VSA’s

    In order to successfully complete your home lab, you're going to need to configure compute (the servers), networking (routers, switches, etc.) and storage. For those solely interested in studying or testing an individual application, operating system, or the network infrastructure, you should be able to do this with no more storage than the local hard drive in your PC.

    For those who are looking to learn how cloud and data center technologies work as a whole however, you’re going to require some form of dedicated storage. A storage simulator or a Virtual Storage Appliance (VSA) or Nutanix CE is likely to be the best option for this task.

    If you’re studying hypervisor technologies you’re going to have to spend on compute hardware as well as any of the network infrastructure devices that are incapable of being virtualized. Unless you have a free flowing money source, you’re most likely going to want to contain the storage costs by using virtualized storage rather than SAN or NAS hardware.

    The Flackbox blog has compiled a lengthy and comprehensive list of all the available simulators and VSAs. All of the software is free but may require a customer or partner account through the vendor to be able to download. The login and system requirements for every option are included in the list as well. Thanks to Neil for putting those together.

    Nutanix CE can be seen as having high requirements for a home lab, but once you factor in that management is included, it's not that bad. You can also use a free instance with Ravello.

    If you don't meet the requirements, you can always use OpenFiler or StarWind if you have gear at home.

    For those looking to mimic their organization’s production environment as closely as possible, choose the VSA or simulator from your vendor.

    GUI demos are also included at the bottom of the list. These are not designed or suitable for a lab but are great for those looking to get a feel of a particular vendor’s Storage GUI.


    THE WORD FROM GOSTEV – 3rd Party Backups aren’t going away.

    First off, the Veeam newsletter is great and you should sign up. One comment I found interesting was regarding the need for backups. I've always said that while Nutanix has a great integrated backup story, sometimes it doesn't meet all of the requirements of a business. Getting backups out of the storage vendor's hands is a wise decision. While Nutanix and every other vendor does rigorous QA, the fact remains that we're still human and problems can occur.

    Something like this has to happen once in a while so that everyone is reminded that storage snapshots are not backups – not even if you replicate them to a secondary array, like these folks did > HPE storage crash killed Australian Tax Office. You may still remember the same issue with EMC array crash disabling multiple Swedish agencies for 5 days not so long ago. These things just happen, this is why it is extremely important to make real backups by taking the production data out of the storage vendor’s “world” – whether we’re talking about classic storage architectures, or up and coming hyper-converged vendors (one of which have not been shy marketing < 5 min "backup" windows lately).

    Food for thought; in the end it will be whatever meets the needs of your business. AKA: can you live with the pain?