Sep 15

HYCU v1.5 – Nutanix AHV Backup gets new features

HYCU v1.5 has been released by Comtrade.

The biggest one for me is ABS support! Now you can use cheap-and-deep storage and drive all of the storage controllers. Remember that ADS works with ABS, so it's a great solution.

The following new features and enhancements are available with Hycu version 1.5.0:

Backup window
It is now possible to set up a time frame when your backup jobs are allowed to run (a backup window). For example, you can schedule your backup jobs to run during non-production hours, reducing load at peak times.

Concurrent backups
You can now specify the maximum number of concurrent backup jobs per target. By doing so, you can reduce the duration of backups and the number of queued backup jobs.

Faster recovery
Hycu enables you to keep snapshots from multiple backups on the Nutanix cluster. Keeping multiple snapshots allows you to recover a virtual machine or an application quickly, reducing downtime. If you know Commvault IntelliSnap, the benefits are very similar.

iSCSI backup target
A new type of backup target for storing the protected data is available—iSCSI, which also makes it possible for you to use a Nutanix volume group as a backup target. You can use the iSCSI backup target for backing up the Hycu backup controller as well.

Improved backup and restore performance
Hycu now utilizes Nutanix volume groups for backing up and restoring data, taking advantage of the load balancing feature offered by Nutanix. Therefore, the new version of Hycu can distribute the workload between several nodes, which results in increased performance of your backup and restore operations and reduced I/O load on the Nutanix cluster and containers.

Support for AWS S3-compatible storage
Hycu enables you to store your protected data on AWS S3-compatible storage.
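
As an aside, "S3-compatible" here just means any object store that speaks the S3 API at a custom endpoint. A generic illustration with the AWS CLI (the bucket name and endpoint URL are hypothetical, and this is not HYCU's own configuration flow):

# list a bucket on an S3-compatible store by pointing the CLI at its endpoint
aws s3 ls s3://my-backup-bucket --endpoint-url https://objects.example.com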

Shared location for restoring individual files
You can now restore individual files to a shared location so that recovered data can be accessed from multiple systems.

Support for Active Directory applications
In addition to SQL Server applications, Hycu can now also detect and protect Active Directory applications running on virtual machines. You can view all the discovered applications in the Applications panel.

Expiring backups manually

If there is a restore point that you do not want to use for a data restore anymore, you can mark it as expired. Expired backups are removed from the backup target within the next 24 hours, resulting in more free storage space and helping you to keep your Hycu system clean.

Support for Nutanix API changes

Hycu supports the Nutanix API changes introduced with AOS 5.1.1.1.

Sep 14

AOS 5.1.2 Security Updates

A long list of updates; one-click upgrade yourself to safety.
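
If you want to confirm where you stand before and after the one-click upgrade, two commands you can run from any CVM (a sketch; output formats vary by release):

ncli cluster info    # reports the running AOS version, among other cluster details
upgrade_status       # shows progress while a one-click AOS upgrade is running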

CVE-2017-1000364 kernel: heap or stack gap jumping occurs through unbounded stack allocations (Stack Guard or Stack Clash)

CVE-2017-1000366 glibc: heap or stack gap jumping occurs through unbounded stack allocations (Stack Guard or Stack Clash)

CVE-2017-2628 curl: negotiate not treated as connection-oriented

CVE-2017-3509 OpenJDK: improper re-use of NTLM authenticated connections (Networking, 8163520)

CVE-2017-3511 OpenJDK: untrusted extension directories search path in Launcher (JCE, 8163528)

CVE-2017-3526 OpenJDK: incomplete XML parse tree size enforcement (JAXP, 8169011)

CVE-2017-3533 OpenJDK: newline injection in the FTP client (Networking, 8170222)

CVE-2017-3539 OpenJDK: MD5 allowed for jar verification (Security, 8171121)

CVE-2017-3544 OpenJDK: newline injection in the SMTP client (Networking, 8171533)

CVE-2016-0736 httpd: Padding Oracle in Apache mod_session_crypto

CVE-2016-1546 httpd: mod_http2 denial-of-service by thread starvation

CVE-2016-2161 httpd: DoS vulnerability in mod_auth_digest

CVE-2016-8740 httpd: Incomplete handling of LimitRequestFields directive in mod_http2

CVE-2016-8743 httpd: Apache HTTP Request Parsing Whitespace Defects

CVE-2017-8779 rpcbind, libtirpc, libntirpc: Memory leak when failing to parse XDR strings or bytearrays

CVE-2017-3139 bind: assertion failure in DNSSEC validation

CVE-2017-7502 nss: Null pointer dereference when handling empty SSLv2 messages

CVE-2017-1000367 sudo: Privilege escalation via improper get_process_ttyname() parsing

CVE-2016-8610 SSL/TLS: Malformed plain-text ALERT packets could cause remote DoS

CVE-2017-5335 gnutls: Out of memory while parsing crafted OpenPGP certificate

CVE-2017-5336 gnutls: Stack overflow in cdk_pk_get_keyid

CVE-2017-5337 gnutls: Heap read overflow in read-packet.c

CVE-2017-1000368 sudo: Privilege escalation via improper get_process_ttyname() parsing

CVE-2017-3142 bind: An error in TSIG authentication can permit unauthorized zone transfers

CVE-2017-3143 bind: An error in TSIG authentication can permit unauthorized dynamic updates

CVE-2017-10053 OpenJDK: reading of unprocessed image data in JPEGImageReader (2D, 8169209)

CVE-2017-10067 OpenJDK: JAR verifier incorrect handling of missing digest (Security, 8169392)

CVE-2017-10074 OpenJDK: integer overflows in range check loop predicates (Hotspot, 8173770)

CVE-2017-10078 OpenJDK: Nashorn incompletely blocking access to Java APIs (Scripting, 8171539)

CVE-2017-10081 OpenJDK: incorrect bracket processing in function signature handling (Hotspot, 8170966)

CVE-2017-10087 OpenJDK: insufficient access control checks in ThreadPoolExecutor (Libraries, 8172204)

CVE-2017-10089 OpenJDK: insufficient access control checks in ServiceRegistry (ImageIO, 8172461)

CVE-2017-10090 OpenJDK: insufficient access control checks in AsynchronousChannelGroupImpl (Libraries, 8172465)

CVE-2017-10096 OpenJDK: insufficient access control checks in XML transformations (JAXP, 8172469)

CVE-2017-10101 OpenJDK: unrestricted access to com.sun.org.apache.xml.internal.resolver (JAXP, 8173286)

CVE-2017-10102 OpenJDK: incorrect handling of references in DGC (RMI, 8163958)

CVE-2017-10107 OpenJDK: insufficient access control checks in ActivationID (RMI, 8173697)

CVE-2017-10108 OpenJDK: unbounded memory allocation in BasicAttribute deserialization (Serialization, 8174105)

CVE-2017-10109 OpenJDK: unbounded memory allocation in CodeSource deserialization (Serialization, 8174113)

CVE-2017-10110 OpenJDK: insufficient access control checks in ImageWatched (AWT, 8174098)

CVE-2017-10111 OpenJDK: incorrect range checks in LambdaFormEditor (Libraries, 8184185)

CVE-2017-10115 OpenJDK: DSA implementation timing attack (JCE, 8175106)

CVE-2017-10116 OpenJDK: LDAPCertStore following referrals to non-LDAP URLs (Security, 8176067)

CVE-2017-10135 OpenJDK: PKCS#8 implementation timing attack (JCE, 8176760)

CVE-2017-10193 OpenJDK: incorrect key size constraint check (Security, 8179101)

CVE-2017-10198 OpenJDK: incorrect enforcement of certificate path restrictions (Security, 8179998)

Sep 14

Acropolis Dynamic Scheduler (ADS) for AHV (Compute + Memory + Storage)

The Acropolis Dynamic Scheduler (ADS) ensures that compute (CPU and RAM) and storage resources are available for VMs and volume groups (VGs) in the Nutanix cluster. ADS, enabled by default, uses real-time statistics to determine:

Initial placement of VMs and VGs, specifically which AHV host runs a particular VM at power-on or a particular VG after creation.

Required runtime optimizations, including moving particular VMs and VGs to other AHV hosts to give all workloads the best possible access to resources.

If a problem is detected, a migration plan is created and executed, eliminating hotspots in the cluster by migrating VMs from one host to another. This feature only detects contention that is currently in progress. You can monitor these tasks from the Task dashboard of the Prism web console. You can click the VM link to view the migration information, which includes the migration path (the destination AHV host).

The Acropolis Block Services (ABS) feature uses ADS for balancing sessions of the externally visible iSCSI targets.

Jul 06

Recovery Points and Schedules with Near-Sync on Nutanix

Primer post on near-sync

For the GA release, near-sync will only be offered with a telescopic schedule (time-based retention). When you set the RPO anywhere from 1 minute up to 15 minutes, you will have the option to save your snapshots for X number of weeks or months.

As an example, if you set the RPO to 1 minute with 1 month of retention, it would look like this:

X = the RPO
Y = the retention schedule

Every X minutes, create a snapshot retained for 15 minutes (these are the Light-Weight Snapshots; they appear as normal snapshots in Prism).
Every hour, create a snapshot retained for 6 hours.
Every day, create a snapshot retained for 1 week.
One weekly snapshot retained for 4 weeks (if you select a schedule that retains for 7 weeks, Y would be 7 weeks and no monthly snapshot would occur).
One monthly snapshot retained for Y months.

Subject to change, as we’re still finalizing sizing and thresholds based on real-world testing, but the user will have an option to change these retention values via NCLI.
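
To make the telescoping concrete, here is a rough count of restore points held at steady state under the 1-minute RPO / 1-month example above (my own arithmetic, subject to the same caveat about changing thresholds):

# counts implied by the example schedule above
lws=15      # 1-minute LWS, each kept for 15 minutes
hourly=6    # hourly snapshots kept for 6 hours
daily=7     # daily snapshots kept for 1 week
weekly=4    # weekly snapshots kept for 4 weeks
monthly=1   # one monthly snapshot kept for Y months
echo "restore points at steady state: $((lws + hourly + daily + weekly + monthly))"   # prints 33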

Jul 06

Delete AFS Forcefully

There can be instances when graceful removal of a file server does not work and you see the following error. This can happen when the file server is unavailable because it was deleted without following the right process; sometimes the FSVMs get deleted directly instead of using the delete workflow in the file server section of Prism.

ncli fs delete uuid=________________________
Error: File server with uuid xxxxx-xxxxx-xxxxx is offline and unable to fetch file server VMs.

Use the following command to delete the file server permanently from the Minerva Database. This is run from any CVM.
minerva --fs_uuids _________________ force_fileserver_delete

The file server UUID can be obtained from the ncli fs ls command.
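
Putting the two commands together, a minimal sketch of the whole workflow from any CVM (the UUID value is a placeholder):

ncli fs ls                                       # note the Uuid of the stale file server
minerva --fs_uuids <file-server-uuid> force_fileserver_delete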

Jul 05

Powering Off and Starting Up AFS – Native File Services on Nutanix

There really isn’t a need to shut down AFS, but moves and maintenance are a part of life. Here are the steps for a clean shutdown…

-DL

Shutting Down:
• Power off all guest VMs on the cluster, leaving only FSVMs and CVMs powered on.
• From any CVM, run: minerva -a stop
• The stop command stops AFS services and powers off the FSVMs for all file servers.
• Once only the CVMs remain powered on, run cluster stop from any CVM.
• Power off the CVMs and hosts.

Starting Up:
• Power on the hosts.
• The CVMs will auto-start once each host is up.
• Once all CVMs are up, run cluster start to initiate cluster services.
• Verify all services are up with cluster status.
• From any CVM, run: minerva -a start
• The start command powers on the FSVMs for all file servers and starts AFS services.
• Power on all remaining guest VMs.
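
For quick reference, here are the CVM-side commands from the two sequences above in one place (host and guest VM power operations happen out of band and are not shown):

# shutdown
minerva -a stop      # stop AFS services and power off all FSVMs
cluster stop         # stop cluster services once only CVMs remain up

# startup, after hosts and CVMs are back
cluster start        # start cluster services
cluster status       # verify all services are up
minerva -a start     # power on FSVMs and start AFS services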

Jul 04

Will DR and Backup power AHV sales to 50%?

.Next has come and gone like your favorite holiday: tons of hustle and bustle and great euphoric feelings, followed by hitting a wall and being extremely tired. The Nutanix conference was chock-full of announcements, but the most compelling to me were the ones related to DR and backup. AHV, built with cloud in mind, is surging, but adoption has been slowed by third-party backup support. You can have this great automated hypervisor with best-in-class management, but if you can't back it up easily, that will curb adoption.

This number will grow rapidly now with all of the backup and DR options.

So before .Next 2017, DR and backup options for AHV included:
• Commvault with support for IntelliSnap
• Time Stream – native DR with Time Stream
o 1-node backup with the NX-1155
o Backup/DR to storage-only clusters
o Cloud Connect to AWS and Azure
o DR/backup to a full cluster
• Any backup software with agents

After the .Next 2017 announcements, AHV backup and DR support includes:
• HYCU from Comtrade – rapidly deployed software using turn-key appliances. A great choice if you have some existing hardware that you can use or place onto Nutanix. Point, click, done. Check out more here.
• Rubrik – hardware-based appliances that do the heavy lifting for you. Check out more here.
• Veeam – probably best known for making backup easy on ESXi, Veeam has announced support for AHV later this year. Nutanix added Veeam as a Strategic Technology Partner within the Nutanix Elevate Alliance Partner Program. Going Green!
• Druva – Nutanix users can now take full advantage of Druva Phoenix’s unique cloud-first approach, with centralized data management and security. Only ESXi today, and agents with AHV, but agentless support is coming. More here.
• Backup and DR to full Nutanix clusters get near-sync to achieve very low RPOs. Read more on near-sync here.
• Xi Cloud Services – a native cloud extension to the Nutanix Enterprise Cloud Platform, which powers more than 6,000 end-customers around the globe. This announcement marks another significant step toward realizing the Enterprise Cloud vision: delivering a true cloud experience for any application, in any deployment model, using an open platform approach. For the first time, Nutanix software will be able to be consumed as a cloud service.

Maybe 50% for AHV is a lofty goal, but I can see 40% of new sales by next year as people focus on their business rather than the day-to-day headaches. With very strong backing in backup and DR, AHV growth will flourish.

Jun 30

The Down Low on Near-Sync On Nutanix

Nutanix refers to its current implementation of redirect-on-write snapshots as vDisk-based snapshots. Nutanix has continued to improve its snapshot implementation by adding Light-Weight Snapshots (LWS) to provide near-sync replication. LWS uses markers instead of creating full snapshots for RPOs of 15 minutes and under. LWS further reduces the overhead of managing metadata and removes the overhead of the long snapshot chains that a high number of frequent snapshots would otherwise cause. The administrator doesn't have to set a policy choosing between vDisk snapshots and LWS: the Acropolis Operating System (AOS) transitions between the two forms of replication based on the RPO and available bandwidth. If the network can't handle the low RPO, replication transitions out of near-sync. When the network can meet the near-sync requirements again, AOS starts using LWS again. In over-subscribed networks, near-sync can provide almost the same level of protection as synchronous replication without impacting the running workload.

The administrator only needs to set the RPO; no knowledge of near-sync is needed.

The tradeoff is that all changes are handled in SSD when near-sync is enabled. Because of this tradeoff, Nutanix reserves a percentage of SSD space to be used by LWS when near-sync is enabled.

[Diagram: near-sync snapshot and replication flow]

In the above diagram, a vDisk-based snapshot is taken first and replicated to the remote site. Once the full replication is complete, LWS begins at the set schedule. If there is no remote site set up, LWS happens locally right away. If you have the bandwidth available, life is good, but that's not always the case in the real world. If you miss your RPO target repeatedly, replication automatically transitions back to vDisk-based snapshots. Once the vDisk-based snapshots occur fast enough, it automatically transitions back to near-sync. Both the transition out of and the transition into near-sync are controlled by advanced settings called gflags.
On the destination side, AOS creates hydration points. A hydration point is a way for LWS to transition into a vDisk-based snapshot. The process for inline hydration is to:

1. Create a staging area for each VM (consistency group, or CG) that's protected by the protection domain.
2. The staging area is essentially a directory with a set of vDisks for the VM.
3. Any new incoming LWSs are applied to that same set of vDisks.
4. The staging area is snapshotted from time to time, yielding individual vDisk-backed snapshots.

The source side doesn't need to hydrate, as a vDisk-based snapshot is taken every hour.

Have questions? Please leave a comment.

Jun 29

ROBO Deployments & Operations Best Practices on Nutanix

The Nutanix platform’s self-healing design reduces operational and support costs, such as unnecessary site visits and overtime. With Nutanix, you can proactively schedule projects and site visits on a regular cadence, rather than working around emergencies. Prism, our end-to-end infrastructure management tool, streamlines remote cluster operations via one-click upgrades, while also providing simple orchestration for multiple cluster upgrades. Following the best practices in this new document ensures that your business services are quickly restored in the event of a disaster. The Nutanix Enterprise Cloud Platform makes deploying and operating remote and branch offices as easy as deploying to the public cloud, but with control and security on your own terms.

One section I would like to call out in the doc is how to seed your customer data if you're dealing with poor WAN links.

Seed Procedure

The following procedure lets you use seed cluster (SC) storage capacity to bypass the network replication step. In the course of this procedure, the administrator stores a snapshot of the VMs on the SC while it's installed at the ROBO site, then physically ships it to the main datacenter.

1. Install and configure application VMs on a ROBO cluster.
2. Create a protection domain (PD) called PD1 on the ROBO cluster for the VMs and volume groups.
3. Create an out-of-band snapshot S1 for the PD on ROBO with no expiration.
4. Create an empty PD called PD1 (same name used in step 2) on the SC.
5. Deactivate PD1 on the SC.
6. Create remote sites on the ROBO cluster and the SC.
7. Retrieve snapshot S1 from the ROBO cluster to the SC (via Prism on the SC).
8. Ship the SC to the datacenter.
9. Re-IP the SC.
10. Create remote sites on the SC and on the datacenter main cluster (DC1).
11. Create PD1 (same name used in steps 2 and 4) on DC1.
12. Deactivate PD1 on DC1.
13. Retrieve S1 from the SC to DC1 (via Prism on DC1). Prism generates an alert here; although it appears to be a full data replication, the SC transfers metadata only.
14. Create remote sites on DC1 and the ROBO cluster.
15. Set up a replication schedule for PD1 on the ROBO cluster in Prism.
16. Once the first scheduled replication succeeds, you can delete snapshot S1 to reclaim space.

To get all of the best practices, download the full document here: https://portal.nutanix.com/#/page/solutions/details?targetId=BP-2083-ROBO-Deployment:BP-2083-ROBO-Deployment

Jun 28

Rubrik and AHV: Say No to Proxies

For the last couple of years I have been a huge fan of backup software that removes the need for proxies. Rubrik provides a proxy-less backup solution by using the Nutanix Data Services Virtual IP address to talk directly to each individual virtual disk that it needs to back up.
Rubrik and Nutanix have some key advantages with this solution:
• AOS 5.1+ with the version 3 APIs provides changed-region tracking, allowing quick, efficient backups with no hypervisor-based snapshot required.
• With AHV and data locality, Rubrik can grab the most recently changed data without flooding the network, which can happen when the copy and the VM don't live on the same host. With Nutanix, the reads happen locally.
• Rubrik has access to every virtual disk by making an iSCSI connection, bypassing the need for proxies (see the sketch after this list).
• If the backup load becomes too great during a backup window, AOS can redirect the second RF copy away from a node with its advanced data placement, protecting your mission-critical apps that run 24/7.
• Did I mention no proxies? 🙂
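
For a rough sense of what that proxy-less iSCSI path looks like at the transport level, here is a generic Linux-initiator sketch pointed at the Data Services Virtual IP (the IP address and target name are placeholders; Rubrik drives this internally, so you would not normally run it yourself):

# discover iSCSI targets exposed through the Data Services Virtual IP
iscsiadm -m discovery -t sendtargets -p 10.0.0.50:3260
# log in to one of the discovered targets
iscsiadm -m node -T iqn.2010-06.com.nutanix:example-target -p 10.0.0.50:3260 --login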

Stop by the Rubrik booth and catch their session if you're at .Next this week.