Archives for November 2014

Nov
27

Thankful for No CLI for Drive Replacement

Like most things in life, things are not always as easy as they seem. With NOS 4.0.2, though, drive replacement is as simple as the picture below.

[Image: ssd_remove_nx3050 – removing an SSD from the UI]

Remove the disk from the UI, stick the new drive in, done. The new drive will get added back into the storage pool automatically (if there is only one storage pool, which is the default).

Nov
26

Web-Scale 101 eBook

The Web-Scale 101 eBook is packed with information on converged infrastructure, web-scale tech, quiz questions, and the benefits of using web-scale properties inside of your data center. It's a great visual resource for learning about Cassandra, Hadoop, Paxos, and ZooKeeper and how they can help your virtual environment. I must say there is a really good quote at the end of the book too 🙂

Go get your copy today by clicking on the book below.

[Image: book]

Nov
25

EUC TIP: Have slow logon times? via Fermin Echegaray – Nutanix Support

This post is courtesy of Fermin Echegaray, a Global Support Engineer at Nutanix. It goes to show why Nutanix has one of the highest customer satisfaction ratings in the industry. If it’s running on Nutanix, we are going to help.

Some time ago I found this very nifty tool while working with a customer; it is helpful in determining if GPOs are causing slow logon times.
http://www.sysprosoft.com/policyreporter.shtml

The tool needs to be installed on one of the VMs, and it should assist you with setting up verbose policy logging. If it fails to do so, these are the manual directions:

Define a registry value like this:

HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon
Entry: UserEnvDebugLevel
Type: REG_DWORD
Value data: 30002 (Hexadecimal)
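
If you prefer the command line, the same value can be set with reg.exe from an elevated prompt (a quick sketch of the equivalent command, not from the original post):

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v UserEnvDebugLevel /t REG_DWORD /d 0x30002 /f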

To make sure you have current log data, do the following:
Go to %SystemRoot%\Debug\UserMode and delete or rename the current Userenv.log, then log off and log back on to reproduce the problem. A new Userenv.log will be produced.
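
For example, from a command prompt (my own sketch of that step):

ren %SystemRoot%\Debug\UserMode\Userenv.log Userenv.old.log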

Using this tool, I once found that a customer’s IE Branding policy took 14 seconds to complete; disabling it obviously accelerated the logon time.

Nov
24

VVOLs – Improving Per-VM Management

Lots of hype around VVOLs (Virtual Volumes) these days as we approach the next vSphere release. When VVOLs was first talked about in 2011 I wasn’t really impressed. The idea of VVOLs didn’t seem new to me because Nutanix at the time had already broken free of LUNs and was already well down the path of per-VM management. Fast forward to today: I still think it is a bit of a get-out-of-jail card for traditional storage vendors, but my attitude has definitely changed. There is a wealth of information available out on the web, like the VMworld website, and you can see the direction VMware is going with it. Not only does VVOLs give per-VM management, it separates capacity from performance and simplifies connecting storage. I personally think VMware is making a very smart tactical move. VMware is creating its own control plane. This will be a heavily leveraged advantage that their competitors will have to overcome.

To me the crown jewel of the whole implementation is VASA (vSphere Storage APIs for Storage Awareness), which was first introduced in vSphere 5.0. VASA was limited in its initial release: vendors could only assign one capability to each datastore, and users could also supply only one capability. VASA 2.0 is probably what a lot of people thought VASA 1.0 was initially going to be. VASA 2.0 is the foundation for Storage Policy Based Management (SPBM) in my mind. VASA with SPBM will allow for placement of virtual disks, give insight into the newly introduced storage containers, and allow better support for Storage DRS. I am also hoping VVOLs eliminates the limit of 2,048 VMs per datastore when High Availability is enabled. We preach giant datastores at Nutanix, so it will be nice to have everything on par.

Nutanix will support VVOLs, though there is no commitment to a public timeline. I don’t lead engineering, but my educated guess is that VASA 2.0 will get implemented with our ZooKeeper layer, which is responsible for maintaining configuration. Like a VASA 2.0 provider, ZooKeeper is highly available and already keeps track of all of the containers you create on Nutanix. VASA 2.0 and ZooKeeper are also similar in that they’re not in the I/O path. vCenter controls the activation of VASA providers, but after that it’s host-to-provider, so vCenter can be shut down without impacting availability.

Protocol Endpoint (PE) is another component that makes up the VVOLs family. PEs help abstract connecting to all of your VVOLs, whether you’re using iSCSI, NFS, or Fibre Channel. With Nutanix you don’t really have to worry about connecting your underlying storage or setting up multi-pathing; this is all taken care of for you under the covers. PEs may or may not cause existing storage vendors a lot of grief. Additional overhead will need to be taken into account because now, instead of failing over a LUN to another controller, you’re failing over possibly hundreds of VVOLs.
If you look at the breakdown of a VVOL, you see that many VVOLs actually make up one virtual machine (a rough example follows the list below).

5 Types of VVOLs:

• Config-VVol – Metadata
• Data-VVol – VMDKs
• Mem-VVol – Memory snapshots
• Swap-VVol – Swap files
• Other-VVol – Vendor-specific
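
As a rough illustration (my own numbers, not from VMware’s material): a powered-on VM with two virtual disks and one memory snapshot would break down to roughly 1 config-VVol + 2 data-VVols + 1 swap-VVol + 1 mem-VVol = 5 VVols, and the count grows with every additional disk and snapshot.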

Simply put, there will be a lot more connections to manage. This could put additional stress on the “hardware added” resellers. If you’re relying on NVRAM and you’re already full or tight on space, this is going to make matters worse. Nutanix has always had to do this, so I wouldn’t think things would change much here. Today any file over 512 bytes is mapped to a vDisk on the Nutanix side, so the overhead should stay the same.

VVOLs also introduces the concept of storage containers. When I give a demo of the Nutanix Prism UI, I have been saying for at least the last year that our containers are just a way of grouping VMs that you want to have the same policy or capabilities. VVOLs and Nutanix are pretty similar this way. Both VVOLs storage containers and Nutanix containers share these common traits:

• A storage container is only limited by available hardware.
• You must have at least one storage container.
• You can have multiple storage containers per array/file system.
• You assign the capabilities to the storage container.

VVOL storage containers will not allow you to span storage controllers unless that is already built into the underlying storage system.

The exciting bit is that VVOLs will enable you to get a lot of the same functionality that you previously had to go into the storage system UI for, like snapshots and compression. While I think this is great for management and user experience, I think a lot of people are going to have to re-examine their security on vCenter. Nothing like giving everyone access to create unlimited snapshots, let’s see what happens! It is probably more of an enterprise issue though. At the last VMUG I was at in Edmonton, when I asked if the storage person and the virtualization person were the same person, the vast majority of people put up their hands. I guess more checks and balances are never a bad thing.

Personally it’s great to see the similarities between the architecture being used for VVOLs and Nutanix. While there will be some heavy lifting in regards to API integration, at least the meat and potatoes of the product won’t have to change to accommodate it. In general VVOLs will make life easier for Nutanix and our end users, so thumbs up here.

Please comment below if you have thoughts on the topic.

Nov
14

Why Did the #Dell + #Nutanix Deal Happen? Bob Wallace Explains on theCUBE

After spending several years trying to carve itself a slice of the converged infrastructure market to little avail, Dell Inc. finally changed strategies in June, teaming up with Nutanix Inc. to supply the hardware for its wildly popular hyperconverged appliances. Bob Wallace, the head of OEM sales for the fast-growing startup, appeared in a recent interview on SiliconANGLE’s theCUBE with hosts Dave Vellante and Stu Miniman to share how the alliance is helping both companies pursue their respective goals.

Check out the SiliconANGLE YouTube Channel

Nov
14

NOS 4.0.2 – Exclusive Usage Bytes vs. Changed Bytes

Prior to 4.0.2, only Changed Bytes existed to help tell the tale of the difference between snapshots, and it could be misleading depending on how you interpreted the results. Today both fields exist, so you can see how much space the snapshots are taking up and how much I/O is going through the system.

Exclusive Usage Bytes:
Total space that can be reclaimed when this snapshot is deleted and garbage collected.
Changed Bytes:
Amount of user I/O to the entities in this protection domain between the previous snapshot and this snapshot.
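
A hypothetical example to show the difference: say 10 GB of user writes land between snapshot A and snapshot B, but 8 GB of the overwritten blocks are still referenced by an older snapshot or the live VM.

Changed Bytes = 10 GB (all user I/O between the two snapshots)
Exclusive Usage Bytes = 2 GB (only the blocks nothing else references, i.e. what deleting the snapshot would reclaim)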

[Image: bytes]

The Exclusive Usage Bytes field is calculated with the power of MapReduce, and the stats are fed back to the Prism UI. This particular stat is not real time and takes approximately an hour to show up.

Nov
10

Nutanix Async Replication – Save Your Flash

Nutanix provides async replication as of NOS 4.0.2.

Since lots of small remote sites have limited bandwidth, it doesn’t make sense to impact running workloads and wear out the flash on the destination side. If the aggregate incoming bandwidth required to maintain the current change rate is <= 500 Mb/s, it is recommended to skip the performance tier (SSDs). This is a general guideline for a 4-node cluster; as your cluster grows you can add 100 Mb/s per node to that number. For example, a 12-node cluster should be able to safely handle a change rate of <= 1,300 Mb/s.

I would also recommend that the destination container be brand new when you set up the remote site, if possible. That way you can apply the appropriate policies without impacting other workloads. If licensing permits, I would also use MapReduce compression on the destination container to save space.

To skip the performance tier, use the following command from the NCLI:

ncli ctr edit sequential-io-priority-order=DAS-SATA,SSD-SATA,SSD-PCIe name=<destination container name>

You can always reverse the above change back to the defaults if you perform a failover.
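
If you do fail over, reverting is the same command with the tier order flipped back. A sketch, assuming the default order is SSD first (verify the actual defaults on your own cluster; the container name is a placeholder):

ncli ctr edit sequential-io-priority-order=SSD-PCIe,SSD-SATA,DAS-SATA name=<destination container name>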

NOTE: SSDs are fully covered under support regardless of usage.