In case you missed Part 1 – Part 1: Built For Scale and Agility
No, it’s not Billy Gibbons, Dusty Hill, or drummer Frank Beard. It’s Zeus and Zookeeper providing the strong blues that allow the Nutanix Distributed File System to maintain its configuration across the entire cluster.
Lots of talk in the industry about who had software-defined storage first and who is using what components. I don’t want to go down that rat hole since it’s all marketing, and it won’t help you enable your business at the end of the day. I want to really get into the nitty-gritty of the Nutanix Distributed File System (NDFS). NDFS has been in production for over a year and a half with good success; take a read of the article in the Wall Street Journal.
Below are the core services and components that make NDFS tick. There are actually over 13 services; for example, our replication is distributed across all the nodes to provide speed and low impact on the system. The replication service is called Cerebro, which we will get to in this series.
This isn’t some home-grown science experiment; the engineers that wrote the code come from Google, Facebook, and Yahoo, where these components were invented. It’s important to realize that all components are replaceable, or future-proofed if you will. The services/libraries provide the APIs, so as the newest innovations happen in the community, Nutanix is positioned to take advantage.
All the services mentioned above run on multiple nodes in the cluster in a master-less fashion to provide availability. The nodes talk over 10 GbE and are able to scale in a linear fashion. There is no performance degradation as you add nodes. Other vendors have to use InfiniBand because they don’t share the metadata across all of the nodes. Those vendors end up putting a full copy of the metadata on each node, which will eventually cause them to hit a performance cliff, and the scaling stops. Each Nutanix node acts as a storage controller, allowing you to do things like have a datastore of 10,000 VMs without any performance impact.
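The scaling behavior described above, where metadata is shared across the nodes instead of fully copied to each one, is the general idea behind consistent hashing. This toy sketch is not Nutanix’s actual implementation (the class and node names are illustrative); it just shows why adding a node only remaps a fraction of the keyspace rather than forcing a full metadata copy onto every node:

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # Stable 32-bit position on the ring for any string key
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2**32)

class MetadataRing:
    """Toy consistent-hash ring: each node owns slices of the
    metadata keyspace, so no node holds a full copy."""

    def __init__(self, nodes, vnodes=100):
        self._ring = []  # sorted list of (position, node)
        for n in nodes:
            self.add_node(n, vnodes)

    def add_node(self, node, vnodes=100):
        # Virtual nodes spread each physical node around the ring
        for i in range(vnodes):
            bisect.insort(self._ring, (_hash(f"{node}#{i}"), node))

    def owner(self, key):
        # The key belongs to the first node clockwise from its position
        idx = bisect.bisect(self._ring, (_hash(key), chr(0x10FFFF)))
        return self._ring[idx % len(self._ring)][1]

ring = MetadataRing(["node-a", "node-b", "node-c"])
before = {f"vdisk-{i}": ring.owner(f"vdisk-{i}") for i in range(1000)}
ring.add_node("node-d")  # scale out: only a fraction of keys move
moved = sum(1 for k, v in before.items() if ring.owner(k) != v)
```

With three nodes growing to four, only roughly a quarter of the keys change owners, which is why this style of design avoids the full-copy-per-node performance cliff.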
While the diagram can look a little daunting, rest assured the complexity has been abstracted away from the end user. It’s a radical shift in data center architecture, and it will be fun breaking it down.
Below is a listing of the new and great things in Login VSI 4.0, which is GA tomorrow. I think the greatest thing from my perspective is the ease of use that the new release focused on. Login VSI is going from a tool used by partners and consultants to a tool that can be used by the regular Joe. Not that anyone couldn’t figure it out before; it just wasn’t something you would keep running as a customer.
At a feature level, Direct Desktop Launch (DDL) is my prized gem for this release. From a Nutanix perspective, I can take 3 or 4 nodes in a separate cluster and quickly figure out the impact of hardware or software changes on my VDI environment without a lot of required infrastructure. Take the number of desktops I can run after the change and divide by the number of nodes I was testing with. So even if I have a 5,000-desktop VDI production environment, I really only need to maintain a 4-node test environment to get a deep understanding of a pending change on my environment. I think customers should be looking at this tool moving forward.
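The back-of-envelope math above can be written down in a few lines. The 440-desktop result and the 48-node production count below are made-up example numbers, not measured figures:

```python
def density_per_node(desktops_ran: int, test_nodes: int) -> float:
    """Desktop density measured on the small DDL test cluster."""
    return desktops_ran / test_nodes

def projected_capacity(per_node: float, prod_nodes: int) -> int:
    """Extrapolate the per-node density to the production node count."""
    return int(per_node * prod_nodes)

# Hypothetical run: a 4-node test cluster sustains 440 desktops
# after the pending change.
per_node = density_per_node(440, 4)            # 110.0 desktops per node
projected = projected_capacity(per_node, 48)   # what 48 nodes could carry
```

Because each node is its own storage controller, the per-node density from the small test cluster is the number that matters; multiplying it out tells you whether the 5,000-desktop environment survives the change.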
* Nutanix is able to sense traffic patterns and can prevent flooding the high-performance tier with replication
* Both offer top-class time to value. With Nutanix’s ability to go from shrink-wrapped to production in an hour or less, paired with easy-to-set-up replication from Veeam, remote sites can be spun up easily. Set and forget.
* IT teams don’t have double the time to set up DR facilities. Veeam gives the ability to replicate from legacy storage arrays to the Nutanix Complete Cluster
I have presented at BriForum in 2011 and 2012, at vForums, and at various VMUGs, but never at VMworld. I had a pretty lackluster effort this year on my submission to BriForum, and it showed. I was also betting on two other vendors for my APEX + GPU + View 5.2 session to become a reality at BriForum, so I am not surprised nor mad.
I’ve made serious efforts with VMworld sessions in the past without much luck. I am very hopeful this year, though. My session this year, 5060: Benchmarking the Horizon Workspace Appliance with Performance and Scalability, is pretty cool. The session is a chance to showcase a new approach to load testing the Horizon Workspace. I hope I get the chance to co-present with Manrat Chobchuen from VMware.
If it doesn’t happen, it will be great for my local VMUG!
“By 2017, the major public cloud compute architectures will be common architectures in enterprises.”
“The greatest transformation from the cloud will come from true scale-out application architectures.”
Gartner Data Center Conference, Keynote: Rethink Infrastructure and Operations to Dramatically Reduce Costs, Raymond Paquet, December 2012.
Nutanix is helping to bring Private Cloud to the Enterprise by embracing the Software Defined Data Center (SDDC) story. The engineers of Nutanix, who come from Google, Aster Data, Facebook, Oracle, and Yahoo (that should be enough name dropping), have helped to bring all the intelligence out of big-iron hardware and into software so it can be quickly scaled up and out. In essence they have flattened the data center, combining storage and compute into one nice, easy pill to swallow. The pill happens to be in a dense block form factor, but the magic is in the software.
* DR is holistic for VDI and the rest of your data center
* Predictable scalability
* Our largest customers today all started with one block
* One node at a time
* Server and Desktop Virtualization are not the same
While I wish VDI were the silver bullet for all desktops in the enterprise, VDI can’t fit every use case. VMware Horizon Mirage can help address the use cases that don’t fit the bill, whether that’s the corporate exec in and out of airplanes, graphics-intensive machines in remote areas, or an oil refinery with many serial-based peripherals. Unfortunately, not every place in the world was blessed with a persistent Internet connection, so Mirage to the rescue?
Mirage can provide centralization, management, and recovery of desktops in a pretty clean fashion. Using recovery points and imaging technology, Mirage can provide recovery with network traffic optimization suited to the task it is performing. The founders of Mirage actually helped to form the base of Cisco’s Wide Area Application Services, so you can get some idea of how that helps the overall product. Knowing that Mirage can help with file recovery, OS migrations, and even hardware migration makes it a pretty intriguing product. The one shortfall with all physical devices left in the wild is security. One of VDI’s strengths is that it keeps data off the endpoint. If VDI is not an option, you are left to take a different course of action. Usually implementing BitLocker, Windows Encrypting File System (EFS), or some form of full disk encryption (FDE) would wreak havoc on a product like this. My goal is to see what limitations Mirage would have when security was being imposed.
The first thing to do, for my testing purposes and if you have a CEO/VP working on critical documents, is to change the default system configuration settings. If you don’t, you only get to fall back to the last daily snapshot. From looking at the file structure of Mirage, if you were to do a “sync now” from the client, it would keep a snapshot of the sync, but the rules listed to the right would rule the roost, so to speak. If you had more snapshots saved than the rules allowed, they would be deleted. From the image it’s also a good opportunity to change the default location for your CIFS/SMB share.
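The retention behavior described above, where extra snapshots beyond what the rules allow get deleted, amounts to a simple prune-to-N policy. A minimal sketch of that idea, with made-up timestamps and no relation to Mirage’s real on-disk format:

```python
from datetime import datetime

def prune_snapshots(snapshots: list[datetime], keep: int) -> list[datetime]:
    """Return the `keep` most recent snapshots, newest first; anything
    older falls outside the retention rules and is dropped."""
    return sorted(snapshots, reverse=True)[:keep]

# Seven daily syncs; a keep-3 rule means a "sync now" you did on day 1
# is long gone by day 7 unless you raised the defaults.
syncs = [datetime(2013, 6, d, 12, 0) for d in range(1, 8)]
survivors = prune_snapshots(syncs, keep=3)  # only the three newest remain
```

This is exactly why the default settings matter: the exec’s critical document from last week is only recoverable if the rules kept a snapshot that old.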
In my previous post I talked about how to set up caching on Nutanix. Here we are adding two 1 TB drives to our Windows 2012 VM, which will reside on our NFS volume that has compression enabled. We use two different SCSI controllers for performance. More disks, more threads, better performance. Keep in mind that Mirage will only support 1,500 connections per Mirage server, so there is not much point throwing all your storage at one Mirage server. Scale it out with DFS.
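Given that 1,500-connection ceiling, sizing the Mirage server farm for DFS scale-out is just a ceiling division. A quick sketch; the 20% headroom figure is my own assumption, not a Mirage recommendation:

```python
import math

MAX_CONNECTIONS_PER_SERVER = 1500  # Mirage's stated per-server limit

def mirage_servers_needed(endpoints: int, headroom: float = 0.2) -> int:
    """Servers required for a fleet, reserving headroom so one
    server's loss doesn't push the survivors past the limit."""
    effective = MAX_CONNECTIONS_PER_SERVER * (1 - headroom)
    return max(1, math.ceil(endpoints / effective))

servers = mirage_servers_needed(5000)  # 5000 endpoints / 1200 effective -> 5
```

Spreading those servers behind DFS keeps any single namespace or server from becoming the bottleneck as the endpoint count grows.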