Feb 26

Nutanix Cache Money: When to Spend, When to Save

Spend wisely

Every Nutanix virtual storage controller has local cache to serve the workloads running directly on its node. A question that comes up is whether the local cache should be increased. No one ever complained about having too much cache, but on a hyperconverged appliance we want to keep RAM available for the running workloads if needed. I would never recommend simply giving every controller virtual machine (CVM) 50 GB or 80 GB of RAM and seeing where that gets you.

The cache on the CVM is automatically adjusted when the CVM’s RAM is increased. I recommend increasing the CVM memory in 2 GB increments and tracking the effectiveness of each change. Even a starting point of 16 GB on a node with 256 GB of RAM is only ~6% of the available RAM.

Nutanix CVM resource starting points

Parameter             Configuration (Base)    Configuration (Inline Dedupe)
Memory Size           Increase to 16 GB       Increase to 24 GB
Memory Reservation    Increase to 16 GB       Increase to 24 GB

Base (Non-Dedupe)

Go to any CVM IP address and check the Stargate diagnostics page at http://<CVM IP>:2009, then use the guidelines below before increasing the RAM on the CVM. You may need to allow access to port 2009 if you’re accessing the page from a different subnet; this is covered in the setup guide.
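If you have several nodes to check, a minimal sketch like the one below (Python 3, with a placeholder CVM IP) can pull the 2009 page and print any extent cache lines for a quick look. The page is plain HTML and its layout isn’t guaranteed, so treat this as an eyeball aid, not a parser.

```python
# Minimal sketch: fetch the Stargate diagnostics page from a CVM and print
# lines that mention the extent cache. The IP is a placeholder; the page
# layout isn't guaranteed, so this is only a quick eyeball aid.
import re
import urllib.request

CVM_IP = "10.0.0.50"  # placeholder: any CVM IP in the cluster
URL = "http://{}:2009".format(CVM_IP)

with urllib.request.urlopen(URL, timeout=10) as resp:
    page = resp.read().decode("utf-8", errors="replace")

# Crudely strip tags and keep only lines mentioning the extent cache.
text = re.sub(r"<[^>]+>", " ", page)
for line in text.splitlines():
    if "extent cache" in line.lower():
        print(line.strip())
```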

Extent Cache

Amount of CVM RAM    Extent Cache Hits    Extent Cache Usage    Recommendation
16 GB                70% – 95%            > 3200 MB             Increase CVM RAM to 18 GB
18 GB                70% – 95%            > 4000 MB             Increase CVM RAM to 20 GB

NOTE: Going higher than 20 GB of RAM on a CVM will automatically start allowing RAM to be used for dedupe. If you don’t enable dedupe past 20 GB of RAM, you will be wasting RAM resources. You can prevent this from happening with gflags; it’s best to contact support on how to limit the RAM used for dedupe if you feel your workload won’t benefit from it.
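As a rough illustration only, the guidelines above can be written as a small decision helper. The thresholds simply mirror the table and the 20 GB note; the inputs are the numbers you read off the 2009 page.

```python
# Sketch of the extent cache guidelines above as a decision helper.
# Inputs come from the Stargate 2009 page; thresholds mirror the table.
def next_cvm_ram_gb(current_ram_gb, hit_rate_pct, extent_cache_usage_mb):
    """Suggest the next CVM RAM size based on extent cache behaviour."""
    busy = 70 <= hit_rate_pct <= 95
    if current_ram_gb >= 20:
        # Past 20 GB the extra RAM starts going to dedupe (content cache);
        # enable dedupe or ask support about gflags before going higher.
        return current_ram_gb
    if current_ram_gb == 16 and busy and extent_cache_usage_mb > 3200:
        return 18
    if current_ram_gb == 18 and busy and extent_cache_usage_mb > 4000:
        return 20
    return current_ram_gb  # no change indicated


print(next_cvm_ram_gb(16, 88, 3500))  # -> 18
```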

Dedupe
Using the Prism UI you can assess whether more RAM will help the hit rate ratio. Cache from dedupe is referred to as the content cache, which spans both RAM and flash. It is possible to have a high hit rate ratio with little of it being served from RAM.

In the Analysis section of the UI, check how much physical RAM is making up the content cache and what your return on it is.

If the memory being saved is over 50% of the physical memory being used and the hit rate ratio is above 90%, you can bump up the CVM memory.
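Written as a quick sketch, that check looks like this; the inputs are the values read off the Prism Analysis page, and the thresholds are the ones above.

```python
# Sketch of the content cache check: more RAM is likely to pay off when dedupe
# is saving over half of the physical RAM it occupies and the hit rate is high.
def should_bump_cvm_ram(ram_used_mb, ram_saved_mb, hit_rate_pct):
    savings_ratio = ram_saved_mb / ram_used_mb if ram_used_mb else 0.0
    return savings_ratio > 0.5 and hit_rate_pct > 90


print(should_bump_cvm_ram(ram_used_mb=4096, ram_saved_mb=2500, hit_rate_pct=93))  # True
```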

NOTE: For both the extent cache and the content cache it is possible to have a low hit rate ratio and high resource usage and still benefit from more RAM. In a really busy system the working set may be so large that data gets cycled through the cache before it can be hit a second time. Our recommendation is to increase the CVM memory if you know your maximum CPU limit on the host; available memory can help the running workload instead of sitting idle.

Hopefully this gives some context before you change settings in your environment.

Learn more about Nutanix with The Nutanix Bible

Nov 04

Big or Small, Scale Matters in Operations

Sometimes scale gets construed as huge, some quantity of capacity in IT that few shops will ever reach. I think all companies, big or small, need the ability to scale. The ability to scale allows customers to buy what they need when they need it and, most importantly, use it right away. It can be 6 TB or 60 PB; it’s all relative.

The prized gem behind Nutanix’s ability to scale revolves around Apache Cassandra (NoSQL) and Paxos. Nutanix stores its metadata in Apache Cassandra. There is a good write-up on how Paxos works with NoSQL on Nutanix in the Nutanix Bible. I really enjoyed the ending of a recent article, “Next gen NoSQL: The demise of eventual consistency?”

The next generation of commercial distributed databases with strong consistency won’t be as easy to build, but they will be much more powerful than their predecessors. Like the first generation, they will have true shared-nothing distributed architectures, fault tolerance and scalability.

Why did I enjoy it? Because this is what Apache Cassandra (NoSQL) and Paxos give Nutanix today. NoSQL is a powerful tool for responding to change, and combined with Paxos all worries go away. NoSQL’s ability to work without a strict schema lets Nutanix respond to change very efficiently in terms of:

Failures – Nutanix Cassandra has self-healing of the ring in 3.5, where the metadata is evenly distributed. If the Cassandra process on a node is down for more than 5 minutes, Medusa will trigger the process of detaching that node from the Cassandra ring. Once the node is detached from the ring, we are ready to take another failure and still remain available.

Upgrades – The only constant is change! Nutanix is rapidly adding features, and our customers can’t afford downtime. Just a couple of days ago I read about a company adding dedupe to their product line whose upgrade needed planned downtime. NoSQL allowed Nutanix to add SHA-1 hashes to the metadata and carry on to provide inline dedupe without downtime.

Scaling – Nutanix can scale compute and storage at different rates with a variety of node types. The process is the same for all of them: hit the Expand Cluster button, enter three IPs, and add the compute node to vCenter. You also have the ability to automate the whole thing! Keep in mind this process is the same for ESXi, Hyper-V and KVM.
Expand Cluster
Scaling is the ability to respond to business change.

Aug 21

One Backup To Rule Them All – vRanger 6.0

Quest Software today announced vRanger 6.0, giving the ability to back up and restore physical Windows servers along with virtual ones. While the product isn’t GA yet, it’s good to see one piece of software for both worlds.
[Read more…]

Jun 12

Nutanix: Drop It Like It’s Hot

Nutanix is an all-in-one building block for virtualization. It allows you to virtualize your workloads without requiring a SAN. This approach brings many benefits, such as buying what you need when you need it and a reduction in complexity around architecture and operations. I see Nutanix as a perfect fit for VDI and cloud workloads. Where there is uncertainty in the workload and large scale is needed, Nutanix can be a great fit.

Below is how their storage works inside their 2U building blocks, which contain 4 separate nodes. The name of the operating system Nutanix runs is HOT (Heat-Optimized Tiering). The controller VM is the magic sauce of the operation: all the I/O flows through it. Data is written to the Fusion-io card and then serialized; when the data goes cold, it is laid out to disk in a nice clean format.

The SATA SSD is for ESXi, the controller VM and VM swap; nothing else gets to live there.
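As a purely conceptual illustration (this is not Nutanix code, and the threshold is made up), the heat-optimized tiering idea boils down to something like this: new writes land on the fast tier, and extents that go cold get drained down to spinning disk.

```python
# Conceptual illustration of heat-optimized tiering, not actual Nutanix code.
# New writes land on the fast tier; extents that haven't been touched for a
# while are drained down to the slow tier (sequentially, in the real system).
import time

COLD_AFTER_SECONDS = 3600  # made-up threshold for illustration

fast_tier = {}  # extent_id -> (data, last_access_time)
slow_tier = {}  # extent_id -> data


def write(extent_id, data):
    fast_tier[extent_id] = (data, time.time())


def curate():
    """Move extents that have gone cold from the fast tier to the slow tier."""
    now = time.time()
    for extent_id, (data, last_access) in list(fast_tier.items()):
        if now - last_access > COLD_AFTER_SECONDS:
            slow_tier[extent_id] = data
            del fast_tier[extent_id]
```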

[Read more…]

May 23

So Your Boss Thinks You’re Stupid – Converged Infrastructure

From Wikipedia, the free encyclopedia

“Converged infrastructure packages multiple information technology (IT) components into a single, optimized computing solution. Components of a converged infrastructure solution include servers, data storage devices, networking equipment and software for IT infrastructure management, automation and orchestration.”

I am not a converged infrastructure hater; I agree with the value proposition that converged infrastructure brings. Vblock from VCE, FlexPod from the NetApp & Cisco reference architecture, HP with Cloud Matrix. The aforementioned converged infrastructure solutions are marketing machines. The big players in this space have strong relationships with their current customer base and are using converged infrastructure as a one-stop shopping menu for their gear.
[Read more…]

Apr 29

Using Pure Flash or Cache for VDI

Over the last couple of days at Storage Field Day the topic of Flash vs Cache has come up multiple times. Flash vs Cache is an interesting question for VDI: do you want to put your whole workload on flash, or use flash as a cache and balance the workload with traditional hard drives?

For the purpose of this article I am only listing the vendors that were at Storage Field Day.

Below is a list of vendors that are using pure flash for their storage arrays:

Pure Storage
Nimbus Data
Violin Memory
Kaminario

Below is a list of vendors that are using flash as a cache:

Nimble Storage
TinTri

The vendors offering an end-to-end flash solution are trying to bring down the cost of flash with techniques like deduplication, commodity hardware and building their own drives, and they will talk about power savings. The Flash as Cache camp talks about overall cheaper cost per GB, the need for cheap disk, and the fact that sequential IO is still better on spinning disk.

If you’re after a clear winner in Flash vs Cache, it’s just not that simple. The feature sets of the different vendors vary quite a lot and have different value propositions. I think it’s important to break down what you need for a VDI solution and make your decision based on that.

Replication – You need the ability to get user data and golden desktop images offsite and protected. This doesn’t have to be on fast disk at all.

Need for Speed – Your replicas and linked clones need to be fast. Today’s end users are getting SSDs in their laptops, so the days of comparing VDI to people’s 5-year-old computers are coming to a close. Your virtual desktop needs to deliver the best performance, consistently.

User Data – Profile data, user documents, shortcuts and other user errata. This doesn’t need to be on fast disk unless you’re making use of redirection. If you’re copying data onto the desktop from a repository, you don’t want this to be the bottleneck.

The Trash – Page files, swap files and temp files. They take up lots of space, so you either need lots of disk or a way to dedupe the data.

Applications – An array providing an SMB/CIFS share can go a long way toward distributing your applications to the desktops. This data/IO will land on the linked clones for the most part, but an active non-persistent environment can put a heavy load on your distribution method of choice.

Over the three days at Storage Field Day I came really close to changing my stance on which makes the better option. Both Pure Storage and Nimbus have some good products, but I still think you need disk. If you were only going to go with one array vendor for VDI, I would have to go with the Flash as a Cache option. Having only one array vendor in your overall solution can go a long way for troubleshooting and managing your environment.

User data is going to continue to grow, and I believe more unstructured, hard-to-dedupe data will be a part of that makeup. Also, lots of data will sit at rest and never be touched after it’s created; I believe this lends itself well to a flash-as-cache scenario. Having disk in the system also helps with replication if you want to use the standby array for other things during the day: the replicated data can sit on the disk while other systems use the flash.

All of the full-SSD vendors have their own unique value propositions, like Nimbus with their ultra-low-cost drives and full feature set, and Pure with their no-virtual-machine-is-ever-unaligned-again and upfront dedupe features, but I still think you need the spinning rust.

Apr 17

Storage Field Day #1 – Tech Field Day Event – YES & YES

I am lucky to have been invited back to another Tech Field Day event, Storage Field Day #1. When I was trying to get time off work, either by way of holiday time or by getting work to cover the days, my boss asked me what the company would get out of it. It was kind of a deer-in-the-headlights question because of my love of technology; it was like being offered an iPad 3: the answer is yes right away. Storage being a very important part of the overall IT infrastructure, I knew I wanted to go. Below is my official response.

This three-day event will showcase the latest in storage architecture, provide answers for selecting the right storage based on business requirements, and offer a chance to pick the brains of top independent experts. Running storage from companies like EMC and NetApp is great, but these industry juggernauts are usually not able to adjust to change as quickly as their start-up brethren. Newer companies and start-ups can provide great insight into the future direction of the storage market.

The Solid State Storage Symposium on the first day of the event will help address the question of whether we should use solid-state storage as a cache or as a high-end tier of storage with our current vendor in the data centre. It will also give insight into an emerging field that is littered with new companies all stating they’re the best thing since sliced bread. Not all things are created equal, and therefore many advantages and drawbacks need to be considered before implementation.

For me, Day 2 and Day 3 of the event are about seeing what can help me drive virtualization at our company and speed up deployment of our VDI environment. Users want a reliable system; they don’t care whose product is running on the backend, but they want the same performance day in and day out, if not faster. What techniques can we take away to improve our own time to market with a solution or reduce our risk footprint? I hope to find ways to augment our current environment without breaking the bank.

Storage Field Day will also provide networking with industry experts and veterans, giving a chance to see what other people are doing in their respective industries: what they’re having success with and what to avoid. Whitepapers are great but seldom offer the one point that will make or break a solution.

This will be a great learning event. Check out the live stream at http://techfieldday.com/2012/sfd1/

Other People at the Event you should follow:

Delegate
▪Howard Marks www.deepstorage.net http://twitter.com/DeepStorageNet
▪Fabio Rapposelli http://juku.it/en http://twitter.com/fabiorapposelli
▪Chris M Evans http://thestoragearchitect.com http://twitter.com/chrismevans
▪Ray Lucchesi http://www.RayOnStorage.com http://twitter.com/RayLucchesi
▪Scott D. Lowe Techrepublic, virtualizationadmin.com http://twitter.com/Otherscottlowe
▪Hans De Leenheer http://hansdeleenheer.blogspot.com http://twitter.com/HansDeLeenheer
▪Derek Schauland techhelp.cybercreations.net http://twitter.com/webjunkie
▪Robin Harris http://storagemojo.com/ http://twitter.com/StorageMojo
▪Nigel Poulton http://infosmackpodcasts.com http://twitter.com/nigelpoulton
▪Robert Novak http://rsts11.wordpress.com http://twitter.com/gallifreyan
▪Matt Vogt http://blog.mattvogt.net http://twitter.com/mattvogt
▪Arjan Timmerman http://www.vdicloud.nl http://twitter.com/Arjantim

Feb 26

Pure Storage’s Impact on VDI – Tech Field Day

The second day at Virtualization Field Day 2 started at Pure Storage. With a “Psycho” donut, coffee and Red Bull in my stomach I was more than ready to see, listen and focus on their message.

Pure Storage’s main message is that they want to replace traditional storage arrays with an all-flash array. Pure Storage is able to deliver roughly the same cost per GB as traditional HDD-based arrays by using:

• Commodity based MLC flash
• Inline Dedupe
• Inline Compression
• Thin provisioning

Pure Storage is able to deliver over 100,000 IOPS per controller pair. If you want to add more capacity you can add up to 4 trays of disks; the trays are connected via 6 Gb/s SAS with everything dual-pathed. If you need to add performance, you have to add controllers. The controllers are connected via 40 Gb/s InfiniBand and you can connect up to 8 of them. When the controllers are connected they’re clustered and provide access across all ports on all controllers at any time. Each controller has 24 CPU cores and 48 GB of working DRAM. Interestingly, the NVRAM is stored on each controller in a redundant pair, so the controllers can go away and you would still be able to recover.

Can you turn Dedupe off?

Nope. I was thinking that if you could turn dedupe off you could maybe increase performance. Some of the other all-flash arrays offer high IOPS, but they’re also not trying to replace my spinning hard drives. Keep in mind, their goal is to replace my 15K & 10K drives in the datacenter, not the massive SATA drives. I tip my hat to Pure Storage, because dedupe has been the failing of more than one all-SSD array vendor. They might not hit the 200,000 IOPS that some of their competitors are tossing down on RFP papers, but they do have dedupe fully working.

Dedupe and compression are necessary for their architecture to work. They:

• Keep more of the data in DRAM & NVRAM
• Help with the longevity of the drives, since less data is written and moved around on the SSDs
• Reduce the data flowing over the interconnects, which could otherwise become the bottleneck

Impact for VDI
Having everything on SSD makes the architecture design pretty easy. While Pure Storage is not on the VMware HCL, they are close to dotting the i’s and crossing the t’s. When all is said and done, they will also have support for vStorage APIs for Array Integration (VAAI) – Atomic Test & Set (ATS). I would still create separate volumes to tier my data, to be future-proof, but at least the operational impact will be non-existent if someone else makes a mistake creating a pool.

Having everything on SSD also allows you to set the max on View Composer without worry. Check this article out for more information.

I would think you would have to redo your network design to take full advantage. Streaming your ThinApp packages could now take advantage of the fast IO on both the repository and the desktops, so the 2-3 Gb/s you gave your blade interconnects for LAN access might become the bottleneck. I also see a huge upside when you go to use Project Horizon: your application repository will definitely not be the bottleneck.

It’s obvious, but VDI was meant to be a green technology; there are lots of power and cooling costs to be saved in the datacenter.

Purity, the operating system behind Pure Storage, is what gives the array its secret sauce. It’s worth pointing out that it fixes disk alignment issues. This is particularly interesting since all linked clones are misaligned. Good practice says to refresh or recompose often, but at least this would fix the issue instead of applying a band-aid. I thought the misaligned linked clone issue was fixed with View 5, but I don’t see it in the release notes.
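For context on what misaligned means here: a classic MBR-era guest starts its first partition at sector 63, which doesn’t line up with a 4 KB (or larger) backend block, so many guest I/Os straddle two backend blocks. A quick sketch of the arithmetic:

```python
# Quick illustration of partition alignment. Sector 63 (the old XP/2003-era
# default) is 63 * 512 = 32256 bytes, which is not a multiple of 4096, so
# guest I/Os straddle backend blocks. Modern layouts start at sector 2048.
SECTOR_BYTES = 512


def is_aligned(start_sector, backend_block_bytes=4096):
    return (start_sector * SECTOR_BYTES) % backend_block_bytes == 0


print(is_aligned(63))    # False: classic misaligned layout
print(is_aligned(2048))  # True: 1 MiB-aligned layout
```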

The bad news is they still don’t have replication, so you will have to figure out how to get your golden images and user profile information to your DR site. Replication is on the near-term roadmap, but this also feeds my next point: most people are already going to have lots and lots of traditional storage arrays with replication. If I am a customer with replication on my old arrays, why not buy one of the claimed-faster SSD arrays that offer more speed and do the tiering myself? Customers will have to make a cost vs. operational-impact trade-off.

It’s also fair to mention that Pure Storage is not GA yet. This young company looks to have some smart people at the helm, and it will be interesting to revisit them in the next couple of months. I will try to follow them on their journey and report back when Pure Storage + VDI hits the news.

I will post their video once the Tech Field Day crew has it ready to go.

Other thoughts around Pure Storage:

Pure Storage Tackles Storage Shenanigans – Wahl Network

Please feel free to comment.

Disclaimer
My travel, accommodations, and meals were paid for by the Tech Field Day sponsors. Often there is swag involved as well. Like me, all TFD delegates are independents. We tweet, write, and say whatever we wish at all times, including during the sessions.

Feb 24

Xangati has Huge Announcement at Tech Field Day: User Based Performance Profiling

I told people tonight that I wasn’t going to write a blog post till later but I clearly can’t sleep so here it goes.

The Xangati presentation was spot on today at Tech Field Day. From the start to the end of their presentation they came to play, with appropriate content and information that was timely for the virtualization and End User Computing (EUC) industry. I will get right into the meat of their content; they did some great marketing, but you’re here at this blog because you want to better your VDI environment. Before getting started I also feel it’s necessary to make it known that I am a current Xangati customer.

So the biggest news of the day announced by Xangati: User Based Performance Profiling. The current best practice for implementing a VDI environment is to use a non-persistent desktop architecture. The basis of non-persistent desktops is splitting the user persona and applications from the desktop. Non-persistent desktops allow Disaster Recovery (DR) and availability by not having to worry about OS problems: if there is an issue, just grab a different desktop. Who cares about the desktop? Next time you log in with a non-persistent desktop you get a clean image, no matter what the heck you did in your previous session.

Doesn’t that sound great? The catch is that even if you do pull a non-persistent desktop architecture off, there hasn’t been a proper way to monitor and trend what the user is doing after the fact. Monday you are on VDI and your desktop is “desktop-23”. You log off, and Tuesday morning you grab “desktop-06”. The same thing goes on and on and on. There hasn’t been a way to tie a user’s activities to performance until recently. While User Based Performance Profiling is still a future feature, it’s close: the code is done and it’s being run in beta.

Current things I love about the product:

    The ability to monitor PCoIP\HDX metrics from the Client to the VM
    Monitors ready time, storage latency, IO rates, network latency, memory spikes and CPU spikes
    Insight into the applications being run on the desktop via WMI
    All metrics are trended and you get alerts based on deviations from “normal” activity
    Able to create views or groups of applications that you want to specifically track
    vDS integration via NetFlow. No need for agent VMs on your hosts (vSphere 5 only) Video
    Drill down features to get to the root problem

Things that could be better:

    Active Directory integration for user accounts is lacking
    Adding new desktops to the dashboards for monitoring is a manual process

Luckily, the two things that could be better are areas where Xangati has been listening to their customers. Both pieces should be covered in the next release.

The product has a great framework in place for also monitoring Unified Communications alongside VDI. I can see Xangati being bought soon by one of the big vendors. Agent-less monitoring that is vendor agnostic: powerful.

Great Overview of the Xangati presentation: Musings of Rodos

Thoughts from the Wahl Network: A Combination Of Bacon, Star Wars, and Performance Monitoring

More Information about Xangati can be found here.

Feb 04

Going to Virtualization Field Day 2 – Silicon Valley

I am very excited and honoured to be a delegate for the next Tech Field Day. Virtualization Field Day 2 is running from February 22 – 24, 2012 in Silicon Valley. I’ve always thought that the Tech Field Days have been great whether you were a delegate or watching the streaming content. While all my expenses are paid for getting to the event and during the event, I am under no obligation to blog about anything I see. Will I blog about the presenting sponsors? Most likely, because I like technology and sharing what I know. I am sure the content will be as great as at past events.

Two of the presenters have already been mentioned on Twitter by @TechFieldDay. With one of those presenters I have a fair bit of experience with their product, and I hope they will give us a roadmap of their upcoming releases. The other presenter I really don’t know much about at all, so it will be interesting to see what they have to offer.

Once the official presenter list is posted, make sure you book some time off to watch it live via the live stream. You’re always welcome to ask questions, or if you get tied up at work you can download the content later; it’s usually available about a week or two after the event. If you have something you want me to ask prior to going, let me know and I’ll do my best to ask the question(s) on your behalf.

A specific vendor you want to see? Nominate one here.
Want to become a delegate? Learn more here.