Archives for February 2012

Feb
27

#VDI Tip 63: House Cleaning Prior to Building your ThinApp Packages

With the On-premise Horizon Application Manager about to be released, ThinApp will continue to play a huge part in your application management. Below are some quick wins before hitting the build button.

    Stop any services that are running from the installation – this makes sure you don’t have any files locked.
    Remove the Fonts folder if it’s not needed.
    Remove any extra language support. A prime example is the vSphere Client.
    ExcludePattern – in the package.ini file you should notice an ExcludePattern entry. List any file types that you know are not needed. This helps reduce files in hidden folders that you might miss.
    Example
    ExcludePattern=*.bak,*.msi
    Make sure you reboot the capture machine at least once and take a VM snapshot prior to the final capture.
    Make sure you didn’t package the application on a View desktop – the View Agent is a bad thing for the capture process.
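ThinApp’s actual exclude logic is internal to the build process, but the glob-style matching behind an ExcludePattern entry can be illustrated with a short Python sketch (the helper name and file list here are hypothetical, for illustration only):

```python
from fnmatch import fnmatch


def apply_exclude_pattern(files, exclude_pattern):
    """Filter out files matching any comma-separated glob pattern,
    mimicking a package.ini ExcludePattern=*.bak,*.msi entry."""
    patterns = [p.strip() for p in exclude_pattern.split(",") if p.strip()]
    return [
        f for f in files
        if not any(fnmatch(f.lower(), p.lower()) for p in patterns)
    ]


files = ["app.exe", "config.bak", "setup.msi", "readme.txt"]
print(apply_exclude_pattern(files, "*.bak,*.msi"))  # → ['app.exe', 'readme.txt']
```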

Please comment and leave your own tips on this subject.

Feb
26

Pure Storage’s Impact on VDI – Tech Field Day

The second day at Virtualization Field Day 2 started at Pure Storage. With a “Psycho” donut, coffee and Red Bull in my stomach I was more than ready to see, listen and focus on their message.

Pure Storage’s main message is that they want to replace traditional storage arrays with an all-flash array. Pure Storage is able to deliver relatively the same cost per GB as traditional HDD-based arrays by using:

• Commodity based MLC flash
• Inline Dedupe
• Inline Compression
• Thin provisioning

Pure Storage is able to deliver over 100,000 IOPS per controller pair. If you want to add more capacity you can add up to 4 trays of disks. The trays of disks are connected via 6 Gb/s SAS, with everything dual-pathed. If you need to add performance, you have to add controllers. The controllers are connected via 40 Gb/s InfiniBand and you can connect up to 8 of them. When the controllers are connected they’re clustered, and all ports on all controllers can be used at any time. Each controller has 24 CPU cores and 48 GB of working DRAM. Interestingly, the NVRAM is stored on each controller in a redundant pair; the controllers can go away and you would still be able to recover.

Can you turn Dedupe off?

Nope. I was thinking that if you could turn dedupe off, you could maybe increase performance. Some other all-flash arrays offer higher IOPS, but they’re also not trying to replace my spinning hard drives. Keep in mind, their goal is to replace my 15K and 10K drives in the datacenter, not the massive SATA drives. I tip my hat to Pure Storage because dedupe has been the failing of more than one all-SSD array vendor. They might not get the 200,000 IOPS that some of their competitors are tossing down on the RFP papers, but they do have dedupe fully working.

Dedupe and compression are necessary for their architecture to work, because they:

• Keep more of the data in DRAM & NVRAM
• Help with the longevity of the drives – less data is written and moved around on the SSDs
• Reduce the data flowing on the interconnects, which could otherwise become the bottleneck

Impact for VDI
Having everything on SSD makes the architecture design pretty easy. While Pure Storage is not on the VMware HCL, they are close to dotting the i’s and crossing the t’s. When all is said and done, they will also have support for vStorage APIs for Array Integration (VAAI) – Atomic Test & Set (ATS). I would still create separate volumes to tier my data to be future proof, but at least the operational impact will be non-existent if someone else makes a mistake creating a pool.

Having everything on SSD also allows you to set the Max on the View Composer without worry. Check this article out for more information.

I would think you would have to redo your network design to take full advantage. Streaming your ThinApp packages could now take advantage of the fast IO on both the repository and the desktops. The 2-3 Gb/s you gave your blade interconnects for LAN access might be the bottleneck. I also see a huge upside when you go to use Project Horizon. Your application repository will definitely not be the bottleneck.

It’s obvious, but VDI was meant to be a green technology. There are lots of power and cooling costs to be saved in the datacenter.

Purity, the operating system behind Pure Storage, is what gives the array its secret sauce. It’s worth pointing out that it fixes disk alignment issues. This is particularly interesting since all linked clones are misaligned. Good practice says to refresh or recompose often, but at least this would fix the issue instead of applying a band-aid. I thought the misaligned linked clone issue was fixed with View 5, but I don’t see it in the release notes.

The bad news is they still don’t have replication, so you will have to figure out how to get your golden images and user profile information to your DR site. Replication is on the near-term roadmap, but this also leads to my next point. Most people already have lots and lots of traditional storage arrays with replication. If I am a customer with replication on my old arrays, why not buy one of the claimed faster SSD arrays that offer more speed and do the tiering myself? Customers will have to make a cost/operational-impact trade-off.

It’s also fair to mention that Pure Storage is not GA yet. This young company looks to have some smart people at the helm and it will be interesting to revisit them in the next couple of months. I will try to follow them on their journey and report back when Pure Storage + VDI hits the news.

I will post their video once the Tech Field Day crew has it ready to go.

Other thoughts around Pure Storage:

Pure Storage Tackles Storage Shenanigans – Wahl Network

Please feel free to comment.

Disclaimer
My travel, accommodations, and meals were paid for by the Tech Field Day sponsors. Often there is swag involved as well. Like me, all TFD delegates are independents. We tweet, write, and say whatever we wish at all times, including during the sessions.

Feb
24

Xangati has Huge Announcement at Tech Field Day: User Based Performance Profiling

I told people tonight that I wasn’t going to write a blog post till later but I clearly can’t sleep so here it goes.

The Xangati presentation was spot on today at Tech Field Day. From the start to the end of their presentation they came to play with appropriate content and information that was timely for the Virtualization and End User Computing (EUC) industry. I will get right into the meat of their content. They did some great marketing, but you’re here at this blog because you want to better your VDI environment. Before getting started, I also feel it’s necessary to make it known that I am a current Xangati customer.

So the biggest news of the day announced by Xangati: User Based Performance Profiling. The current best practice when implementing a VDI environment is to use a non-persistent desktop architecture. The basis of non-persistent desktops is to split the user persona and applications from the desktop. Non-persistent desktops allow for Disaster Recovery (DR) and availability by not having to worry about OS problems. If there is an issue, just grab a different desktop. Who cares about the desktop? Next time you log in with a non-persistent desktop you get a clean image, no matter what the heck you did before in your session.

Doesn’t that sound great? Even if you are able to pull a non-persistent desktop architecture off, there hasn’t been a proper way to monitor and trend what the user is doing after the fact. Monday you are on VDI and your desktop is “desktop-23”. You log off and next Tuesday morning you grab “desktop-06”. The same thing goes on and on and on. There hasn’t been a way to tie a user’s activities to performance until recently. While User Based Performance Profiling is still a future feature, it’s close. The code is done and it’s running in beta.

Current things I love about the product:

    The ability to monitor PCoIP\HDX metrics from the client to the VM
    Monitors Ready Time, Storage Latency, IO rates, Network Latency, Memory spikes, and CPU spikes
    Insight into the applications being run on the desktop via WMI
    All metrics are trended and you get alerts based on deviation from “normal” activity
    Able to create views or groups of applications that you want to specifically track
    vDS integration via NetFlow. No need for agent VMs on your hosts (vSphere 5 only) Video
    Drill-down features to get to the root problem

Things that could be better:

    Active Directory integration for user accounts is lacking
    Adding new desktops to the monitoring dashboards is a manual process

Luckily, the two things that could be better are areas where Xangati has been listening to their customers. Both pieces should be covered in the next release.

The product has a great framework in place for also monitoring Unified Communications alongside VDI. I can see Xangati being bought soon by one of the big vendors. Agent-less monitoring that is vendor agnostic – powerful.

Great Overview of the Xangati presentation: Musings of Rodos

Thoughts from the Wahl Network: A Combination Of Bacon, Star Wars, and Performance Monitoring

More Information about Xangati can be found here.

Feb
22

#VDI Tip 62: Stress Test Your Environment

“Paging Doctor Dwayne, your stress test patient has arrived”.

For many years I heard doctors being summoned on the internal paging system to baseline their patients, to find out where issues may be lurking. In a previous VDI tip I mentioned that you need to properly stress test your storage system. This still needs to happen to get a proper baseline and will help you with troubleshooting down the road. But it does not exclude you from doing a proper environmental stress test. As you scale, you need to make sure all the parts are working together to form one solid application called “VDI”.

While talking to Mark Plettenberg from LoginVSI, he mentioned a customer that went into production with only 1 CPU running on their storage array. A POC wouldn’t have helped out in this situation. When the numbers hit the 400-user mark, performance issues start happening and VDI looks bad to your end users (your customers). A stress test can also help find other configuration issues, like your anti-virus scanning more than it should.

If you’re a VMware View shop you can use VMware View Planner or LoginVSI. LoginVSI is vendor neutral, so lots of consulting companies lean towards it so they can test both View and XenDesktop. Both are able to launch third-party applications to get a real test of your environment.

In the VDI game consistency and reliability go a long way. Don’t get caught off guard. Stress test your environment.

Feb
20

#VDI Tip 61: Be Careful when using Client-side Caching with Thin Clients

Client-side caching provides huge bandwidth savings with VMware View 5 – it can save 30-40 percent of the overall bandwidth. The default size is set to 250 MB and can go up to 300 MB. With older thin clients that have less than 1 GB of RAM you may run into issues. If the client-side cache is set too high, you may start experiencing dropped sessions.

Thanks to Chuck Hirstius, the Master of PCoIP at VMware, for making me aware of this issue. Give him a follow at @rexremus.

Feb
16

#VDI Tip 60: PCoIP Wireless Monitors for Healthcare

Do you want a wireless PCoIP Monitor with a super long battery life and in a small form factor?

A Cisco-Linksys WET11 Wireless Ethernet Bridge, a Samsung NC240, an Ergotron cart, plus a small APC UPS on the bottom of the cart forms a happy solution for medical staff. The lack of sound also makes it great for operating rooms and patient areas. Any wireless bridge will do; I have even used Apple AirPorts before. The only requirement is the Samsung NC240, because of the two network ports that come with the unit. The whole solution will run under 100 watts.

Feb
14

#VDI Tip 59: Speed Up Provisioning of Linked Clones

The picture below was taken from the View Best Practices white paper, but what does it all mean?

pae-SVICreationRampFactor and pae-VCAIIRefitRampFactory are used to set the provisioning rates within the VMware View Composer database. I suspect VMware set them low to accommodate users that install the software on old test equipment that wouldn’t be able to handle the load. You wouldn’t want to lose a proof of concept now would you? :-)

While pae-VCAIIRefitRampFactory still remains a mystery to me, I was able to track down some information on pae-SVICreationRampFactor. The maximum value for pae-SVICreationRampFactor is 50; if the value is set larger than 50, 50 will be used. Adjust this number cautiously and don’t ramp it up too high right out of the box. For my testing I left it at the recommended values. When I was composing 10 machines with the default values vs. the new values I saw a decrease of 2 minutes. If you look at the chart above, 10 machines comes close to where the default settings had a spike. While 2 minutes with 10 machines is not that big of a savings, spinning up a DR site would benefit a lot from this setting. It’s also interesting that pae-SVICreationRampFactor speeds up the deletion of machines as well.
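The 50-value cap described above amounts to a simple clamp. A tiny Python sketch of that behavior (the helper is made up for illustration; View applies the cap internally):

```python
# Maximum documented value for pae-SVICreationRampFactor;
# anything larger is silently treated as 50.
MAX_CREATION_RAMP_FACTOR = 50


def effective_ramp_factor(configured_value):
    """Return the ramp factor View Composer would actually use."""
    return min(configured_value, MAX_CREATION_RAMP_FACTOR)


print(effective_ramp_factor(80))  # capped → 50
print(effective_ramp_factor(12))  # within range → 12
```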

If you’ve ever had to do maintenance work on a weekend you will appreciate this setting. I will continue to dig for pae-VCAIIRefitRampFactory information, but it seems to be an uphill battle.

For information on how to back up and connect to the ADAM database click here.

You can change the settings by going here:

Thanks goes out to my TAM Shawn Bertin for helping me out late at night to find out this information.

Feb
12

#VDI Tip 58: Backup All Of Your Components At The Same Time

Backing up all of the View components at the same time might seem like an obvious thing to do, but it can be harder than you think. Failing to get close to the same backup window for the View Composer DB (SQL) as for the ADAM database can leave you in a bit of a mess. Records that don’t match up on a restore will have you deleting virtual machines or records from other parts of the ADAM database to fix inconsistencies.

Inside the View Administrator console there isn’t a set start time you can select. The time starts at midnight and you can only adjust the frequency – hourly, weekly, and so on. You can just adjust it to the hourly setting and that should get you close to what your DBAs are doing.
I don’t like running backups all day if there isn’t a reason. To get an exact start time you have to head off into the ADAM database. Please read this post for help backing up and connecting to the ADAM database before making any changes.

In the ADAM database, expand OU=Properties, select OU=Server, select CN= in the right pane, and choose Action > Properties to modify pae-LDAPBUTime. The pae-LDAPBUTime property is the number of minutes after pae-LDAPBUUnits at which to perform a backup. Since we left the default of midnight, everything will be based off that time. Therefore, if we put in 60, the next backup should fire at 1:00 am.
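As a sanity check on that arithmetic, here is a small Python sketch of how a minutes-after-midnight offset maps to a wall-clock fire time (the helper name and example date are made up; the offset semantics are as described above):

```python
from datetime import datetime, timedelta


def backup_fire_time(minutes_after_midnight, base=datetime(2012, 2, 12, 0, 0)):
    """Given a pae-LDAPBUTime-style offset in minutes from the midnight
    base time, return the clock time the next backup should fire."""
    return (base + timedelta(minutes=minutes_after_midnight)).strftime("%H:%M")


print(backup_fire_time(60))   # → 01:00
print(backup_fire_time(150))  # → 02:30
```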

ADAM VMware View  - Settings

IT WORKED!

Hope that helps. If you have a thought or questions drop me a line.

Feb
11

VMware ADAM Database – Backup & Connect

Working with the ADAM database in VMware View is kind of like going to the dentist. You never want to go there, your reasons for going are usually related to pain, but you’re glad when it’s all said and done. This post will form the groundwork for a series of tips & articles over the next couple of weeks. The article references all locations in terms of Windows 2008. If you’re running Windows 2003, god bless you.
Step 1 – Always Get a Current Backup
In the VMware View Administrator console you can create an on-demand backup under View Configuration -> Servers.
Once the backup is complete, go to the server you ran the backup on and move the .LDF and .SVI files to a “safe location”.

The backup location is: C:\ProgramData\VMware\VDM\backups
.LDF = ADAM database
.SVI = View Composer Database
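If you want to script the “move to a safe location” step, something like the following Python helper could do it. This is an illustrative sketch, not a VMware tool – the function name is made up, and you should adjust the paths for your environment:

```python
import glob
import os
import shutil

# Default View backup location, per the path above.
BACKUP_DIR = r"C:\ProgramData\VMware\VDM\backups"


def copy_latest_backups(backup_dir, safe_dir):
    """Copy the newest .LDF (ADAM) and .SVI (View Composer) backup files
    from backup_dir to safe_dir; returns the copied file names."""
    os.makedirs(safe_dir, exist_ok=True)
    copied = []
    for pattern in ("*.LDF", "*.SVI"):
        matches = glob.glob(os.path.join(backup_dir, pattern))
        if matches:
            newest = max(matches, key=os.path.getmtime)
            shutil.copy2(newest, safe_dir)  # copy2 preserves timestamps
            copied.append(os.path.basename(newest))
    return copied
```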

Step 2 – Connecting to the ADAM database

To connect to the View ADAM database:
1. Log in to one of the View Connection Servers.
2. Click Start > Administrative Tools > ADSI Edit.
3. In the console window, right-click ADSI Edit and click Connect to.
4. In the Name field type: View ADAM Database
5. Select Select or type a Distinguished Name or Naming Context.
6. In the field below, type dc=vdi,dc=vmware,dc=int
7. Select Select or type a domain or server.
8. In the field below, type localhost
9. Click OK.
10. Click View ADAM Database [localhost] to expand.
11. Click DC=vdi,dc=vmware,dc=int to expand

Feb
10

NetApp & McAfee Team up on Security. Great News for VDI.

NetApp and McAfee are teaming up to deliver enhanced security for network-based storage. The product, VirusScan On Board for NetApp, launched in January 2012 and is a fully integrated solution. The only catch that I can find is you do need to be running ONTAP 8.1, which is still only available as a release candidate at the time of writing this article.

The management of VirusScan On Board is done through the NetApp management console. I see this being a problem for most customers – storage and security usually sit miles apart in the organizational flow chart. A shining light, however, is having your security scale alongside your storage. Set and forget is always good in my books.

The timing of finding this product information couldn’t have come at a better time. VMware just released a paper on Antivirus Practices for VMware View 5. In the VMware paper it talks about not scanning the user persona file shares and setting different policies for ThinApp applications. VirusScan On Board has the flexibility to set different on-access and on-demand policies. You would be able to set on-demand scans for the user persona information and inbound-only on-access scans for your ThinApp repository.

For more information read the article from McAfee.