Dec 11

Nimble Wants To Be King of the CASL

Nimble Storage is an iSCSI hybrid storage array, but I think you could almost call it an appliance. They have taken a page out of the EqualLogic playbook with ease-of-use management and all features included. The company, founded and run by storage industry veterans from NetApp and Data Domain, brings the best of those companies into the array as well: optimized writes with their file system, CASL (Cache Accelerated Sequential Layout), and instant high-performance compression from their Data Domain background. Nimble Storage is pegging their CS storage line as storage, backup and DR in one box. Below is some information that was presented during the Saskatoon VMUG.

CASL

Being a new company, they get the benefit of having no legacy issues to deal with, which has allowed them to design around today's multi-core CPUs. The multi-core CPUs and the natively supported variable block length allow them to achieve compression without paying a performance penalty. I am told the current controllers on the CS line have two CPU sockets but only one is currently populated, so it looks like they have planned for some future growth.

The flow of data through the storage array:

    Data comes into NVRAM and is acknowledged (72-hour battery backed)

    Data moves to 12 GB of DRAM, where CRC checksums ensure application integrity. The data is LZ compressed 2-4x

    CASL writes sequential stripes to SSD and SATA based on data characteristics, and ensures no white space is wasted in the stripe

    SSDs are 100% cache for reads

A copy of all active or “hot” data (and even “semi-active” data) is held in the large flash layer, enabling very fast random read performance. CASL’s index tracks hot data blocks in real time, adapting to hot spots and application workload changes as they happen. The flash works as a JBOD since all writes are already committed to disk, so the SSDs need no RAID overhead. This is a great feature since you get to use all of the flash capacity, lowering the total cost.
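To make the write path concrete, here is a toy sketch of the idea in Python. This is my own illustration of a log-structured, compress-then-stripe approach; the stripe size and function names are made up, and it is in no way Nimble's actual code:

    import zlib

    STRIPE_SIZE = 4 * 1024 * 1024   # assumed stripe width, purely illustrative

    nvram_buffer = []     # stands in for the 72-hour battery-backed NVRAM
    stripe = bytearray()  # stripe currently being assembled in DRAM

    def write_block(data: bytes) -> str:
        """Step 1: land the write in NVRAM and acknowledge immediately."""
        nvram_buffer.append(data)
        return "ACK"

    def drain_nvram(write_stripe_to_disk):
        """Steps 2-4: checksum, compress, and pack full sequential stripes."""
        global stripe
        while nvram_buffer:
            block = nvram_buffer.pop(0)
            crc = zlib.crc32(block).to_bytes(4, "big")   # integrity check
            record = crc + zlib.compress(block)          # LZ-style 2-4x compression
            stripe.extend(record)
            if len(stripe) >= STRIPE_SIZE:               # only full stripes hit disk,
                write_stripe_to_disk(bytes(stripe))      # so writes stay sequential
                stripe = bytearray()

The point of the sketch is that random incoming writes get turned into large sequential stripes before they ever touch the SATA layer, which is what lets cheap spinning disk keep up.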

Snapshots

CASL snapshots store compressed, incremental snapshot data on high-capacity disk. Nimble is positioning the arrays to also remove the need for backup, since you can keep 60-90 days of snapshots on the array without taking up much space. As an old EqualLogic customer I know this was a big problem in the past; I would say NetApp is good at this today. They also want you to pair it with another CS array to provide compressed, deduped replication. The replication is asynchronous only for right now, but they noted you can replicate as often as you want to a second system.
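To see why 60-90 days of snapshots can stay small, here is a toy model in Python (my own sketch, not CASL internals): each snapshot records only the blocks that changed since the previous one, so 90 daily snapshots of a volume that churns 1% a day cost roughly one extra full copy in total, and less once compressed.

    volume = {}      # block number -> current data
    snapshots = []   # each entry holds only the blocks changed since the last one
    dirty = set()    # blocks written since the most recent snapshot

    def write(block_no, data):
        volume[block_no] = data
        dirty.add(block_no)

    def take_snapshot():
        # An incremental snapshot: just the delta, not a full copy of the volume.
        snapshots.append({b: volume[b] for b in dirty})
        dirty.clear()

    def restore(up_to):
        # Rebuild a point in time by replaying the deltas in order.
        state = {}
        for delta in snapshots[: up_to + 1]:
            state.update(delta)
        return state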

Network

You can have six 1 Gb/s connections (four for workload, one for target discovery and one for management). You also have the option of two 1 Gb/s connections for target discovery and management plus two 10 Gb/s connections. Nimble arrays come with dual (redundant), hot-swappable controllers.

Some Features I liked:

Performance Policies:
The ability to easily match a volume's settings to your application to optimize the experience. For example, the Exchange log policy does not cache any of the data, since sequential log traffic gains little from read cache.

VM-consistent instant snapshots, achieved by coordinating with Microsoft VSS and the VMware APIs

vCenter plug-in that allows for zero-copy clones

Nimble mentioned only needing 12 disks to achieve 20,000 Exchange mailboxes

Nimble works with VMware SRM

Some things to think about:

What happens when you just want to add capacity? I have gotten into that pickle with other vendors.

Lots of IP is invested in the file system for speed. What happens in 5+ years when arrays could be all SSD?

It would be great if they could cluster nodes together and use the DRAM as one single shared cache.

VDI numbers per appliance seemed low; see the blog post linked below for information directly related to this.

Thanks to Nimble Storage for sponsoring the Saskatoon VMUG. I look forward to seeing how your company grows in 2012.

Nimble Storage VDI Benchmarking by Dan Brinkmann: http://blog.danbrinkmann.com/2012/04/10/nimble-storage-performance-testing/

Nov 30

Top 5 Performance Tips for Virtualizing Microsoft SQL

1. Spread the Wealth

Try to keep similar workloads spread out across your vSphere clusters. Mix and match high-compute programs with your high disk I/O programs; this way you have less of a chance of starving your resources. Inside your VM you should still split up database and transaction log files as you would in the physical world. Separating the random I/O patterns (data) from the sequential I/O patterns (logs) will help your storage system out.

2. Adjust the Belt Notches

You can achieve higher network throughput by increasing the transmit coalescing size. Changing the value of Net.vmxnetThroughputWeight from 0 to 128 will help throughput at the cost of increased latency. The setting can be found under Software settings, Advanced Settings, then Net. Transmit coalescing is available on the VMXNET2, VMXNET3 and E1000 adapters.
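If you'd rather script the change from the ESX console than click through the vSphere Client, it should be something like the following (I'm assuming the advanced-option path simply mirrors the setting name, so verify it on your build before relying on it):

    esxcfg-advcfg -s 128 /Net/vmxnetThroughputWeight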

3. If You've Got It, Flaunt It

Use the PVSCSI adapter. The PVSCSI adapter is more efficient because it allows for batching of I/O requests while the hypervisor is looking for more work. VMware has run tests and found a 6% increase in I/O throughput over the LSI adapter. I would also add separate PVSCSI controllers for your data volumes and log volumes (see the example layout below). The PVSCSI adapter used to come with a caveat that your workload should be over 2,000 IOPS or you might see an increase in latency, but with the release of vSphere 4.1 you can use the adapter under either small or large I/O workloads.
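For example, here is the kind of layout I'd use on a SQL VM (my own suggested arrangement, not an official recommendation), keeping the OS on the default adapter and giving data and logs their own PVSCSI controllers:

    SCSI 0:0   C:   OS and SQL binaries   LSI Logic SAS
    SCSI 1:0   D:   database data files   PVSCSI
    SCSI 2:0   E:   transaction logs      PVSCSI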

4. If It's Locked, It Must Be Good

Any time you can stop paging to disk is a good thing. You can prevent Windows from paging out the buffer pool to disk by locking the buffer pool memory (the Lock Pages in Memory privilege):

http://support.microsoft.com/kb/918483

If you are going to do this, make sure the balloon driver doesn't kick in and defeat everything you are trying to do. If you reserve all of the VM's RAM you shouldn't have any problems. You can confirm the lock took effect by checking the SQL Server error log for the "Using locked pages for buffer pool" message at startup.

5. Go Big or Go Home

Using large memory pages can reduce memory-management overhead on your ESX servers.

You can see how to get it going at http://blogs.technet.com/b/sql_server_isv/archive/2010/11/30/trace-flag-834-and-when-to-use-it.aspx

There are lots of gotchas, so read the article and make sure you have a newer processor with a hardware MMU (AMD RVI or Intel EPT).
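One detail worth calling out from the article: trace flag 834 only takes effect at service startup, so in SQL Server Configuration Manager add it to the SQL Server startup parameters and restart the service:

    -T834

It also requires the Lock Pages in Memory privilege from tip 4, so these two tips go together.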

The idea is to grow this into a top 10; I will add more points when time permits.

-Dwayne Lessner