Archives for March 2013

Mar
29

My View of Hadoop Distributions from the Passenger Seat

This blog post comes from the passenger seat of my Yukon as I head to the lake. It’s a brief set of musings on Hadoop, each short enough to fit into 140 characters.

Hadoop is here and making its way into the Enterprise. According to IDC, data will grow 50X over the next decade, and that 50X doesn’t even include all the data that will flow through your business. This data represents competitive advantage; you just need the ability to collect and analyze it.

Hadoop, known mostly for analyzing large datasets in batch processes, is rapidly changing. “Just In Time” processing is now a reality, SQL and NoSQL are getting mashed together, and data stored in HDFS no longer has to be moved out to be analyzed.
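To make that last point concrete, here is a minimal sketch of querying HDFS data in place, assuming a reachable HiveServer2 endpoint and the PyHive library; the host, table name, and HDFS path are hypothetical stand-ins for your own environment.

```python
# A minimal sketch, assuming a HiveServer2 endpoint and the PyHive
# library; host, table name, and HDFS path are hypothetical.
from pyhive import hive

conn = hive.connect(host="hadoop-master", port=10000)
cursor = conn.cursor()

# An EXTERNAL table just points Hive at files already sitting in HDFS,
# so the data is queried where it lives instead of being exported first.
cursor.execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS weblogs (
        ts STRING, url STRING, status INT
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t'
    LOCATION '/data/weblogs'
""")

# Plain SQL over data that never left HDFS.
cursor.execute("SELECT status, COUNT(*) FROM weblogs GROUP BY status")
for row in cursor.fetchall():
    print(row)
```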

Battle of the Distros

MapR – Taking the approach of releasing a more proprietary distribution of Hadoop. Fast out of the gate, they seem to be doing well. My fear is that they will get too far down one path and won’t be able to use the power of the community. They do have committers on their team, so that should help. They also have partnerships with Google and Amazon.

Cloudera – A mix between proprietary and open source. They have seen success, and it can be attributed to the tools they have built to help run and maintain their distribution. Lots of talk about Impala and its super-fast query performance against HDFS and HBase. Jeff Hammerbacher, previously of Facebook, gives them a lot of street cred.

Hortonworks – Taking the long-term approach, Hortonworks is 100% open source. They make their revenue off training and support services for their distribution. They have an Impala-like project called Stinger; the difference is that they are still using Hive, just speeding it up by orders of magnitude. I personally dig Hortonworks because they seem to have strong support around virtualizing Hadoop. I also like Hortonworks’ partnership with Microsoft, which is sure to help speed up SQL performance.

Intel – Seems to be focusing on security, making the best use of their CPUs and SSDs for compression and encryption. Personally, I don’t see how that gives them a leg up on the other distributions, as all of them could use Intel’s hardware. Intel seems to be going the OEM route, which is not surprising. I think their relationship with SAP will bode really well for them in the enterprise space.

Please leave your own thoughts; it’s a very interesting landscape.

Mar
28

Old or New, It’s a GP 4 U: NVIDIA VCA

Now that GPU virtualization is being supported at the hypervisor layer, users and vendors alike are rushing to use it and get new products out the door, Nutanix included. Like anything new in the Enterprise space, the wheels of change can sometimes be slow. Upgrading a 2,000+ VDI environment consisting of a hypervisor and your favorite flavor of VDI can take some time to sort out and push through the bowels of change management. On the vendor side of the house, the GRID K1 & GRID K2 cards are huge power-sucking beasts, so new hardware is going to be needed. If you’re thinking about putting these cards in a server that’s not listed, you might want to confirm with your NVIDIA rep that the server has enough airflow to cool the card.
[Read more…]

Mar
22

Horizon View Composer Appliance

View Composer gets stuck with a lot of the heavy lifting but is also the source of a lot of problems with View. Now that you can split off the View Composer service onto its own machine, why not turn it into your own appliance? I’ve always liked the idea of fast, all-in-one recovery of services. The SQL Server 2008 R2 Express database size limit was increased to 10GB, so it should be more than sufficient to hold your View Composer database.
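If you do build the appliance, it’s worth keeping an eye on that 10GB cap. Here is a minimal sketch, assuming the pyodbc library and a local SQLEXPRESS instance; the instance and database names are hypothetical.

```python
# A minimal sketch, assuming pyodbc and a local SQL Server 2008 R2
# Express instance; instance and database names are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=.\\SQLEXPRESS;"
    "DATABASE=ViewComposer;Trusted_Connection=yes"
)
cursor = conn.cursor()

# The 10GB Express cap applies to data files (type 0), not the log, and
# sys.database_files reports size in 8KB pages.
cursor.execute(
    "SELECT SUM(size) * 8 / 1024.0 FROM sys.database_files WHERE type = 0"
)
size_mb = cursor.fetchone()[0]
print("Composer DB data size: %.1f MB of 10,240 MB allowed" % size_mb)
```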
[Read more…]

Mar
09

NOS: The CIA of Storage

The Nutanix Operating System (NOS) is what helps form the Zen state of compute and storage that runs the Nutanix Distributed File System. NOS is radically different from any other solution on the market today because it does something no one else has figured out: it keeps the intelligence of the OS local to both the compute and the storage. The ability of NOS to make decisions on the same wire as the compute and storage is what gives Nutanix the ability to scale without limits and with linear performance.

Beautiful state when you are one with everything

Other companies are “trying” to use local flash, but I haven’t seen anything really compelling. All of the traditional legacy storage companies end up like the first 1:30 of the video below, with Hightower playing the part of a monolithic storage array trying to fit into today’s servers.

Compute along with local IO is a reality today, but it doesn’t make any sense if the intelligence of the system is sitting in the back seat. Telling a customer that they can only use PCIe for reads doesn’t seem like a compelling solution. It’s like letting the CIA do surveillance and then not letting them take action (writes). The other one I like is telling a customer to install an agent in all of their 10,000 VMs so they can use flash that lives over a wire. It leaves you asking, why bother?

NOS also makes the best use of your budget: commodity hardware. Commodity hardware wins on price and performance; it’s simple economies of scale. Companies that turn a profit on how well their data centers are run don’t use expensive, overpriced, specialized parts. Go ask Google and Facebook what is sitting in their servers; I bet you could easily go buy the same parts. The catch is having the software to scale it and to monitor it. Once you have these two things, commodity hardware allows you to focus on features and value instead of getting into a hardware arms race. Nutanix has had three major releases in the span of a year and a half, something that is unheard of in the industry.

While Nutanix parts have a very high MTBF (Mean Time Between Failures), NOS was built to expect failure. All the components are replaceable, and data is protected with the utmost importance. Check out my article here for more details, and see the quick math after the table below.

Item                      MTBF (hrs)
Memory                     1,100,000
Motherboard                  200,986
Fans                         236,703
HDD                        1,400,000
CPU                        2,100,000
PSU                          546,694
SSD                        1,200,000
Intel 910 PCIe SSD card    1,100,000
10Gb I/O module            3,784,399
Riser Card                36,495,482
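Those numbers look huge until you put them together. As a back-of-the-envelope sketch, treat a node as its components in series, so any one failure takes the node down and the failure rates simply add; this assumes just one of each part, which understates real counts like multiple DIMMs, HDDs, and fans.

```python
# Back-of-the-envelope series model: the node failure rate is the sum
# of the component failure rates (1/MTBF). Assumes one of each part,
# which understates real counts (multiple DIMMs, HDDs, fans per node).
mtbf_hours = {
    "Memory": 1100000,
    "Motherboard": 200986,
    "Fans": 236703,
    "HDD": 1400000,
    "CPU": 2100000,
    "PSU": 546694,
    "SSD": 1200000,
    "Intel 910 PCIe SSD card": 1100000,
    "10Gb I/O module": 3784399,
    "Riser Card": 36495482,
}

node_rate = sum(1.0 / m for m in mtbf_hours.values())  # failures per hour
node_mtbf = 1.0 / node_rate

print("Node MTBF: ~%.0f hours (~%.1f years)" % (node_mtbf, node_mtbf / 8760))
# A 100-node cluster sees failures 100x as often.
print("100-node cluster: one failure every ~%.0f hours" % (node_mtbf / 100))
```

Even with seven-plus years between failures on a single node, a hundred-node cluster is looking at a failure roughly every month, which is exactly why NOS treats failure as routine.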

As with any good business, the ability to react and adapt quickly gives the best chance of survival in this dog-eat-dog world.

Mar
07

Horizon View: PCoIP Transport Header

There isn’t a good way to sex up this title! The new Horizon View 5.2 documentation finally calls out a GPO setting that has been there since 5.1: Configure the PCoIP transport header. It will be interesting to see what features vendors come out with to take advantage of this new header. My money is on Riverbed and F5; I guess we’ll wait and see.
[Read more…]

Mar
01

Horizon View 5.2 & Lync 2013 Support – Are You Covered?

I think it’s amazing that VMware and Microsoft are working together to support Horizon View 5.2 and Lync 2013. From my understanding, the two companies have only been working together since last summer, so this is great work at bringing the engineering teams together. The problem, though, for most enterprises that I see is the deployment of zero clients, which will not be supported for Lync 2013 when View 5.2 goes GA.

Lync 2013 features with VMware View desktops

Feature               Support
Presence              Supported
Instant Message       Supported
Desktop Sharing       Supported
Application Sharing   Supported
PowerPoint Sharing    Supported
Whiteboards           Supported
File transfer         Supported
Online meetings       Supported
Office Integration    Supported
Audio                 Supported (with Lync 2010, this used to only be supported via IP-Phone)
Video                 Supported (with Lync 2010, this was never supported)
Recording audio       Unsupported

You will have to deploy Lync software components on your virtual machines and client devices, so the client devices can only be Windows 7 or Windows 8 machines.

Lync 2013 and View 5.2

Different components of the VMware View and Microsoft Lync 2013 architecture

The Windows client machine running the Lync VDI plugin needs at least a 1.5GHz CPU and a minimum of 2GB of RAM.

On your Lync Server 2013, ensure that EnableMediaRedirection is set to TRUE for all VDI users (it is a client policy setting, so it can be set with Set-CsClientPolicy). As part of the Lync Server setup, make sure you generate a certificate and add it to the Windows client machine. The certificate will need to be placed in the “Trusted Root Certification Authorities” store. The Lync VDI 2013 plugin will not pair up with the Lync 2013 client running inside the remote desktop if this step is not completed. You may also have to place a hosts file entry on the endpoint if the endpoint is not on the same domain as the Lync Server.
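If you do need the hosts file workaround, it’s a one-line entry; here is a minimal sketch for a Windows endpoint, with a hypothetical server name and IP, that must be run as Administrator.

```python
# A minimal sketch of the hosts-file workaround; the server name and IP
# are hypothetical placeholders. Run with Administrator rights on Windows.
import os

hosts_path = os.path.join(
    os.environ["SystemRoot"], "system32", "drivers", "etc", "hosts"
)
entry = "10.0.0.25    lyncserver.corp.example.com"

with open(hosts_path, "r") as f:
    contents = f.read()

# Only append the entry if the hostname isn't already present.
if "lyncserver.corp.example.com" not in contents:
    with open(hosts_path, "a") as f:
        f.write("\n" + entry + "\n")
```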

It’s also good to point out that you want to make sure USB headsets and webcams are not redirected to the VM. You want the traffic to go directly between the clients via the plugin.

Overall, the Lync 2013 support is great; just make sure you know that zero clients are still not covered. Zero clients will still suffer from hair-pinning traffic in the datacenter and will not get the benefits of WAN optimization for voice traffic. Zero clients can still work; you just really need to nail down the network best practices for PCoIP. Other clients will be added, but it appears the ball is in Microsoft’s court. Microsoft still needs to provide its components to the other clients, but zero client support seems a ways down the road.