Maximize Your ROI and Agility with Big Data #Nutanix #Docker #BlueData

Separate your data from your compute for more agility.

The DataNode is what is used to build out HDFS. Typically the DataNode and the NodeManager are co-located on the same host, whether it's physical or virtual. The NodeManager is responsible for launching and managing the containers that are scheduled by the ResourceManager. On Nutanix, if you virtualize the DataNode and the NodeManager on separate virtual machines, you have the opportunity to increase your agility. The agility comes from the ability to use your resources to the maximum of your capacity at all times. When the cluster isn't in use, or is less busy, other systems have the opportunity to use the resources. Since NodeManagers aren't responsible for persisting data, you can shut them down and make the CPU and memory available for another project, like Spark or maybe a new machine-learning program someone wants to test out.
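As a minimal sketch of what shutting down the NodeManagers can look like, the Python script below decommissions a list of NodeManager hosts by adding them to YARN's exclude file and refreshing the ResourceManager. The host names and exclude-file path here are assumptions for illustration; the real path is whatever yarn.resourcemanager.nodes.exclude-path is set to in your yarn-site.xml.

    import subprocess

    # Hypothetical compute-only VMs running NodeManagers (no DataNode),
    # so they are safe to drain and power off. Substitute your own.
    NODEMANAGER_HOSTS = ["nm-vm-01", "nm-vm-02", "nm-vm-03"]

    # Assumed path; must match yarn.resourcemanager.nodes.exclude-path.
    EXCLUDE_FILE = "/etc/hadoop/conf/yarn.exclude"

    def decommission(hosts):
        # Add the compute hosts to YARN's exclude list...
        with open(EXCLUDE_FILE, "a") as f:
            for host in hosts:
                f.write(host + "\n")
        # ...then have the ResourceManager re-read it so no new containers
        # are scheduled on these hosts. Once they drain, power the VMs off.
        subprocess.run(["yarn", "rmadmin", "-refreshNodes"], check=True)

    if __name__ == "__main__":
        decommission(NODEMANAGER_HOSTS)

Since the DataNodes live in their own VMs, HDFS stays up the whole time.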

Hold the phone! What about data locality? You are correct: performance is going to take a hit. Performance may drop by up to 15% compared to the standard co-located layout, but if your system is only busy 30% of the time, it might be more than worth it. Let's say a job takes 60 minutes to complete. Using this new model of separated compute and storage, the job may now take 70 minutes. Is the extra 10 minutes worth the agility to use your hardware for other projects? I think so, but that is going to depend on your business requirements of course.
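To put rough numbers on that trade-off, here is the arithmetic from the example above (the figures are illustrative, not benchmarks):

    # Back-of-the-envelope math for the 60-vs-70-minute example.
    baseline_min = 60     # job runtime with co-located compute and storage
    separated_min = 70    # same job with compute and storage separated
    busy_fraction = 0.30  # the cluster is only busy 30% of the time

    slowdown = (separated_min - baseline_min) / baseline_min
    reclaimable = 1 - busy_fraction

    print(f"Per-job slowdown: {slowdown:.0%}")              # ~17%
    print(f"Hardware time reclaimable: {reclaimable:.0%}")  # 70%

You trade roughly 17% on the jobs you already run for the ability to hand 70% of the hardware's calendar time to other projects.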

On the data locality side, the DataNode still gets to benefit from reading locally, so its data path isn't going to put more stress on the network; that's a plus. The NodeManager also writes all of its temporary and shuffle data locally, which avoids the additional stress of having the NodeManager write to a remote shared storage device. And in some cases the NodeManager will still talk to the local DataNode over the local hypervisor switch.
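A quick way to confirm the shuffle path really is local is to check where yarn.nodemanager.local-dirs points. A small sketch, assuming the common /etc/hadoop/conf layout:

    import xml.etree.ElementTree as ET

    # Assumed location; adjust for your distro's config directory.
    YARN_SITE = "/etc/hadoop/conf/yarn-site.xml"

    # yarn.nodemanager.local-dirs is where the NodeManager spills temporary
    # and shuffle data; it should point at the VM's local virtual disks,
    # not at a remote mount.
    tree = ET.parse(YARN_SITE)
    for prop in tree.getroot().findall("property"):
        if prop.findtext("name") == "yarn.nodemanager.local-dirs":
            print("NodeManager local dirs:", prop.findtext("value"))
            break
    else:
        print("Not set; defaults to ${hadoop.tmp.dir}/nm-local-dir")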

If you're after some real flexibility, you could look at using BlueData to run Docker containers alongside the DataNodes; BlueData essentially takes over for the NodeManager. Install some CentOS VMs sized to fit inside the host's NUMA node and install BlueData on them. BlueData can help with QoS for different tenants, and it allows you to run different versions of Hadoop distros, Spark, Kafka and so on without blowing out your data requirements. BlueData also helps optimize the remote connection between the containers and the HDFS distro of your choice.
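BlueData's own tooling manages this for you, but to illustrate the underlying idea of per-tenant QoS with containers, here is a generic sketch using the Docker SDK for Python; this is not BlueData's actual mechanism, and the image name and limits are placeholders:

    import docker

    client = docker.from_env()

    # Cap a tenant's container at 2 CPUs and 8 GB of RAM so one team's
    # experiment can't starve the other tenants on the same host.
    container = client.containers.run(
        "centos:7",               # placeholder; a real tenant would run
        "sleep infinity",         # a Spark or Kafka image instead
        name="tenant-a-sandbox",
        nano_cpus=2_000_000_000,  # 2 CPUs, in units of 1e-9 CPU
        mem_limit="8g",
        detach=True,
    )
    print(container.status)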

If you're after more agility, want to avoid buying separate hardware for each project, want better ROI on systems that run only weekly, monthly or quarterly, or want better testing methodologies, this may be the right architecture for you to try out.
