Apr 30

Simplicity & Choice for Private & Public Cloud Backup. #.NEXT with #Rubrik

The inaugural .NEXT conference is creeping up, and the pace of change in the industry only continues to accelerate. While CIOs, operations managers, development teams and sysadmins struggle to keep up, an important ally for customers has become the ability to demand choice and simplicity. Simplicity reduces pain, and choice gives a feeling of safety.

While Steve Herrod was CTO at VMware, he coined the term Software-Defined Datacenter. The ability for a workload to live anywhere has been a dream for many years, but it is now becoming a reality as services like backup and disaster recovery continue to evolve. It's one thing to spin up virtual machines, but quite another to protect and manage them all.

Technologies from Nutanix and from Rubrik, a Gold sponsor of .NEXT, highlight how customers are adapting to backup and disaster recovery needs. Simplicity comes in the form of an "it just works" mentality across private and public backup solutions.

Rubrik, being a virtualization-only backup solution, has many similarities to Nutanix. Both Nutanix and Rubrik provide simplicity with:
· Rapid deployment
· Fractional, consumption-based models
· Web-scale technologies designed to scale on demand
· Configuration complexity that doesn't increase as you scale
· Per-VM backup to local and remote appliances and to AWS
· The ability to create multiple backup schedules, SLAs and retention periods
· An HTML5 user interface that avoids plugins and reduces setup time, plus a highly available control plane

When things "just work," you don't have to be an expert in the technology and can focus your time on how your business makes money. Backup and DR are time sinks until, unfortunately, the time comes when you need them. Public cloud backup use cases are great for companies that don't want to spend money keeping the lights on for backup/DR.

Some other similarities between the two companies lie in their makeup. There is a heavy Google bloodline in engineering, and they share a common venture capital firm in Lightspeed Ventures. Both the CEO of Rubrik, Bipul Sinha, and one of the founders of Rubrik, Mark Leslie, sit on the board of Nutanix. If you like a full-stack play, Nutanix can be your primary data fabric and Rubrik your secondary fabric. Remember, friends don't let friends do complex!

Nutanix & Rubrik

Check out the chassis, top and bottom. Notice anything similar?!

Choice is important because there is never one way to do things that works for everybody. When it comes to getting work done, most of us have experienced being locked into the UI of vendor X. If it's not in the UI, you can't change the workflow to meet your needs, and then you start doing things that consume a lot of time. Manually adding VMs to backups is just one of those boring, redundant tasks. In a large cloud environment you wouldn't want to back up every new virtual machine every hour on the hour, but you might want to start doing so after, say, X days, once the VM has been deemed important. Either Nutanix or Rubrik can accomplish this via REST APIs, with the ability to tie your own business logic into the workflow.
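As a rough illustration of what that business logic could look like, here is a minimal Python sketch. The endpoint paths, field names and the add_vm_to_policy helper are hypothetical placeholders, not the actual Nutanix or Rubrik REST APIs; the point is simply that a few lines of scripting can enroll VMs into a backup policy once they pass an age threshold.

```python
import requests
from datetime import datetime, timedelta, timezone

BASE_URL = "https://backup.example.local/api/v1"   # hypothetical REST endpoint
AUTH = ("admin", "secret")                          # use real credentials or tokens in practice
MIN_AGE_DAYS = 7                                    # the "X days" before a VM is deemed important

def list_unprotected_vms():
    """Return VMs not yet assigned to any backup policy (hypothetical API)."""
    resp = requests.get(f"{BASE_URL}/vms", params={"protected": "false"},
                        auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()["entities"]

def add_vm_to_policy(vm_id, policy="gold-sla"):
    """Attach a VM to a backup SLA/policy (hypothetical API)."""
    resp = requests.post(f"{BASE_URL}/policies/{policy}/vms",
                         json={"vm_id": vm_id}, auth=AUTH, verify=False)
    resp.raise_for_status()

def main():
    cutoff = datetime.now(timezone.utc) - timedelta(days=MIN_AGE_DAYS)
    for vm in list_unprotected_vms():
        created = datetime.fromisoformat(vm["create_time"])  # assumes tz-aware ISO-8601 timestamps
        if created <= cutoff:
            add_vm_to_policy(vm["id"])
            print(f"Enrolled {vm['name']} into backup policy")

if __name__ == "__main__":
    main()
```

Run something like this on a schedule and new VMs quietly graduate into protection without anyone clicking through a UI.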

REST APIs represent the ultimate choice in how to run your infrastructure. When Marc Andreessen authored "Software Is Eating the World," it was about industries being disrupted by software. Both Nutanix and Rubrik share this view and will have more news at .NEXT on how they're both disrupting the datacenter.


Aug 26

EVO RAIL: Status Quo for Nutanix

Some will make a big splash about the launch of EVO RAIL, but the reality is that things remain status quo. While I work for Nutanix and am admittedly biased, the fact is that Nutanix was formed in 2009 and has been selling since 2011. VSAN, and now EVO RAIL, is a validation of what Nutanix has been doing over the last five years. In this case, a high tide lifts all boats.

Nutanix will continue to partner with VMware on all solutions: VDI, RDS, SRM, server virtualization, big data applications like Splunk, and private cloud. Yes, we will compete with VSAN, but I think the products are worlds apart, mostly due to architectural decisions. Nutanix helps sell vSphere and enables all the solutions that VMware provides today. Nutanix has various models that serve everything from Tier 1 SQL/Oracle all the way down to the remote branch office where you might want only a handful of VMs. Today EVO RAIL is positioned to serve only Tier 2, test/dev and VDI workloads. The presentation I sat in on as a vExpert confirmed Tier 1 is not a current use case. I feel this is a mistake for EVO RAIL. By not being able to address Tier 1, in which I would include VDI, you end up creating silos in the datacenter, which is everything the SDDC should be trying to eliminate.

Nutanix Use Cases

Some of the available Nutanix use cases

Scale-Out

Nutanix is still the king of scale, but I am interested to hear more about EVO RACK, which is still in tech preview. EVO RAIL in version 1.0 will only scale to 16 nodes/servers, or 4 appliances. Nutanix doesn't really have a limit but tends to follow hypervisor limits; most Nutanix reference architectures are around 48 nodes from a failure-domain perspective.

Some important differences between Nutanix and EVO RAIL:

* Nutanix starts at 3 nodes, EVO RAIL starts at 4 nodes.

* Nutanix uses hot optimized tiering based on data analysis, plus a RAM cache that can be deduped; EVO RAIL uses caching from SSD (70% of all SSD is used for cache).

* You can buy one Nutanix node at a time; EVO RAIL is only sold four nodes at a time. I think this has to do with trying to keep a single SKU, but SMBs in the market will find it hard to make that jump. On the enterprise side you need to be able to mix different node types if your compute/capacity requirements don't match up.

* Nutanix can scale with different node types offering different levels of storage and compute; EVO RAIL today is a hard-locked configuration. You are unable even to change the amount of RAM from the OEM vendor. CPUs are only 6-core, which leads to needing more nodes, and more nodes = more licenses.

* EVO RAIL is only spec'd for 250 desktops / 100 general server VMs per appliance. Nutanix can deliver 440 desktops per 2U appliance with a medium Login VSI workload, and 200 general server VMs when enabling features like inline dedupe on the 3460 series. In short, there are no hard limits as long as you don't have CPU/RAM contention.

Management

* Nutanix has one Storage Controller VM per host that takes care of VM-caliber snapshots, inline compression, inline dedupe, MapReduce dedupe, MapReduce compression, analytics, cluster health, replication and hardware support. EVO RAIL will have the EVO management software (web server), a vCenter VM, a Log Insight VM, a VM from the OEM vendor for hardware support, and a vSphere Replication VM if needed.

* Nutanix is able to separate compute and storage clusters. EVO RAIL is one large compute cluster with only one storage container. With separation you can have smaller compute clusters and still enjoy one giant volume. This is really just a matter of design flexibility.

* Nutanix can run with any vSphere license; the EVO RAIL license is Enterprise Plus. I am not sure how that will affect pricing. I suspect the OEMs will be made to keep it at normal prices, because otherwise it would affect the rest of their business.

* Nutanix can manage multiple large or small clusters with Prism Central. EVO RAIL has no multi-cluster management.

* With Nutanix you get to use all of the available hard drives for all of the data out of the box. With EVO RAIL you have to increase the stripe width to take advantage of all the available disks when data is moved from cache to hard disk.

* Nutanix offers both analysis and built-in troubleshooting tools in the virtual storage controller. You don't have to add another VM to provide those services.

Chad Sakac mentioned in one of his articles, "my application stack has these rich protection/replication and availability SLAs – because it's how it was built before we started thinking about CI models," that you might not pick EVO RAIL and instead go to a Vblock. I disagree on the CI part. Nutanix has the highest levels of data protection today: synchronous writes, bit-rot prevention, all data checksummed, data continuously scrubbed during quiet periods, and Nutanix-based snapshots for backup and DR.
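To make the bit-rot point concrete, here is a minimal Python sketch of the general checksum-and-scrub pattern. It illustrates the concept only and is not Nutanix's actual implementation; the chunk size, SHA-1 choice and in-memory layout are assumptions.

```python
import hashlib

class ChunkStore:
    """Toy store that checksums data on write and verifies it during a scrub pass."""

    def __init__(self):
        self.chunks = {}     # chunk_id -> bytes
        self.checksums = {}  # chunk_id -> SHA-1 hex digest recorded at write time

    def write(self, chunk_id, data):
        self.chunks[chunk_id] = data
        self.checksums[chunk_id] = hashlib.sha1(data).hexdigest()

    def scrub(self):
        """Re-read every chunk and compare against the stored checksum."""
        corrupted = []
        for chunk_id, data in self.chunks.items():
            if hashlib.sha1(data).hexdigest() != self.checksums[chunk_id]:
                corrupted.append(chunk_id)  # a real system would repair from a good replica
        return corrupted
```

A background scrub like this, run during quiet periods, is how silent corruption gets caught before you ever need the data back.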

It's a shame that EVO RAIL went with the form factor it did. VSAN can lose up to 3 nodes at any one time, which is good, but in the current design it will need 5 copies of data to ensure that a block going down will not cause data loss when you scale the solution. I think they should have stayed with a 1-node, 2U solution. Nutanix has a feature called Availability Domains that allows a whole block to go down while the cluster keeps functioning. This feature doesn't require any additional storage capacity, just the minimum two copies of data.
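As a rough, hypothetical sketch of block-aware replica placement (an illustration of the idea, not the actual Nutanix algorithm): when writing data, pick replica targets so that no two copies land in the same physical block. That is what lets a whole block fail without taking out every copy.

```python
import itertools

def place_replicas(nodes, replication_factor=2):
    """Pick replica targets so that no two copies share a physical block.

    `nodes` is a list of (node_id, block_id) tuples. Toy illustration of
    block awareness, not Nutanix's actual placement logic.
    """
    for combo in itertools.combinations(nodes, replication_factor):
        blocks = {block_id for _, block_id in combo}
        if len(blocks) == replication_factor:   # every replica in a different block
            return [node_id for node_id, _ in combo]
    raise RuntimeError("not enough distinct blocks for block-aware placement")

# Example: four nodes spread across two blocks
cluster = [("node-1", "block-A"), ("node-2", "block-A"),
           ("node-3", "block-B"), ("node-4", "block-B")]
print(place_replicas(cluster))  # e.g. ['node-1', 'node-3']
```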

More information on Availability Domains can be found in the Nutanix Bible.

Scale

* Nutanix can scale past 32 nodes; VSAN is supported for 32 nodes, yet EVO RAIL is only supported for 16 nodes. I don't know why they made this decision.

* Prism Element has no limit on the number of objects it can manage. EVO RAIL is still limited by the number of components. I believe the locked-down hardware specs are being used to limit the number of components so this does not become an issue in the field.

* With Nutanix, when you add a node you can enjoy the performance benefits right away. With EVO RAIL you have to wait until new VMs are created to make use of the new flash and hard drives (or perform a maintenance operation). A lot of this comes down to how Nutanix controls the placement of data; data locality helps here.

I think the launch of EVO RAIL shows how important hardware still is in achieving five 9's of availability. Look out, dual-headed storage architectures: your lunch just got a little smaller again.

Jun 19

Why #Webscale Reason 5: Brain Drain, Training Budgets & Turnkey Solutions

Companies like Google, Amazon and Facebook had to invent (code) new technologies and approaches to doing IT because no alternative to traditional IT existed. Many of the technologies involved are complicated and take a highly trained team to forge ahead with. Web-scale, however, is not an all-or-nothing proposition. Today we've reached a point where the principles and patterns are well understood, and turnkey enterprise-class solutions are emerging to bring web-scale capabilities to the enterprise. These don't require PhDs to operate. Even some of the industry storage giants like EMC are trying to deploy similar technologies to provide true scale-out. Nutanix has been building on these technologies since 2009 so people can do more with less. An IT admin has the option of never leaving the Prism UI if they want.

Like it or not, enterprise IT is fighting with the cloud for relevance, and it is not that way by choice. The politics and finger-pointing are what traditional infrastructure constraints and complexity have created. Budget constraints are all the more reason you need an alternative. If you have the opportunity to learn one skill that will save countless hours down the road, is that not a fair price to pay? I remember an old boss questioning my VMware 3.0 training over the same things: do I need it, is it valuable? Many of the skills that were considered niche five years ago are now mainstream. Companies like Nutanix are eliminating the need for specialized talent by delivering turnkey solutions that are web-scale inside but provide enterprise capabilities, offering the best of both worlds.

The reason VMware SRM was invented was so people could get out of the weeds of scripting and engineering their DR plans. When people changed jobs or left the company, you wouldn't have to worry about the next person stepping in to fill their shoes and figuring out the failover process if a disaster were to occur.

With any new technology or paradigm shift there needs to be a way to bridge the gap between the two worlds. The difference between public and private cloud in this case comes down to learning a UI and hiding the complexity. Virtualization is a key aspect of Nutanix, so a lot of existing skills will carry over between the old and the new worlds of the datacenter.

Jun 16

Why #Webscale Reason #4: Machine Data & Analytics #Nutanix #Linkedin

When you open your infrastructure up to APIs and have a platform to automate all aspects, you enable a common management and analytics platform. Silos of infrastructure not only put additional strain on storage performance with the I/O blender effect, but also make it harder to manage the wealth of data that is generated. Google's ability to collect and analyze machine data has changed the game for them. With different hardware, different datacenters and different use cases to contend with, it's all about managing the whole story and seeing problems before they end up on your CIO's dashboard. This can really only be done with a shared-nothing architecture.

Look at how LinkedIn is doing it. Similar aspects to the Nutanix design.

Want to learn more? Great live info is coming here.

Jun 13

Why #Webscale Reason #3: It’s about the people – #Twitter #APIgee #DataStax #Nutanix

It's not all about wingdings and nuts & bolts. It's easy to get lost in the weeds of technology and forget the greater purpose of why an IT department exists. When technology religion starts to dictate what is right for the business, it can easily turn into a dead-end street. People and process are the hardest things in tech, and that is where web-scale plays a part. Web-scale is about launch first, optimize later. Focusing on what you're good at and getting to the last 10% can be an iterative process. It's not about speeds and feeds; it's about getting your teams to focus on the business and work together. It's about breaking down traditional silos and helping move the needle. I believe the general sysadmin will have a long life ahead of them, versus people who are totally focused in one area.

At Nutanix we have no religion on hardware. Today we OEM through Super Micro; tomorrow we could switch if the economics, performance and form factor made sense.

Launch first has allowed Nutanix to get to MapReduce dedupe (post-process) in probably one of the quickest fashions. It started with inline dedupe for performance, which was put into production and built upon work from our Medusa/Cassandra team. Then MapReduce dedupe came along, focusing on OS and application data. Over time more algorithms will be added to MapReduce dedupe, which will potentially lead to more features.
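For readers who haven't seen post-process deduplication before, here is a minimal Python sketch of the general idea: fingerprint chunks of data, group chunks by fingerprint, and keep only one physical copy per fingerprint. This is a conceptual illustration, not Nutanix's actual MapReduce dedupe; the chunk size and the in-memory index are assumptions.

```python
import hashlib
from collections import defaultdict

CHUNK_SIZE = 16 * 1024  # assumed chunk size for the illustration

def chunk(data):
    """Split a byte string into fixed-size chunks."""
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

def post_process_dedupe(chunks):
    """Group chunks by fingerprint and keep a single physical copy per fingerprint."""
    index = defaultdict(list)          # fingerprint -> list of logical chunk ids
    store = {}                         # fingerprint -> the one physical copy kept
    for chunk_id, data in enumerate(chunks):
        fp = hashlib.sha1(data).hexdigest()
        index[fp].append(chunk_id)
        store.setdefault(fp, data)     # only the first copy is physically kept
    saved = sum(len(c) for c in chunks) - sum(len(c) for c in store.values())
    return store, index, saved

# Example: two VMs built from the same base OS image dedupe down to one set of chunks
vm_a = b"base-os-image" * 5000
vm_b = b"base-os-image" * 5000
store, index, saved = post_process_dedupe(chunk(vm_a) + chunk(vm_b))
print(f"physical chunks: {len(store)}, bytes saved: {saved}")
```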

From a customer perspective, launch first gives you more options to make a better decision. This is another reason why hybrid cloud will succeed.

“If all you have is a hammer, everything looks like a nail”

Catch a live tech panel on Wednesday June 25th, 2014 – 10:00AM–10:45AM PDT

Designing and Building Web-scale Systems

Panel line-up:

Dmitriy Ryaboy (Engineering lead at Twitter)
Karthik Ranganathan (Engineer at Nutanix)
Anant Jhingran (CTO of APIgee, IBM Fellow)
Darshan Rawal (Director of Product Management, DataStax)


Jun 12

Why #Webscale Reason #2: System Design #Nutanix

Keeping things up to date in the enterprise usually happens only when a support contract is about to run out or when a new feature is needed. Teams are reluctant to make changes when outages need to be planned, load on the systems needs to be low, and manual intervention may be needed throughout the process.

Web-scale allows non-disruptive rolling upgrades, no single point of failure and minimal intervention. A self-defining (and versioned) disk format, like protobufs, means upgrades of on-disk data can be done lazily. Web-scale can't assume one-shot data upgrades, given the time and resources that would consume at scale. This also inherently allows clients and servers to run different versions throughout the process, which is a pretty big accomplishment in itself.
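Here is a minimal sketch of what a versioned, self-describing record with lazy upgrades can look like. It uses JSON for brevity rather than protobufs, and the field names and version bump are made-up assumptions, not Nutanix's actual on-disk format; the point is that each record carries its own version, and the reader upgrades old records only when it happens to touch them.

```python
import json

CURRENT_VERSION = 2

def upgrade(record):
    """Lazily upgrade a self-describing record to the current version on read."""
    if record.get("version", 1) == 1:
        # hypothetical change: v1 stored capacity in MB, v2 stores it in bytes
        record["capacity_bytes"] = record.pop("capacity_mb") * 1024 * 1024
        record["version"] = 2
    return record

def read_record(raw):
    """Decode a record, upgrading it if it predates the current version.

    Records that are already current pass through untouched, so old and new
    readers/writers can coexist during a rolling upgrade.
    """
    record = json.loads(raw)
    if record.get("version", 1) < CURRENT_VERSION:
        record = upgrade(record)
    return record

# A v1 record written before the upgrade is still readable afterwards
old = json.dumps({"version": 1, "disk_id": "d42", "capacity_mb": 512})
print(read_record(old))  # {'version': 2, 'disk_id': 'd42', 'capacity_bytes': 536870912}
```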

Join Web-Scale Wednesday to learn more and hear from industry leaders like Dmitriy Ryaboy, Engineering Lead at Twitter.


Jun 11

Why #Webscale Reason #1: The swipe card effect #Nutanix

Web-scale IT is making its way into the mainstream enterprise datacenter. In the run-up to a joint event that Nutanix is supporting, I will give my reasons why web-scale is becoming commonplace.

Reason 1:

Clients and customers leave for public service providers when their needs aren't met fast enough. Enterprise IT has lots of complexity and silos around networking, storage and compute, which results in unpredictability when scaling out current environments. Do you have enough throughput available to add your next storage array? Do I have ports left on my Fibre Channel switch to add my next server? At the end of the day the business doesn't care how this is achieved. Adding storage or compute should be a non-event. It's all about delivering services. Shrink or grow, 1 node or 20 nodes, it needs to happen at the pace of the business.

On June 25th, Nutanix is celebrating web-scale IT with Web-Scale Wednesday, a live online event that will bring together IT leaders, industry experts and enterprise customers to share their perspectives and experiences adopting web-scale IT and bringing it to the enterprise. There will be speakers from Twitter, DataStax, Dell, Citrix and more across the industry. Join us!