With all that is made of the competition between blade server manufacturers and the growth of the blade server market in general, is there room for another type of condensed computing in the data center? Have we been going about things all wrong with regard to architecture design?
Nutanix thinks so.
Nutanix is a start-up geared toward delivering a simplified virtualization infrastructure, with a strong focus on eliminating the need for a SAN. Their clustered solution brings storage and compute together, which theoretically reduces expense, reduces complexity, and improves performance. On its own that doesn’t seem especially innovative, but the secret sauce is how they make the cluster scale and tier/span data across all nodes without sacrificing performance. Each node has the usual compute resources plus a mix of local SSD and SATA hard disks. Four nodes fit in each 2U enclosure, called a “block”; add more blocks and you have a Nutanix cluster. The software stack scales and balances everything between the nodes and blocks. The technology originated from the architecture that companies like Google and Facebook employ in their data centers. Assuming that can be taken at face value, the scalability potential is phenomenal.
So what’s the big deal?
Well, my thinking is that if you can eliminate the need for a SAN (for virtualization), then you can definitely eliminate the need for an enclosure of blade servers. No interconnects. No enclosure. Simplified network architecture. No SAN. What’s not to love?
Eliminate the SAN. Really?
The 37 million dollar question that comes to mind is: Can I really eliminate my SAN? For me the answer is a quick and resounding, “No I cannot!” My SAN does a heck of a lot more than deliver servers through a hypervisor. There are still application, database, and file servers depending on a direct connection to that centralized beast of disk technology. Many of those servers may be virtualized, but some of their data volumes are simply best served as LUNs and not ridiculously large VMDK files. Granted, the topic of VM data placement is debatable, but it becomes moot if I have any physical servers around needing centralized storage. In my environment, and I feel confident I’m not alone in this, the SAN can’t really go away.
Specific to virtualized workloads, Nutanix claims that there is plenty of capacity for just about any scenario. Personally, I still doubt the clustered Nutanix nodes have the local capacity to absorb both the VMs and their data with redundancy. A single block (made up of 4 nodes) has some solid state goodness and 20TB worth of storage (5TB per node). As the cluster scales by adding blocks of nodes, the increased pool of storage is available to all VMs. For some environments that may be more than enough capacity to contain everything related to every virtual server. For other environments this may not even be close to the needed capacity. I also wonder what happens when I drink the Nutanix kool-aid and then later hit the capacity wall? I’d feel pretty silly running out of space and having to buy a full compute block just to increase storage capacity.
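The back-of-the-envelope math here is worth sketching out. A quick example using the article’s figures (5TB per node, 4 nodes per block); note the 2x replication factor is my own assumption for what “with redundancy” might cost you, not a published Nutanix number:

```python
# Rough usable capacity as Nutanix blocks are added.
# TB_PER_NODE and NODES_PER_BLOCK come from the article;
# REPLICATION = 2 is an assumed redundancy overhead.
TB_PER_NODE = 5
NODES_PER_BLOCK = 4
REPLICATION = 2  # assumed copies kept of each VM's data

def usable_tb(blocks):
    """Raw pooled capacity divided by the replication factor."""
    raw = blocks * NODES_PER_BLOCK * TB_PER_NODE
    return raw / REPLICATION

for blocks in (1, 2, 4):
    print(f"{blocks} block(s): {usable_tb(blocks):.0f}TB usable")
```

Under that assumption a single block nets out closer to 10TB usable than 20TB, which is why capacity planning matters before you buy in.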
Regardless, it does seem possible that Nutanix could displace the SAN completely if your environment is 100% virtualized and capacity requirements aren’t too drastic. Realistically though, since most businesses are not 100% virtualized, this technology simply reduces the need for capacity expansion and the worry of performance tuning the SAN for VM workloads. That definitely has some value but how much depends on how virtualized you are and how much data your VMs are connected to (among other things).
Ok. So how about compute?
Today we have stacks of blade enclosures full of ESX hosts driving maximum efficiency of network and power. Or are they? Let’s say you have an HP c7000 full of half-height BL460 servers. For every 10U of rack space you get 16 dual-socket ESX hosts. With the Nutanix solution there are 4 hosts per 2U block, so that same 10U can hold 20 similarly configured physical hosts. Within that same 10U there is also a collective 6TB of Fusion-io, 6TB of SATA SSD, and 100TB of SATA disk. The density win clearly goes to the Nutanix solution.
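The density comparison above boils down to simple arithmetic, sketched here using the article’s figures (hosts only; top-of-rack switches and SAN gear aren’t counted):

```python
# Host density per 10U of rack space.
RACK_U = 10
blade_hosts = (RACK_U // 10) * 16    # one c7000 (10U) holds 16 BL460s
nutanix_hosts = (RACK_U // 2) * 4    # each 2U Nutanix block holds 4 nodes

print(blade_hosts, nutanix_hosts)    # 16 vs 20 hosts in the same footprint
```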
From a networking perspective things are a bit less clear. Within the HP enclosure, using Flex-10 virtual connect, there would be two or more 10Gb uplinks per enclosure (16 physical hosts). For each pair of uplinks one would be active with the other ready for fail-over. You could get away with a single pair of 10Gb uplinks but that would be silly for a full enclosure of hosts. I’d argue that more than four uplinks would likely be needed but that’s a different conversation. The main point is that there is some flexibility and design decisions as to how the network is configured.
By contrast, the Nutanix has four 10Gb uplinks per 2U block (one for each node within the block). Comparing the same 10U of rack space, you end up with twenty 10Gb uplinks. As near as I can tell, there is no aggregation of the 10Gb interfaces. From a pure performance perspective you can’t argue with the simplicity and potential of a dedicated 10Gb connection per host. On the other hand, that is a huge number of 10Gb ports and certainly increases the cost of the networking infrastructure. The single interface per node also doesn’t give any option for fail-over. At this point I’m not entirely clear on what all of the networking capabilities are because Nutanix literature focuses purely on the SAN-oriented benefits and I haven’t yet connected with them to follow up. I’d like to learn a bit more about their networking architecture before I declare an advantage to one or the other. I’ll report back what I find out.
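For a concrete sense of the port-count gap, here is the uplink math for the same 10U footprint. The Flex-10 figure assumes the minimal active/standby pair per enclosure described above; a real design would likely use more uplinks, so treat this as a lower bound on the blade side:

```python
# 10Gb uplink counts per 10U of rack space.
RACK_U = 10
nutanix_uplinks = (RACK_U // 2) * 4  # one uplink per node, 4 nodes per 2U block
flex10_uplinks = (RACK_U // 10) * 2  # one active/standby uplink pair per c7000

print(nutanix_uplinks, flex10_uplinks)  # 20 vs 2 switch ports to provision
```

That 10x difference in 10Gb switch ports is where the networking cost question comes from.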
Nutanix has a pretty good start on a new way of delivering virtualization. The idea of consolidating compute and storage isn’t entirely new (Pivot3, for example, has had VMs running on their storage arrays for a couple of years now) but Nutanix has absolutely evolved the concept. If they can deliver on all of their promises for less money than “the big guys” they stand a reasonable chance at being disruptive to both storage and blade center markets in the niche of virtualization.
One area where I can see this strategy really taking off is with VDI deployments. Who wouldn’t want a dedicated stack of gear running at high performance with plenty of capacity to spare? The total value of this technology, however, depends on a number of unanswered questions, both technical and financial. Assuming it is as cost competitive as claimed, greenfield environments should certainly take a serious look and determine whether a dedicated, purpose-built virtualization cluster has more value than a more complex but more versatile traditional storage and compute approach. Those that have existing storage and compute stacks will need to do a much more comprehensive evaluation, because there’s a lot more to the value of “traditional” data center technology than consolidated server workloads.
Nutanix is cool, but I’m not replacing all of my blade servers just yet.
To find out more information about Nutanix click here.
To find out more about HP blade servers click here.
Chris is a contributor and author for BladesMadeSimple.com and in his day job is a Systems Architect for Clackamas County in Oregon and an HP customer. The opinions expressed in his writings do not necessarily reflect those of BladesMadeSimple or Chris’ employer. Follow Chris on Twitter @sysgeekguy.