Are Multi-Node Systems a Better Option Than Blade Servers?

Lately I’ve been having discussions around “futures” with customers, and one topic keeps coming up: what will blade servers look like as CPU TDPs continue to increase?  Looking into the future, it’s very clear that CPUs are going to create an obstacle for blade server users.  TDPs are going to grow to the point where “air cooling” may not be sufficient for all workloads, which will force blade server users to make a choice.

What Are Your Choices?

Some of your options are: move to liquid cooling, choose a lower-wattage design, or move off of blade servers.  For this discussion, I want to focus on the latter.  One of the primary reasons people have blade servers in their datacenters is density.  As mentioned above, the forthcoming challenges that CPUs will bring to cooling blade servers are likely to reduce the number of systems you can fit into a rack.  With that in mind, it may make sense to look at other form factors, like “multi-node” systems.

What Are Multi-Nodes?

When I talk about multi-nodes, I’m referring to a 2U system that can hold 1 – 4 compute nodes.  They are similar to blade servers in that they share power and cooling (and maybe management), but they differ from blade servers in that each server node has its own storage and networking.  Some examples (not an all-inclusive list) of multi-node servers are:

  • Cisco UCS 4200 w/C125 server nodes
  • Dell Technologies PowerEdge C6525
  • Dell Technologies PowerEdge C6520
  • HPE Apollo 2000 Gen10 Plus

Key Advantages of Multi-Nodes

Instead of trying to do a side-by-side of what multi-node systems offer compared to blade servers, I’ll give you my opinion on each component.

Density
The 2U form factor seen in multi-node systems offers a smaller “fault domain.”  In other words, if there is any sort of issue, like a power or networking failure, only a fraction of the servers are impacted compared to a full chassis of blade servers.  In addition, the 4-nodes-in-1-chassis design offers greater density than using 1U servers.
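The density point above is simple arithmetic, which can be sketched out for a standard rack.  This is a minimal, purely illustrative calculation — the 42U rack height is an assumption, and it counts rack units only, ignoring real-world power, cooling, and cabling limits:

```python
# Illustrative sketch: compute-node density per rack, comparing
# 1U rack servers to a 2U chassis holding 4 nodes.
# Assumes a common 42U rack; ignores power/cooling/cabling constraints.

RACK_UNITS = 42  # assumed full-height rack


def nodes_per_rack(chassis_height_u: int, nodes_per_chassis: int,
                   rack_units: int = RACK_UNITS) -> int:
    """Nodes that fit by rack height alone."""
    return (rack_units // chassis_height_u) * nodes_per_chassis


one_u_servers = nodes_per_rack(chassis_height_u=1, nodes_per_chassis=1)
multi_node = nodes_per_rack(chassis_height_u=2, nodes_per_chassis=4)

print(one_u_servers)  # 42 nodes per rack with 1U servers
print(multi_node)     # 84 nodes per rack with 2U/4-node chassis
```

On rack units alone, the 2U/4-node design doubles the node count of 1U servers — which is why density-minded buyers look at this form factor in the first place.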

Processors
This is one of the main areas where I think multi-nodes will have an advantage.  There are two major gaps in the blade server market: there are no mainstream Tier 1 blade servers with AMD EPYC, and there are no blade servers built for only 1 CPU.  These gaps go hand-in-hand.  AMD has a strong 1-CPU server market share, yet we don’t have any blade servers with AMD.  The multi-node market, however, is flooded with servers/nodes with AMD EPYC.  There are a couple of reasons I think this is important.

A quick Google search shows that AMD not only has the largest CPU core-count offering in the x86 market, but their roadmaps show that trend continuing.  One way to reduce the datacenter cooling issues I mentioned above is to reduce the number of CPUs you have in a server.  The multi-node design could therefore not only offer more cores (as well as possibly more memory), but also a lower total TDP, keeping it within the “air cooled” window for cooling in the datacenter.
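The fewer-sockets argument can also be made concrete with rough numbers.  The wattages and core counts below are made-up placeholders for illustration only — they are not actual EPYC (or any vendor’s) specifications:

```python
# Illustrative only: why one high-core-count CPU per node can draw less
# power than two lower-core-count CPUs for the same total cores.
# All wattages/core counts are hypothetical placeholders, not real specs.

def node_cpu_power(cpus: int, tdp_per_cpu_w: int) -> int:
    """Total CPU TDP for a node (watts)."""
    return cpus * tdp_per_cpu_w


# Hypothetical: two 32-core CPUs at 200 W each vs. one 64-core CPU at 280 W.
dual_socket = node_cpu_power(cpus=2, tdp_per_cpu_w=200)    # 400 W for 64 cores
single_socket = node_cpu_power(cpus=1, tdp_per_cpu_w=280)  # 280 W for 64 cores

print(dual_socket, single_socket)
```

Same core count per node, meaningfully less socket power to remove as heat — which is the logic behind a 1-CPU node staying inside the air-cooled envelope longer.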

One last advantage of the node-based servers found in multi-node systems is support for liquid cooling.  I can’t predict the future, but today blade servers don’t support liquid cooling; multi-node servers do.  Dedicated liquid-cooled heat sinks married with a sled design that allows connectivity into a sealed rack cooling ecosystem = pretty cool stuff (no pun intended).

Networking
As you look at networking / external storage connectivity on multi-nodes, it could be an advantage or it could be perceived as a disadvantage.  Since every server has its own I/O connectivity, you don’t have to have the same I/O across all servers.  In blade servers, you typically have a fabric that is shared across the blade servers; the blade servers either use that fabric or they don’t.  You can’t mix and match within that fabric.

With multi-node servers, on the other hand, if you want a 100G card on one server and an InfiniBand card on another, you can.  Of course, the trade-off for each server having its own networking is cable complexity: you won’t have the consolidation of ports and cables that you find in blade servers.

Summary
Multi-nodes offer a nice blend of rack server characteristics and blade server advantages.  They are similar to blade servers yet offer unique options like AMD EPYC and 1-CPU versions, and they provide a smaller fault domain.  I think the multi-node form factor has greater potential for supporting future CPUs (with or without liquid cooling).  I’d like to know if you agree or if you have different thoughts – feel free to comment below.  Thanks for reading!



Kevin Houston is the founder and Editor-in-Chief of  He has over 23 years of experience in the x86 server marketplace.  Kevin has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware virtualization.  He has worked at Dell Technologies since August 2011 and is a Principal Engineer supporting the East Enterprise Region and a CTO Ambassador in the Office of the CTO at Dell Technologies.


Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer. No compensation has been provided for any part of this blog.