Are We Nearing the End of Blade Servers?

The future points toward higher-wattage CPUs.  Does this spell the end for blade servers?  In this blog post, I’ll examine some real-world power examples that may give us a clue.

Understanding The Problem

It all starts with CPU power

Before I begin, it’s important to determine where the blade server ecosystem’s power goes, so I went over to HPE’s Power Advisor and walked through a basic HPE Synergy design. To begin, I chose the Synergy 480 Gen10.  I figured I would pick a blade server that represents the type we should see in the future: small, 2-CPU, capable of handling large memory.  From there, I picked the Intel Xeon 8280L.  The calculator doesn’t show the TDP of individual CPUs, so I went with the 28-core part that supports large memory.  At this point, the CPU power, for 2 CPUs, showed 421 watts.

Next, look at memory and other server components

Moving to the memory, I chose 24 x 128GB LRDIMMs.  This added another 256.8 watts (or 10.7 watts per LRDIMM).  For booting, I chose 2 x 128GB M.2 devices, which added a measly 18 watts (9 watts per device).  Lastly, I added a network card: a 25/50Gb mezzanine card, which only added 22.2 watts.  To sum it up, for a single blade server with 2 x 28-core CPUs, 3TB of RAM and a pair of NICs, we’re at a total of 708 watts – not too bad, but there’s more to the equation.

Last, look at the blade infrastructure power

I switched back to the chassis view in the power calculator, which shows the aforementioned blade server.  The power calculation jumped to 1,367.16 watts – and I hadn’t even added the required network interconnects.  Once I added a pair of 20Gb interconnects, the power requirement rose to 1,402.95 watts.

So let’s break this down.  Subtracting the blade server’s 708 watts shows that the chassis infrastructure (including the interconnects) draws 694.95 watts.  The HPE Synergy “frame” (chassis) supports 6 power supplies, and I had to use a 2,650W power supply; in the QuickSpecs, HPE lists support for 2,950W power supplies.  Assuming a 3+3 power design, where 3 power supplies are sufficient to support the entire frame, this design has 8,850 watts of available power.  Subtracting the 694.95 watts of infrastructure power leaves 8,155.05 watts for the blade servers.

Putting it all together

With 8,155 watts of available power and 12 server slots, we have an average limit of roughly 679 watts per blade server – just shy of the 708 watts drawn by the configuration above.  That works while the frame is only partially populated, but as I add more servers, my options become more limited as the available power envelope shrinks.
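The budget math above can be sketched in a few lines of Python.  The wattage constants are the Power Advisor figures quoted in this post, not official HPE specifications, so treat the sketch as illustrative:

```python
# Rough power-budget sketch for the HPE Synergy frame described above.
# All figures come from the Power Advisor walkthrough in this post.

PSU_WATTS = 2950          # largest PSU listed in the QuickSpecs
REDUNDANT_PSUS = 3        # 3+3 design: 3 supplies must carry the frame
INFRA_WATTS = 694.95      # chassis infrastructure + 20Gb interconnects
SLOTS = 12                # server slots per frame

BLADE_WATTS = 708         # the 2 x 28-core, 3TB configuration above

available = PSU_WATTS * REDUNDANT_PSUS - INFRA_WATTS
per_slot = available / SLOTS
max_full_blades = int(available // BLADE_WATTS)

print(f"Available for servers: {available:.2f} W")   # 8155.05 W
print(f"Average per slot:      {per_slot:.1f} W")    # ~679.6 W
print(f"708 W blades that fit: {max_full_blades}")   # 11 of 12 slots
```

Note the last line: at these numbers, a fully populated frame of twelve 708-watt blades would exceed the budget by a slot’s worth of power.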

Now, this example was using an Intel Xeon SP (Cascade Lake) CPU with a TDP of 205 watts per CPU.  What happens when Sapphire Rapids comes out next year and Granite Rapids arrives later this decade?  TDPs will continue to climb well above the 205w shown above, which means blade server vendors will have to either offer much larger power supplies or limit the CPU offerings within their blade server portfolios.  In addition, the memory architecture will move to DDR5, which draws more power, so we’ll likely start to see limited memory quantities in an effort to stay within the shrinking power envelope.  Perhaps technologies like CXL or Gen-Z will help shift the power envelope outside of the blade server architecture.  If not, we may be at the beginning of the end of the blade server.

I’d love to hear your thoughts – leave a comment below if you think differently.  As always, thanks for reading and for your continued support.

Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com.  He has over 20 years of experience in the x86 server marketplace.  Since 1997, Kevin has worked at several resellers in the Atlanta area.  He has a vast array of competitive x86 server knowledge and certifications, as well as an in-depth understanding of VMware and Citrix virtualization.  Kevin has worked at Dell EMC since August 2011 and is a Principal Engineer and Chief Technical Server Architect supporting the East Enterprise Region at Dell Technologies.  He is also a CTO Ambassador in the Office of the CTO at Dell Technologies.


Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer. No compensation has been provided for any part of this blog.