Can You Run GPUs on Blade Servers?

In the past, a Graphics Processing Unit (GPU) was equated with design workloads, like designing automobiles. Over the past few years, however, organizations have realized that GPUs offer value beyond Artificial Intelligence (AI) and Machine Learning (ML). In fact, a large majority of recent GPU adoption has revolved around Virtual Desktop Infrastructure (VDI). When we look at “modular infrastructure” (aka blade server) environments, having multiple servers within a small footprint is ideal for VDI. In today’s blog post, I’m going to review what each blade server vendor offers for GPU options.
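
As an aside, if you want to verify what actually shows up in a blade once the GPUs are installed, here’s a minimal sketch using NVIDIA’s NVML bindings for Python (the pynvml package). This is just my own illustration, not vendor tooling, and it assumes the NVIDIA driver and pynvml are installed on the host:

```python
# Minimal sketch: enumerate the NVIDIA GPUs visible to the OS on a blade.
# Assumes the NVIDIA driver is installed and pynvml is available
# (pip install nvidia-ml-py). Illustration only, not vendor tooling.
import pynvml

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()
    print(f"GPUs visible: {count}")
    for i in range(count):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        # Older pynvml versions return bytes; newer ones return str.
        if isinstance(name, bytes):
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"  GPU {i}: {name}, {mem.total // (1024**2)} MiB")
finally:
    pynvml.nvmlShutdown()
```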

Cisco UCS
Going alphabetically, Cisco’s GPU options include both integrated GPU mezzanine cards and expansion through a PCIe node.

Model | # of GPUs Supported | Supported GPUs
UCS B200 M5 | 2 | Cisco NVIDIA P6
UCS B480 M5 | 4 | Cisco NVIDIA GRID P6
UCS X210c M6 | 2 | NVIDIA T4, NVIDIA A16, NVIDIA A40, NVIDIA A100
UCS X440 PCIe Node | 4 | NVIDIA A100 Tensor Core, NVIDIA A16, NVIDIA A40, NVIDIA T4 Tensor Core

Dell Technologies
The Dell PowerEdge MX architecture currently relies on third parties for GPU support. One option is from Amulet Hotkey. Amulet developed a shelf that fits within the Dell PowerEdge MX Fabric B slot and supports up to 8 GPUs. When you factor in a GPU shelf in both Fabric B1 and B2, that’s 16 GPUs per chassis, which works out to 2 GPUs per MX740c or MX750c blade server (or 4 GPUs per MX840c). The most interesting part of this design is that it utilizes the PCIe version of the NVIDIA T4 GPU, a more current and robust GPU model from NVIDIA.

The other offering is from Liqid. I blogged about it in detail back in 2020, so check that post out if you want specifics, but in general, an individual Dell blade server physically connects to Liqid’s PCIe expansion chassis, which holds up to 20 full-height, full-length GPUs. (Note: there is an updated design coming for this solution, so I’ll be posting about it as soon as I get details.)
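
To put the Amulet shelf numbers in VDI terms, here’s a rough back-of-the-napkin sketch. The inputs are illustrative assumptions (8 T4s per shelf as described above, two shelves per chassis, and a hypothetical 2 GB vGPU profile), not sizing guidance from Dell or NVIDIA:

```python
# Back-of-the-napkin VDI density math. All inputs are illustrative
# assumptions, not vendor sizing guidance.
T4_MEMORY_GB = 16          # NVIDIA T4 card memory
PROFILE_GB = 2             # hypothetical vGPU profile size per desktop
GPUS_PER_SHELF = 8         # Amulet Hotkey shelf capacity (per the post)
SHELVES_PER_CHASSIS = 2    # one shelf each in Fabric B1 and B2

gpus_per_chassis = GPUS_PER_SHELF * SHELVES_PER_CHASSIS        # 16
desktops_per_gpu = T4_MEMORY_GB // PROFILE_GB                  # 8
desktops_per_chassis = gpus_per_chassis * desktops_per_gpu     # 128

print(f"{gpus_per_chassis} GPUs per chassis, "
      f"~{desktops_per_chassis} VDI desktops at a {PROFILE_GB} GB profile")
```

Real sizing obviously depends on the vGPU profile, CPU, and memory per desktop, but the math shows why packing GPUs into a single chassis is attractive for VDI.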

HPE
To be completely transparent, I’ve tried to interpret HPE’s QuickSpecs documentation on how GPU support is structured, but it isn’t clear to me. It appears that you’ll need to use a PCIe expansion module if you are using GPUs on HPE Synergy. If you know with certainty, or know of content that clarifies this, please reach out to me on Twitter or via email (bladesmadesimple AT gmail.com) and I’ll correct this section.

Model | # of GPUs Supported | Supported GPUs
HPE Synergy 480 Gen10 | Up to 4 with PCIe Expansion Module | NVIDIA T4, NVIDIA A40
HPE Synergy 480 Gen10 Plus | Up to 4 with PCIe Expansion Module; up to 8 small form factor* with PCIe Expansion Module | NVIDIA T4*, NVIDIA A2*, NVIDIA A10, NVIDIA A40

*Small form factor GPUs

Lenovo

I don’t think Lenovo has any GPU support on its blade servers. I dug into their GPU summary PDF, and it doesn’t appear to show any support.

Conclusion

In summary, the options for GPUs on blade servers are very limited. For integrated GPUs, Cisco has the advantage, followed by HPE (at the sacrifice of a blade server slot). For non-integrated GPUs, Dell’s MX is going to be your best option. Unfortunately, the reality is that not every workload fits on a blade server. Perhaps if GPUs are needed in your environment, you should consider rack servers. Yes, that’s blasphemous coming from a “blade server blog,” but the reality is that sometimes rack servers are a better choice.

Kevin Houston is the founder of BladesMadeSimple.com. With over 24 years of experience in the x86 server marketplace, Kevin has a vast array of competitive x86 server knowledge and certifications, as well as an in-depth understanding of VMware virtualization. He has worked at Dell Technologies since August 2011 and is a Principal Engineer supporting the East Enterprise Region, as well as a CTO Ambassador for the Office of the CTO at Dell Technologies. #IWork4Dell


Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer. No compensation has been provided for any part of this blog.