In the past, a Graphics Processing Unit (GPU) was associated with design workloads, like CAD modeling for automobile manufacturing. Over the past two or three years, however, organizations have realized that GPUs offer value beyond Artificial Intelligence (AI) and Machine Learning (ML). In fact, a large share of GPU adoption today revolves around Virtual Desktop Infrastructure (VDI). Organizations have realized that VDI running Windows 10 benefits from GPU acceleration, so GPUs are quickly becoming a requirement for VDI. When you look at a “modular infrastructure” (aka blade server) environment, packing multiple servers into a small footprint is ideal for VDI. Therefore, in today’s blog post, I’m going to review what each blade server vendor offers for GPU options.
As I typically do, I’m going in alphabetical order, so our friends at Cisco get the first listing. If you’ve read my blog at any point in the past 8 years, you know I work for Dell Technologies – so I’m no expert on Cisco. However, it doesn’t take a rocket scientist to Google “Cisco UCS GPU” to reveal Cisco’s current GPU strategy on their blade servers (I emphasize “blade” because the Cisco UCS C-Series is rack based, which doesn’t fit my theme…) Cisco does a fantastic job promoting what they support. Based upon the UCS B200 M5 Blade Server Spec Sheet from a week ago, Cisco’s blade servers support the NVIDIA P6. The B200 M5 supports one GPU in the front and/or one in the rear, for a total of 2 GPUs, and the B480 M5 supports a total of 4 GPUs.
The PowerEdge MX architecture currently relies on 3rd-party support for GPUs via Amulet Hotkey; however, they’re doing something really unique. Amulet developed a shelf that fits within the Dell EMC PowerEdge MX Fabric B. Each shelf holds up to 8 GPUs, so with a GPU shelf in both Fabric B1 and B2, the chassis provides up to 2 GPUs per MX740c blade server (or 4 GPUs per MX840c). The most interesting part of this design is that it utilizes the PCIe version of the NVIDIA T4 GPU – a more current and robust GPU model from NVIDIA.
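To make the density math concrete, here’s a quick back-of-the-napkin sketch. The 8-GPUs-per-shelf figure comes from the description above; the 8-slot MX7000 chassis count and the helper name are my own framing, not an official Dell EMC sizing tool.

```python
# GPUs available per blade when Amulet Hotkey shelves occupy Fabric B1 and B2.
# Shelf capacity (8 GPUs) is from the post; chassis slot counts are assumptions.
GPUS_PER_SHELF = 8

def gpus_per_blade(shelves: int, blades_per_chassis: int) -> float:
    """Divide the total shelf GPU count evenly across the blades in one chassis."""
    return (shelves * GPUS_PER_SHELF) / blades_per_chassis

# An MX7000 chassis has 8 single-width slots:
print(gpus_per_blade(2, 8))  # 8 x MX740c (single-width): 2.0 GPUs per blade
print(gpus_per_blade(2, 4))  # 4 x MX840c (double-width): 4.0 GPUs per blade
```

The takeaway: doubling the blade width halves the blade count per chassis, which is why the double-width MX840c gets twice the GPUs of the MX740c from the same two shelves.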
HPe’s architecture uses a mezzanine slot on each blade server, but also offers access to a PCIe card in an “expansion module”; that device takes up a blade slot, though, so it can limit the number of GPUs used. Here are the options HPe supports:
- NVIDIA Quadro M3000SE (mezzanine slot)
- NVIDIA Tesla P6 (via mezzanine slot or Multi MXM PCIe Expansion Module)
(NOTE – please correct me if any of this is incorrect. HPe’s public information is VERY confusing.)
I don’t think Lenovo has any GPU support on their blade servers. I dug into their GPU summary PDF, and it doesn’t appear to show any blade server support.
In summary, the options for GPUs on blade servers are very limited. Not every workload fits on a blade server, so if GPUs are needed in your environment, you should consider rack servers. Yes, that’s blasphemous coming from a “blade server blog,” but the reality is, sometimes rack servers are the better choice.
Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com. He has over 20 years of experience in the x86 server marketplace. Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization. Kevin has worked at Dell EMC since August 2011 as a Server Sales Engineer covering the Global Enterprise market from 2011 to 2017 and now works as a Principal Engineer and Chief Technical Server Architect supporting the Central Enterprise Region at Dell EMC.
Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer. No compensation has been provided for any part of this blog.