GPUs and Blade Servers – A Good Idea?

UPDATED 9.23.2016 When looking at the blade server market, the options for GPUs are very limited – in fact, only a couple of vendors offer them (more on this later).  So you have to ask: is it a good idea to put a GPU in a blade server?  Let’s review and find out.

GPU Use Cases

One of the first things I wanted to find out is “why” someone would want a GPU.  I’m not going to pretend to be an expert on GPU acceleration, so I asked my peers within Dell EMC for their observations on where GPUs are being used (regardless of form factor).  Aside from the obvious virtual desktop and HPC applications, other use cases included the following (a quick sketch of what GPU offload looks like follows the list):

  • VDI workstations
  • Machine learning
  • Financial calculations/modeling
  • OpenGL applications such as SolidWorks (education industry)
  • AutoCAD (education industry)
  • ArcGIS (education industry)
  • Graphics editing
  • Video editing (broadcast industry)
  • 2D/3D modeling
  • Data analytics
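
To make the “why” concrete, here is a minimal sketch of what GPU offload looks like in Python.  It uses the CuPy library, which mirrors the NumPy API but runs on an NVIDIA GPU; CuPy and the matrix sizes are my own illustrative choices, since the post doesn’t reference any particular framework:

```python
# Minimal sketch of GPU offload: the same matrix multiply on the CPU
# (NumPy) and on the GPU (CuPy). Requires a CUDA-capable GPU and the
# cupy package; library and sizes are illustrative choices, not from
# the original post.
import time

import numpy as np
import cupy as cp

N = 4096  # large enough that the GPU's parallelism pays off

# CPU version
a_cpu = np.random.rand(N, N).astype(np.float32)
b_cpu = np.random.rand(N, N).astype(np.float32)
t0 = time.perf_counter()
c_cpu = a_cpu @ b_cpu
cpu_secs = time.perf_counter() - t0

# GPU version: same API, but the arrays live in GPU memory
a_gpu = cp.asarray(a_cpu)
b_gpu = cp.asarray(b_cpu)
t0 = time.perf_counter()
c_gpu = a_gpu @ b_gpu
cp.cuda.Stream.null.synchronize()  # wait for the asynchronous GPU work
gpu_secs = time.perf_counter() - t0

print(f"CPU: {cpu_secs:.3f}s  GPU: {gpu_secs:.3f}s")
```

The same pattern (move the data to the GPU, run a massively parallel computation, synchronize) underlies most of the use cases above, from machine learning training to financial modeling.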

As a side note, I found it very challenging to determine the adoption rate of GPU accelerators in servers, as most market data is focused on the PC/gaming market.  Even reviewing NVIDIA and AMD market share doesn’t help, since they don’t break out server-based GPUs – at least not that I could find.

GPU Options in the Blade Market

If you own blade servers, you know that the likelihood of finding a GPU on a mezzanine card is very slim.  From my research, none of the Tier 1 blade server vendors (Cisco, Dell EMC, HPE, Lenovo) offers a snap-in GPU card; however, there are dedicated models offering GPU accelerators:

  • HPE ProLiant WS460c Gen9 Graphics Server Blade – built on the ProLiant BL460c Gen9, this server comes with several GPU options, including an option to put up to 3 GPUs per server (48 per ProLiant C7000 chassis) via an expansion slot that occupies a server bay.  Based on the QuickSpecs, the available GPU options include:
    • AMD FirePro S7100X graphics
    • NVIDIA Tesla M6 graphics
    • NVIDIA Quadro K3100M graphics (single-card configuration only)
    • AMD FirePro S4000X graphics (single- or dual-card configurations)
    • NVIDIA GRID K1 GPU adapter (via expansion blade only)
    • NVIDIA GRID K2 GPU adapter (via expansion blade only)
    • NVIDIA Quadro M5000 (double-width PCIe x16 in graphics expansion blade)
    • NVIDIA Quadro M6000 (double-width PCIe x16 in graphics expansion blade)
    • NVIDIA Quadro K6000 (double-width PCIe x16 in graphics expansion blade)
  • HPE Synergy 480 Gen9 Compute Module – no, I don’t know when this will start shipping; however, the QuickSpecs document advertises an NVIDIA Tesla M6 graphics adapter as an option, although it’s not clear how many will fit in a single compute module.
  • Amulet Hotkey DXM630 – this is a unique offering.  Dell EMC doesn’t offer a dedicated blade server for GPU use; however, one of their partners, Amulet Hotkey, does.  This server is built on the PowerEdge M630 blade server, but Amulet Hotkey has added its secret-sauce capability to support up to 2 of the following GPUs per server (or 32 per PowerEdge M1000e chassis):
    • AMD FirePro S4000X
    • NVIDIA Quadro K2200M
    • NVIDIA Quadro K5100M
  • UPDATED – Dell EMC PowerEdge VRTX with M630 Blade Servers – I’m embarrassed to admit that until I received a comment (thanks, Bruno Daubié) I had forgotten about Dell EMC’s VRTX platform for GPU use.  The VRTX chassis can support three single-wide GPGPU cards at 150W each, or one double-wide GPGPU card at 225W plus one single-wide GPGPU card at 150W, which are then mapped to 1 to 3 servers.  The VRTX chassis is 5U, so you’re not going to get a dense offering, BUT it is technically a GPU option for blade servers.  Here are the GPU options available:
    • AMD W7000
    • NVIDIA GRID K2
  • UPDATE 2 – Cisco UCS B200 M4 – reader Scott Garee pointed out that Cisco does have a GPU option.  This is an MXM form factor GPU on a mezzanine riser, so it can be deployed at one GPU per blade.  The GPU offered is listed as an “M6 GPU,” so I’m assuming it’s an NVIDIA Tesla M6.

NOTE: There may be configuration limits and additional license requirements on some of the above offerings, so please contact the vendor for specifics.

Additional note: I could not find any options for Lenovo.  If I missed something, let me know.

GPUs and Blade Servers – A Good Idea?
Now to the point of the blog post.  Is it a good idea to run GPUs on blade servers?  Well, that’s a loaded question.  Like most situations, it depends – so let me walk through the pros and cons.

The Pros

  • fits within your existing blade footprint, so it comes with all of the benefits of using blade servers (easy to replace, space savings, yada, yada, yada)

The Cons

  • could (and probably will) drive up power usage (see the back-of-the-envelope sketch after this list)
  • limits your ability to re-provision.  What I mean by this is that in each case of blade server GPU availability, the option is an “appliance-like” design that isn’t meant to be repurposed.  Swapping out the mezzanine card and turning it into a “normal” server probably isn’t supported
  • options for GPUs are limited compared to what is available in the rack server market, so users have to make what is available work for their use cases
  • the future.  Before I clarify this, I’ll state that I don’t know anything definitive about future Intel server CPUs.  Now that the CYA statement is covered: what I’m hearing about the next generation of Intel server CPUs is that they will have more cores and draw more power.  This will likely put design limitations on future GPU appliance blade server models.  Again, this is just speculation; I know nothing.  If it is true, it will put users in the position of having to move to another server model type for their GPU needs
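
To put a rough number on the power concern, here’s a back-of-the-envelope sketch in Python.  The 150W GPU figure comes from the VRTX specs above; the blade count and the per-blade baseline draw are my own illustrative assumptions, not vendor numbers:

```python
# Back-of-the-envelope: extra chassis power from adding GPUs to blades.
# 150 W per single-wide GPGPU is taken from the VRTX specs above; the
# blade count and per-blade baseline draw are illustrative assumptions.
GPU_WATTS = 150          # single-wide GPGPU (per the VRTX specs above)
BLADES_PER_CHASSIS = 16  # e.g., a full half-height chassis (assumption)
BASE_BLADE_WATTS = 350   # rough per-blade draw without a GPU (assumption)

base_total = BLADES_PER_CHASSIS * BASE_BLADE_WATTS
with_gpus = BLADES_PER_CHASSIS * (BASE_BLADE_WATTS + GPU_WATTS)

print(f"Baseline chassis draw: {base_total / 1000:.1f} kW")
print(f"With one GPU per blade: {with_gpus / 1000:.1f} kW "
      f"(+{(with_gpus - base_total) / 1000:.1f} kW)")
```

Even one modest GPU per blade adds kilowatts at the chassis level, which is why power and cooling budgets are usually the first thing to check.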

I’m trying to be unbiased here, but based on what I can think of, there are more cons than pros to using GPUs on blade servers.  So what’s the alternative?  Of course, a traditional 2U rack server with high-performing GPUs is an obvious option, but an alternative dense option is a 1U rack server with 4 x GPUs like the 1U PowerEdge C4130.  While it may not be as dense as the HPE ProLiant WS460c, it could potentially offer more compute and memory per rack unit, since each server would have 2 x Intel Xeon E5-2600 v4 CPUs and up to 1TB of RAM (a rough density comparison follows below).  While it’s not likely to change the world, it’s food for thought.
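
For the density comparison, here’s a quick sketch using the numbers already in this post: 48 GPUs per 10U C7000 chassis from the HPE section above, versus 4 GPUs per 1U C4130.  (Real-world density will also depend on power and cooling limits.)

```python
# Rough GPUs-per-rack-unit comparison using the figures cited above.
# Actual achievable density also depends on power and cooling limits.
options = {
    "HPE WS460c in a 10U C7000": {"gpus": 48, "rack_units": 10},
    "Dell PowerEdge C4130 (1U)": {"gpus": 4,  "rack_units": 1},
}

for name, o in options.items():
    print(f"{name}: {o['gpus'] / o['rack_units']:.1f} GPUs per rack unit")
```

The blade option edges out the rack server on raw GPU density, but the C4130 pairs every 4 GPUs with two full CPU sockets and up to 1TB of RAM, which is the compute-and-memory-per-rack-unit trade-off described above.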


Let me know if you think I’ve missed some points, some key technologies, or anything related to GPUs and blade servers in the comments below or on Twitter (@Kevin_Houston).  Thanks for reading.


Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com.  He has over 20 years of experience in the x86 server marketplace.  Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization.  Kevin has worked at Dell EMC as a Server Sales Engineer covering the Global Enterprise market since August 2011.


Disclaimer: The views presented in this blog are personal views and may or may not reflect any of the contributors’ employer’s positions. Furthermore, the content is not reviewed, approved or published by any employer.


4 thoughts on “GPUs and Blade Servers – A Good Idea?”

  1. Bruno Daubié

    You missed an option within the Dell EMC portfolio. You can add up to 3 GPGPUs in a VRTX chassis, and map them to the M620/M630 servers inside. This is mainly used for VDI desktop acceleration.

  2. Kevin Houston

    Thanks, Bruno! I totally missed that. I’ll chalk it up to being a long day and a late night… Anyhow, I’ve updated the blog post to reflect it. I appreciate you pointing it out and for reading the blog.

  3. Scott Garee

    You also missed the option for the Nvidia M6 on the Cisco UCS B200M4 blade.
    See the spec sheet at http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/b200m4-specsheet.pdf

    This is an MXM form factor GPU on a Mezzanine riser, so can be deployed at one GPU per blade.

    There are customers who have built a blade based architecture who prefer the option to include GPU without deviating from their chosen platform (even as easy as it is to do so with UCS.) You covered the use cases thoroughly.
