Details on the Dell PowerEdge M420 Blade Server

Dell officially released the PowerEdge M420 blade server today, so I'll provide you with some details about how it works – especially in regard to connectivity to the I/O modules.  If you are interested in blade server density, you'll want to read this post.

At only 97.5mm (3.83 inches) tall, the PowerEdge M420 is the industry's first "quarter-height" blade server.  Each PowerEdge M420 blade server has up to 2 x Intel Xeon E5-2400 CPUs (up to 8 cores, 2.3GHz), 6 DIMM slots (for up to 96GB of memory at 1600MHz), 2 internal SD slots, a dual-port 10Gb Broadcom 57810s onboard NIC, and 1 mezzanine expansion slot.  Up to 32 x M420 blade servers will fit within the M1000e chassis, making it a very dense blade offering.

Internal View - Dell PowerEdge M420

 

If you are wondering how you can get a quarter-height blade into an M1000e chassis, the answer is "you can't"… without a sleeve.  The sleeve is a full-height module that holds up to 4 x M420 blade servers and acts as an extension of the M1000e midplane, allowing the M420s to connect to the chassis infrastructure.  Only one sleeve is required per 4 M420 blade servers, and it can be ordered with your M1000e chassis for a few Benjamin Franklins.  Although the sleeve occupies a full-height slot (1/8th of the M1000e chassis), it does not prevent you from placing a half-height server like the PowerEdge M620 next to it.  If, however, you wanted a chassis full of M420 blade servers, you would need 8 sleeves to hold them all.
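
To make the slot math concrete, here's a quick Python sketch of the sleeve arithmetic described above, assuming the M1000e's standard layout of 8 full-height (16 half-height) slots:

import math

M420_PER_SLEEVE = 4     # each full-height sleeve holds up to 4 x M420
HALF_HEIGHT_SLOTS = 16  # an M1000e has 8 full-height slots = 16 half-height slots

def sleeves_needed(m420_count):
    """Sleeves required to house a given number of M420 blades."""
    return math.ceil(m420_count / M420_PER_SLEEVE)

def free_half_height_slots(m420_count):
    """Half-height slots (e.g. for M620s) left once the sleeves are installed."""
    return HALF_HEIGHT_SLOTS - 2 * sleeves_needed(m420_count)

print(sleeves_needed(32))          # 8 sleeves for a chassis full of M420s
print(free_half_height_slots(12))  # 3 sleeves used, 10 half-height slots remain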

Understanding the I/O Connectivity

To be honest – understanding how the M420 connects to the chassis I/O modules can be a bit tricky, so I'll try to make it simple for you.  Here's a quick decoder ring for you to reference.  Names like "Blade a" refer to the internal view picture above.

[Image: PowerEdge M420 I/O connectivity decoder ring]

 

Fabric A

If you are not familiar with how Dell's blade servers typically connect to the M1000e blade chassis, please check out one of my earlier posts.  Traditionally, a Dell blade server's onboard network adapters map to Fabric A, with the first port going to the I/O module in Bay A1 of the M1000e chassis and the second port going to the module in Bay A2.  This is not the case with the PowerEdge M420.  While its onboard network adapters (LOMs) still connect to Fabric A, they alternate how they connect.  Here's what this fabric looks like:

[Image: PowerEdge M420 Fabric A connectivity]
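
To make the alternating pattern easier to reason about, here's a minimal Python sketch of one plausible reading of the mapping. The per-slot assignments below (blades "a"/"c" versus "b"/"d") are assumptions for illustration only; the decoder ring above is the authoritative source.

# Illustrative only: this assumes blades "a" and "c" in a sleeve wire LOM1 to
# I/O module A1 and LOM2 to A2, while blades "b" and "d" reverse the order.
FABRIC_A_MAP = {
    "a": {"LOM1": "A1", "LOM2": "A2"},
    "b": {"LOM1": "A2", "LOM2": "A1"},
    "c": {"LOM1": "A1", "LOM2": "A2"},
    "d": {"LOM1": "A2", "LOM2": "A1"},
}

def iom_for(blade, lom_port):
    """Return the Fabric A I/O module bay a given LOM port connects to."""
    return FABRIC_A_MAP[blade][lom_port]

print(iom_for("b", "LOM1"))  # "A2" – unlike a half-height blade, where LOM1 always lands on A1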

A few things to note with Fabric A.  First – for full functionality, a 32-port Ethernet module is required.  As previously noted, the M1000e chassis can hold up to 32 M420 blade servers, so if you place an Ethernet switch designed to work with 16 servers into the M1000e chassis, only one of the two onboard Ethernet ports on each M420 will be active.  For example, if you use a pair of Dell PowerConnect M8024-k modules in Fabric A, each M420 blade server will have 1 x 10GbE port active.  In contrast, if you use a pair of Dell PowerConnect M6348 Ethernet modules, each server will have both onboard 10Gb ports available for use (running as 1Gb NICs).
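
Here's a rough sketch of the resulting port-count rule, assuming bays A1 and A2 hold matching modules and the chassis is fully loaded with 32 M420s:

def active_lom_ports_per_m420(internal_ports_per_iom):
    """How many of an M420's two LOM ports are usable, given the number of
    internal (server-facing) ports on each Fabric A I/O module.

    A fully loaded chassis has 32 x M420, i.e. 64 LOM ports split across
    bays A1 and A2, so each bay needs 32 internal ports to light them all up.
    """
    if internal_ports_per_iom >= 32:
        return 2   # e.g. M6348 (1GbE) or Force10 MXL (10GbE)
    return 1       # e.g. M8024-k or M6220 with 16 internal ports

print(active_lom_ports_per_m420(16))  # 1 -> one 10GbE port per M420 on the M8024-k
print(active_lom_ports_per_m420(32))  # 2 -> both ports active, at the switch's speed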

The PowerEdge M420 comes with a dual-port 10Gb Broadcom 57810s onboard NIC capable of running at 1GbE or 10GbE.  Features of the Broadcom 57810s include high-performance hardware offloads for iSCSI and FCoE storage protocols, as well as NPAR – the ability to partition each 10GbE port into 4 partitions, each with a different personality.  Check out this previous blog post for details.  Full details on the Broadcom 57810s can be found at http://www.dell.com/us/business/p/broadcom-57810s-dual-port/pd.
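
If NPAR is new to you, the following purely illustrative Python sketch models the idea: one physical 10GbE port presented as four partitions, each with its own personality and a guaranteed slice of bandwidth.  The partition names, personalities and percentages below are invented for this example; the real settings are configured in the adapter's firmware/BIOS setup, not in code.

from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    personality: str       # e.g. "NIC", "iSCSI", "FCoE"
    min_bw_pct: int        # guaranteed share of the 10GbE port
    max_bw_pct: int = 100  # partitions can typically burst to line rate

# Hypothetical layout of one 10GbE LOM port split into 4 NPAR partitions
port1_partitions = [
    Partition("mgmt",     "NIC",   min_bw_pct=10),
    Partition("vm_data",  "NIC",   min_bw_pct=40),
    Partition("iscsi",    "iSCSI", min_bw_pct=30),
    Partition("live_mig", "NIC",   min_bw_pct=20),
]

assert sum(p.min_bw_pct for p in port1_partitions) == 100
for p in port1_partitions:
    print(f"{p.name:9s} {p.personality:6s} guaranteed {p.min_bw_pct}% of 10Gb")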

Fabrics B and C

Fortunately, Fabrics B and C are pretty straightforward – sort of.  Since the PowerEdge M420 blade server has only 1 mezzanine card slot, it does not need to connect to both Fabric B and Fabric C at the same time, so the engineers at Dell came up with a creative way to give each M420 connectivity to the I/O modules.  If you read my decoder key above, you've probably already figured out the secret, but in case you did not, here's a view of how it works:

[Image: PowerEdge M420 Fabric B and C connectivity]

If you had Ethernet modules in Fabric B and Fibre Channel modules in Fabric C, only 2 of the 4 M420 blade servers in a sleeve would need a Fibre Channel mezzanine card (the other two would need an Ethernet mezzanine card).  In a chassis full of M420s, 16 could have additional Ethernet ports in the mezzanine slot and 16 could have Fibre Channel ports, as the sketch below illustrates.
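
Here's a small, illustrative Python sketch of that split.  Which two blades in a sleeve land on Fabric B versus Fabric C is fixed by the sleeve wiring shown in the decoder ring; the "a"/"c" versus "b"/"d" assignment below is an assumption for illustration.

# Illustrative only: assumes blades "a"/"c" present their mezzanine to Fabric B
# and blades "b"/"d" present theirs to Fabric C.
MEZZ_FABRIC_MAP = {"a": "B", "b": "C", "c": "B", "d": "C"}

def mezz_card_needed(blade, fabric_b_tech, fabric_c_tech):
    """Pick the mezzanine card type a blade needs, given what lives in B and C."""
    return fabric_b_tech if MEZZ_FABRIC_MAP[blade] == "B" else fabric_c_tech

# Ethernet modules in Fabric B, Fibre Channel modules in Fabric C:
for blade in "abcd":
    print(blade, "->", mezz_card_needed(blade, "10GbE Ethernet", "8Gb Fibre Channel"))
# Two blades per sleeve end up with Ethernet mezz cards, two with FC mezz cards.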

Here are the current mezzanine card options for the PowerEdge M420:

  • Intel i350 Quad Port 1Gb
  • Intel® X520-x/k Dual Port 10Gb
  • Broadcom 57810-k Dual Port 10Gb KR CNA
  • Broadcom 5719 Quad Port 1GbE
  • Brocade BR1741M-k Dual Port 10Gbps CNA
  • QLogic QME8262-k Dual Port 10Gb KR CNA
  • Emulex LPE1205-M 8Gbps Fibre Channel
  • QLogic QME2572 8Gbps Fibre Channel

You can find out more about these mezzanine cards, as well as the I/O Modules available for the Dell PowerEdge M1000e blade chassis at http://www.dell.com/us/enterprise/p/poweredge-mseries-blade-interconnects.

 

Dell PowerEdge M420 – By the Numbers

When you look at the numbers of what you can do with the Dell PowerEdge M420, it's clearly suited for environments that need a lot of CPU density:

 

[Images: Dell PowerEdge M420 by-the-numbers tables]

One thing to note – I've listed the maximum memory on the M420 as 96GB.  That is using 16GB DIMMs across the 6 available DIMM slots.  When 32GB DIMMs become available, the maximum memory per M420 will increase to 192GB (which amounts to 6TB per M1000e and 24TB per 42u rack).
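
For reference, here's the worked arithmetic behind those figures as a quick Python sketch (the four-chassis-per-42U-rack assumption follows from the 6TB per chassis and 24TB per rack numbers above):

BLADES_PER_CHASSIS = 32  # M420s per M1000e
CHASSIS_PER_RACK   = 4   # four 10U M1000e chassis in a 42U rack
CPUS_PER_BLADE     = 2
CORES_PER_CPU      = 8   # Xeon E5-2400, up to 8 cores
DIMM_SLOTS         = 6

def totals(dimm_size_gb):
    """Per-blade, per-chassis and per-rack totals for a given DIMM size."""
    mem_per_blade = DIMM_SLOTS * dimm_size_gb
    return {
        "cores_per_chassis": BLADES_PER_CHASSIS * CPUS_PER_BLADE * CORES_PER_CPU,
        "mem_per_blade_gb": mem_per_blade,
        "mem_per_chassis_tb": mem_per_blade * BLADES_PER_CHASSIS / 1024,
        "mem_per_rack_tb": mem_per_blade * BLADES_PER_CHASSIS * CHASSIS_PER_RACK / 1024,
    }

print(totals(16))  # 512 cores/chassis, 96GB/blade, 3TB/chassis, 12TB/rack
print(totals(32))  # 512 cores/chassis, 192GB/blade, 6TB/chassis, 24TB/rack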

Additional Resources

Finally, I’ll leave you with some places to find additional information about the PowerEdge M420. 

 

Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com.  He has over 15 years of experience in the x86 server marketplace.  Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization.  Kevin works for Dell as a Server Specialist covering the Global 500 East market.

29 thoughts on “Details on the Dell PowerEdge M420 Blade Server”

  1. Ed Swindelles

    Thanks for all of the detail, I was wondering how the mezz slots worked on these.

    What do you think the target applications would be? I can see some forms of HPC benefiting.

  2. Pingback: Kevin Houston

  3. Pingback: Ed Swindelles

  4. Pingback: Ed Swindelles

  5. Pingback: Marc Schreiber

  6. Pingback: Simon Grills

  7. Pingback: Didier Wenger

  8. Pingback: Richard Nicholson

  9. Kevin Houston

    Thanks for the comment about #Dell M420 workloads. It was designed for HPC, cloud and other high node density uses where node count and computational density are more important than node scalability. Other than that, you may find that it can be used for virtualization (with 2 x CPUs, 96GB RAM @ 1333MT/s and 40Gb of I/O), or you may find it useful at the edge of the network (i.e. web servers). Thanks for reading!

  10. Andreas Erson

    So in essence – to utilize all 10GbE ports on Fabric A, you need the upcoming Force10 MXL I/O modules, which have 32 internal 10GbE ports per module.

  11. Kevin Houston

    Thanks for the comment, Andreas. Yes, to have all 10GbE ports on each M420 active, you'll need to use the Dell Force10 MXL Ethernet switch module (due to come out mid-summer).

  13. Pingback: Kong Yang

  14. Pingback: Omar Bouattay

  15. Kevin Houston

    Dan – thanks for the question. The M420 sleeve is removable, just like a blade server; Dell has some great videos of this at http://youtu.be/NLU45mGMeN4. Regarding the question about LOM 1 – if all of the LOM port 1s on the M420s went to a single I/O module in Fabric A, then only half of the M420s would have LOM port 1 connected when using a 16-port switch (like the Dell PowerConnect M6220). Dell engineering therefore designed it so that the LOM port 1s are split: half go to I/O module A1 and the other half go to A2. Hope that makes sense. Thanks for reading, and I appreciate your help!

  16. Pingback: Kevin Houston

  17. Pingback: Sarah Vela

  18. Pingback: Kevin Houston

  19. Pingback: Dennis Smith

  20. Pingback: DellTechCenter

  21. Pingback: Jaime Eduardo Rubio

  22. Pingback: Kevin Houston

  23. Pingback: Ryan Devereaux

  24. Pingback: DellテックセンターJapan

  25. Pingback: Masahiko Max Koizumi

  26. Pingback: SETAKA Takao

  27. Pingback: Jas

Comments are closed.