Dell officially released the PowerEdge M420 blade server today, so I'll provide some details about how it works – especially in regard to connectivity to the I/O modules. If you are interested in blade server density, you'll want to read this post.
At only 97.5mm (3.83 inches) tall, the PowerEdge M420 is the industry's first "quarter-height" blade server. Each PowerEdge M420 blade server has up to 2 x Intel Xeon E5-2400 CPUs (up to 8 cores, 2.3GHz), 6 DIMM slots (for up to 96GB of memory at 1600MHz), 2 internal SD slots, a Dual Port 10Gb Broadcom 57810s onboard NIC, and 1 x mezzanine expansion slot. Up to 32 x M420 blade servers will fit within the M1000e chassis, making it a very dense blade offering.
If you are wondering how you can get a quarter-height blade into an M1000e chassis, the answer is "you can't"… without a sleeve. The sleeve is a full-height module that holds up to 4 x M420 blade servers and acts as an extension of the M1000e midplane, allowing the M420s to connect to the infrastructure. Only one sleeve is required per 4 M420 blade servers, and it can be ordered with your M1000e chassis for a few Benjamin Franklins. Although the sleeve occupies a full-height slot (1/8th of the M1000e chassis), it does not prevent you from placing a half-height server like the PowerEdge M620 next to it. If, however, you wanted a chassis full of M420 blade servers, you would need 8 sleeves to hold them all.
Understanding the I/O Connectivity
To be honest, understanding how the M420 connects to the chassis I/O modules can be a bit tricky, so I'll try to make it simple for you. Here's a quick decoder ring for you to reference; names like "blade a" refer to the picture above.
If you are not familiar with how Dell's blade servers typically connect to the M1000e blade chassis, please check out one of my earlier posts. Traditionally, Dell blade servers' onboard network adapters map to Fabric A, with the first port going to the I/O module in Bay A1 of the M1000e chassis and the second port going to the module in Bay A2. This is not the case with the PowerEdge M420. While they still connect to Fabric A, the onboard network adapters (LOMs) on the M420 alternate how they connect. Here's what this fabric looks like:
A few things to note with Fabric A. First, for full functionality, a 32-port Ethernet module is required. As previously noted, the M1000e chassis can hold up to 32 M420 blade servers, so if you place an Ethernet switch designed to work with 16 servers into the M1000e chassis, only one of the two onboard Ethernet ports on each M420 will be active. For example, if you use a pair of Dell PowerConnect M8024-k modules in Fabric A, each M420 blade server will have 1 x 10GbE port active. In contrast, if you use a pair of Dell PowerConnect M6348 Ethernet modules, each server will have both ports available for use (as 1Gb NICs).
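The 32-port requirement comes down to simple port counting: 32 blades with 2 LOM ports each need 64 internal switch ports across bays A1 and A2. Here's a rough Python sketch of that aggregate arithmetic (the even-split assumption is mine, a simplification of the actual decoder-ring mapping):

```python
# Aggregate port math behind the 32-port I/O module requirement.
# Assumption: internal switch ports are divided evenly across blades,
# so active LOM ports per blade = total internal ports / blade count.

BLADES_PER_CHASSIS = 32      # M420s in a fully loaded M1000e
LOM_PORTS_PER_BLADE = 2      # dual-port Broadcom 57810s onboard NIC

def active_lom_ports(internal_ports_per_module, modules_in_fabric_a=2):
    """Active onboard ports per M420 for a given internal switch port count."""
    total_internal = internal_ports_per_module * modules_in_fabric_a
    return min(LOM_PORTS_PER_BLADE, total_internal // BLADES_PER_CHASSIS)

# A pair of 16-port modules (e.g. M8024-k): only 1 of 2 LOMs active per blade.
print(active_lom_ports(16))  # 1
# A pair of 32-port modules (e.g. M6348): both LOM ports active per blade.
print(active_lom_ports(32))  # 2
```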
The PowerEdge M420 comes with a Dual Port 10Gb Broadcom 57810s onboard NIC capable of running at 1GbE or 10GbE. Features of the Broadcom 57810s include high-performance hardware offload for iSCSI and FCoE-ready storage protocols, as well as NPAR – the ability to partition each 10GbE port into 4 partitions, each with a different personality. Check out this previous blog post for details. Full details on the Broadcom 57810s can be found at http://www.dell.com/us/business/p/broadcom-57810s-dual-port/pd.
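To make the NPAR idea concrete, here's an illustrative model of one 10GbE port carved into 4 partitions. The personalities and bandwidth weights below are hypothetical examples; real NPAR configuration happens in the adapter firmware/BIOS setup, not in Python:

```python
# Illustrative NPAR model: one physical 10GbE port split into 4 partitions.
# Personalities and minimum-bandwidth weights here are made-up examples.

PORT_SPEED_GBPS = 10

npar_partitions = [
    {"partition": 1, "personality": "NIC",   "min_bw_pct": 25},
    {"partition": 2, "personality": "iSCSI", "min_bw_pct": 25},
    {"partition": 3, "personality": "FCoE",  "min_bw_pct": 25},
    {"partition": 4, "personality": "NIC",   "min_bw_pct": 25},
]

def guaranteed_bandwidth(partition):
    """Minimum bandwidth (Gbps) a partition is guaranteed under contention."""
    return PORT_SPEED_GBPS * partition["min_bw_pct"] / 100

# The minimum-bandwidth weights must cover the whole port.
assert sum(p["min_bw_pct"] for p in npar_partitions) == 100

for p in npar_partitions:
    print(p["partition"], p["personality"], guaranteed_bandwidth(p), "Gbps")
```

Each partition shows up to the OS as its own device, which is what makes a dual-port adapter behave like many NICs.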
Fabrics B and C
Fortunately, Fabrics B and C are pretty straightforward – sort of. Since the PowerEdge M420 blade server has only one mezzanine card, it does not need to connect to both Fabric B and Fabric C at the same time, so the engineers at Dell came up with a creative way to give each M420 connectivity to the I/O modules. If you read my decoder key above, you've probably already figured out the secret, but in case you did not, here's a view of how it works:
If you had Ethernet modules in Fabric B and Fibre Channel modules in Fabric C, only 2 of the 4 M420 blade servers in a sleeve would need a Fibre Channel mezzanine card (the other two would need an Ethernet mezzanine card). In a chassis full of M420s, 16 could have additional Ethernet ports in the mezzanine card slot, and 16 could have Fibre Channel ports.
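The alternation above can be sketched in a few lines of Python. Note the exact blade-to-fabric pattern comes from the decoder ring picture; here I simply assume blades alternate B, C, B, C within each sleeve for illustration:

```python
# Sketch of the Fabric B/C alternation for M420 mezzanine cards.
# Assumption: within a sleeve, blade positions alternate B, C, B, C.
# (The real pattern is defined by the decoder ring in the post.)

def mezz_fabric(blade_position):
    """blade_position: 0-3 within a sleeve (blades a-d). Hypothetical pattern."""
    return "B" if blade_position % 2 == 0 else "C"

sleeve = ["blade a", "blade b", "blade c", "blade d"]
mapping = {name: mezz_fabric(i) for i, name in enumerate(sleeve)}
print(mapping)  # two blades land on Fabric B, two on Fabric C

# Scaled to a full chassis: 8 sleeves x 2 blades per fabric = 16 each.
fabric_b_count = sum(1 for i in range(32) if mezz_fabric(i % 4) == "B")
print(fabric_b_count)  # 16
```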
Here are the current mezzanine card options for the PowerEdge M420:
- Intel i350 Quad Port 1Gb
- Intel X520-x/k Dual Port 10Gb
- Broadcom 57810-k Dual Port 10Gb KR CNA
- Broadcom 5719 Quad Port 1GbE
- Brocade BR1741M-k Dual Port 10Gb CNA
- QLogic QME8262-k Dual Port 10Gb KR CNA
- Emulex LPE1205-M 8Gbps Fibre Channel
- QLogic QME2572 8Gbps Fibre Channel
You can find out more about these mezzanine cards, as well as the I/O Modules available for the Dell PowerEdge M1000e blade chassis at http://www.dell.com/us/enterprise/p/poweredge-mseries-blade-interconnects.
Dell PowerEdge M420 – By the Numbers
When you look at the numbers behind the Dell PowerEdge M420, it's definitely suited for environments needing a lot of CPUs:
One thing to note – I've listed the maximum memory on the M420 as 96GB. That is using 16GB DIMMs across the 6 available DIMM slots. When the 32GB DIMM becomes available, the maximum memory per M420 will increase to 192GB (which amounts to 6TB per M1000e and 24TB per 42u rack).
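The memory math above is easy to verify. Here's a quick sketch, assuming 6 DIMM slots per blade, 32 blades per chassis, and four 10U M1000e chassis per 42U rack:

```python
# Checking the M420 memory arithmetic from the post.
# Assumptions: 6 DIMM slots per blade, 32 blades per M1000e,
# and four 10U chassis fitting in a 42U rack.

DIMM_SLOTS = 6
BLADES_PER_CHASSIS = 32
CHASSIS_PER_RACK = 4

def max_memory_gb(dimm_size_gb):
    """Returns (GB per blade, GB per chassis, GB per rack)."""
    per_blade = DIMM_SLOTS * dimm_size_gb
    per_chassis = per_blade * BLADES_PER_CHASSIS
    per_rack = per_chassis * CHASSIS_PER_RACK
    return per_blade, per_chassis, per_rack

print(max_memory_gb(16))  # (96, 3072, 12288) -> 96GB/blade, 3TB/chassis
print(max_memory_gb(32))  # (192, 6144, 24576) -> 192GB/blade, 6TB/chassis, 24TB/rack
```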
Finally, I’ll leave you with some places to find additional information about the PowerEdge M420.
- PowerEdge M420 Home Page – http://dell.to/PowerEdgeM420
- Dell PowerEdge M420 Systems Owner’s Manual (PDF) – View
- Dell PowerEdge M420 QR Tag videos – http://www.youtube.com/playlist?list=PL46414AA1BAA82B1A
Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com. He has over 15 years of experience in the x86 server marketplace. Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization. Kevin works for Dell as a Server Specialist covering the Global 500 East market.