It’s been a while since I’ve posted the rumours I’m hearing, so I thought I’d dig around and see what I could find out. NOTE: this is purely speculation – I have no definitive information from any vendor, so some or all of this may be wrong. Read at your own risk.
Rumour #1 – GPUs on a Blade Server
I’m hearing more and more discussion about “GPUs” being used on a blade server. Now, I have to admit, when I hear the term “GPU”, I think of Graphics Processing Unit – the type of processor that powers a high-end graphics card. So, when I hear rumours that there might be blade servers coming out that can handle GPUs, I have to wonder WHY?
Wikipedia defines a GPU as “A graphics processing unit or GPU (also occasionally called visual processing unit or VPU) is a specialized processor that offloads 3D or 2D graphics rendering from the microprocessor. It is used in embedded systems, mobile phones, personal computers, workstations, and game consoles. Modern GPUs are very efficient at manipulating computer graphics, and their highly parallel structure makes them more effective than general-purpose CPUs for a range of complex algorithms. “
NVIDIA, the top maker of GPUs, also points out on their website, “The model for GPU computing is to use a CPU and GPU together in a heterogeneous computing model. The sequential part of the application runs on the CPU and the computationally-intensive part runs on the GPU. From the user’s perspective, the application just runs faster because it is using the high-performance of the GPU to boost performance. ”
(For a cool Mythbusters video on GPU vs CPU, check out Cliff’s IBM Blog.)
So if a blade vendor put together the ability to run normal AMD or Intel CPUs in tandem with GPUs from NVIDIA – say, by using graphics cards in PCIe expansion slots – they would have a blade server ideal for running any application that benefits from high-performance computing. This seems doable today, since both HP and IBM offer PCI expansion blades; however, the rumour I’m hearing is that there is a blade server coming out that will be specifically designed for running GPUs. Interesting concept. I’m anxious to see how it will be received once it’s announced…
Rumour #2 – Another Blade Server Dedicated to Memory
My second rumour is less exciting than the first: yet another blade vendor is about to announce a blade server designed for maximum memory density. If you’ll recall, IBM has the HS22V and HP has the BL490c G6 – both half-wide blades designed for 18 memory DIMMs plus internal drives. That leaves either Cisco or Dell to be next in this rumour. Since Cisco already has the B250 blade server, which can hold 48 DIMMs, I doubt they would need to invest in designing a half-wide blade that holds 18 DIMMs, so the remaining option is Dell. What would Dell gain from introducing a blade server with high memory density? For one, it would give them an option to compete with IBM and HP in the “2 CPU, 18 memory DIMM” space. It would also help expand Dell’s blade portfolio: if you examine Dell’s current blade server offerings, you’ll see they can’t meet any requirement for large-memory environments without moving to a full-height blade.
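To put the density argument in rough numbers, here’s a quick back-of-the-envelope comparison. The DIMM counts come from the vendors’ specs mentioned above; the 8 GB-per-DIMM figure is my own assumption for illustration, since actual DIMM sizes vary.

```python
# Back-of-the-envelope memory capacity per blade.
# DIMM_GB is an assumed module size, not a vendor spec.
DIMM_GB = 8

blades = {
    "IBM HS22V (half-wide)": 18,
    "HP BL490c G6 (half-wide)": 18,
    "Cisco B250 (full-width)": 48,
}

for name, slots in blades.items():
    print(f"{name}: {slots} DIMMs x {DIMM_GB} GB = {slots * DIMM_GB} GB")
```

On these assumptions a half-wide 18-DIMM blade tops out at 144 GB, while the full-width B250 reaches 384 GB – which is why Cisco has little reason to chase this rumour.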
That’s all I have. Let me know if you hear of any other rumours.
Putting conventional GPUs in a blade isn’t trivial. If you look at the current crop of workstation graphics/HPC accelerator GPUs from NVIDIA, there are several problems:
1. They are huge, requiring a full-length PCIe slot
2. They draw 250-300 watts per GPU
3. They need dedicated power supply connections
Fitting one inside a blade (as opposed to putting it in a separate PCIe/PCI-X expansion chassis) is going to take some serious engineering work.
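The power point alone is worth a quick sanity check. The GPU wattage is from the list above; the base blade figure and the 14-blade chassis count are my own illustrative assumptions, not vendor specs.

```python
# Rough per-blade power budget if two HPC GPUs were squeezed in.
# base_blade_w and blades_per_chassis are assumptions for illustration.
base_blade_w = 350        # assumed 2-socket blade (CPUs, memory, drives)
gpu_w = 300               # high end of the 250-300 W/GPU range above
gpus_per_blade = 2
blades_per_chassis = 14   # assumed chassis capacity

total_w = base_blade_w + gpus_per_blade * gpu_w
print(f"Per-blade draw: {total_w} W vs {base_blade_w} W without GPUs")
print(f"Chassis draw:   {blades_per_chassis * total_w} W "
      f"vs {blades_per_chassis * base_blade_w} W without GPUs")
```

On these numbers a fully loaded chassis jumps from roughly 4.9 kW to over 13 kW – well beyond what today’s blade power supplies and cooling are designed for.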
Interesting rumours. In order to run GPGPUs effectively, you need a couple of things that no blade vendor can really support right now.
1) You need a high-bandwidth PCIe connection to the host, like a full x16 gen2 PCIe link. PCI-X and gen1 x4 PCIe slots are not enough to handle the bandwidth of these devices.
2) You need the capability to support very high power cards. The GPGPUs can be 250+ watts each, so you'd need supplemental power connections and much greater power delivery capability than IBM/HP's “sidecar” PCI expansion blades support today.
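For context on point 1, the raw link bandwidths can be compared directly. The per-lane PCIe rates (250 MB/s for gen1, 500 MB/s for gen2, after 8b/10b encoding) and the 64-bit/133 MHz PCI-X bus width are from the published specs.

```python
# Peak theoretical bandwidth of the interconnects mentioned above.
def pcie_mb_s(gen, lanes):
    """Per-direction PCIe bandwidth: 250 MB/s/lane gen1, 500 MB/s/lane gen2."""
    per_lane = {1: 250, 2: 500}[gen]
    return per_lane * lanes

# PCI-X 133: 64-bit (8-byte) shared bus at 133 MHz ~= 1064 MB/s total.
pci_x_133_mb_s = 8 * 133

print(f"PCI-X 133:     ~{pci_x_133_mb_s} MB/s (shared bus)")
print(f"PCIe gen1 x4:   {pcie_mb_s(1, 4)} MB/s")
print(f"PCIe gen2 x16:  {pcie_mb_s(2, 16)} MB/s")
```

So an x16 gen2 slot offers roughly 8x the bandwidth of either a gen1 x4 slot or a PCI-X 133 bus – exactly the gap the commenter is pointing at.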
maybe someone will offer these things in a blade?
On the high-memory blade, it would also be nice if someone figured out how to put 18 DIMMs in a high-density blade without requiring expensive, non-hot-plug SSDs.
Great comments, Nik. Mike from #dell mentioned the same thing shortly after you. It will be interesting to see how the blade vendors handle the power requirement. Thanks for reading!
Great comments, Mike. It will be interesting to see how #GPUs on a blade server will be handled, if indeed the rumour is true.
IBM just announced their iDataPlex GPU solution this week (2 full-power Fermi GPUs in a 2-socket, half-depth Nehalem server): 42 dual-Westmere, dual-GPU + InfiniBand/10GbE servers in a standard 42U rack footprint. No blade will be able to match that, from any vendor.
Yes, #ibm announced an iDataPlex GPU solution, but I don't focus on non-blade technology on this blog. HOWEVER, it is important to point out that iDataPlex is a “solution” that must be bought and ordered from IBM. It's not a pay-as-you-grow model – you can't buy a couple of servers and add more in the future like you can with blade servers. Great point though. Thanks for reading!