I’ve been getting some questions via email, and I’ve seen some questions asked in my LinkedIn group, “Blade Server Technologies,” so I thought I’d take a few minutes in today’s post to answer them, as well as get your feedback. Feel free to post your thoughts on these questions in the comments below.
“Hi All, this question comes from a complete blade-server dummy, so please don’t mind if my questions are just stupid. Our customer has some space issues with their server rooms and wants to swap their racks and servers for blade servers as much as they can. Naturally, they also ask if our apps can run on blade servers. Can blade servers handle high transaction capacities? Are they good for non-virtual systems?“
Okay, this question was loaded with more than one question, so first I’ll focus on the primary one: “can blade servers run apps that aren’t virtualized?” The short answer is YES, absolutely, with some caveats. Blade servers were first designed in the early 2000s, before virtualization was hot in the x86 space. Blade servers contain the same technology as your rack servers: CPUs, memory, NICs, drives, etc. The biggest difference is that blade servers share their environment. They share the power, the cooling, the management, and the I/O connectivity. In some cases, a blade server is very similar to a 1U rack server, so what you can do with a 1U server, you may be able to do with a blade server. That being said, contrary to some vendors’ marketing, blade servers are not meant for every workload. If you have an application that requires more than 2 local drives, or more than 3 PCIe expansion slots, then blade servers may not be suitable. YES, there are ways to provide a blade server with more than 2 local drives (like using IBM’s BladeCenter S chassis with local drives) or with more than 3 PCIe expansion slots (like the Dell PowerEdge M910), but these are exceptions to the rule.
“Who rules the blade server world? IBM? Dell? HP? Cisco?“
This is a good question that is difficult to answer. When you look at the market share numbers from IDC, you see that HP leads the pack, with IBM in second and Dell grouped in with the others. While Cisco is creating disruption in the market, they are still too new to appear on any analyst charts. My answer, based on the IDC data, would have to be HP.
I am a fan of your website and I have been following some of your reviews on blade servers, and it looks like you have been around blade servers for a while. I was wondering if you can help me make a tough decision. We have been loyal to Dell for 10 years (rack-mount servers). We have been able to get the servers we have from Dell … Today we run vSphere 4 Enterprise Plus on 6 R900s attached to a CX4-240 and a Celerra NS40. Our telecom team bought a new Cisco UCS system because they had to upgrade their old HP/Cisco server running our telephony system, and Cisco did not give them much choice. We bought two chassis and 4 blades; the rest is empty. My question is… should we start moving towards Cisco UCS, or continue doing business with Dell and get into their blade servers? I am concerned because, in the past, Cisco has given us good prices and then become one of our most expensive solutions in our datacenter. Any advice will be highly appreciated!!
This question came to me via email. I will have to admit, I have been around blades since around 2005, but I’m not an expert by any means. This website is designed to help simplify the complexity of blade servers and to be a single point of information. I try to stay as “neutral” as possible with my posts and comments, so when I get questions like this, I hesitate to provide a definitive answer, especially when it comes to talking about Cisco UCS. Cisco has probably the most devoted followers / employees of any manufacturer I have ever dealt with, and I KNOW that anything negative said about Cisco will send a flood of comments to my inbox from a certain couple of Cisco fanatics (wipe the Kool-Aid off your mouth). With that being said, my thoughts on Cisco UCS are simple: it is a great design for an environment needing more than 8 VMware vSphere hosts. In my opinion, with anything less, Cisco UCS is overkill. My reasoning is that Cisco’s UCS architecture is built in a modular approach, with each module (think of a Lego block) hosting up to 8 blade servers per chassis, and each blade chassis connecting to a single management device (the UCS 6100 Series Fabric Interconnect). If you want more detail on the Cisco UCS offering, check out this post from a few weeks ago.
I stumbled upon your blog whilst researching an issue with the setup of I/O module bays 3 and 4. In the past I’ve only ever used bays 1 and 2 on all of my BladeCenters. Now we’re starting to virtualize, and the need for more than two NICs per blade server has finally forced our hand. I’ve installed the IBM/Cisco 3012 Ethernet switch modules and everything looks like it should work, but I can’t ping the switch address. I have 3012s in other BladeCenters set up the same way; the only difference is that they are all in bays 1 and 2. Is there some trick in the IBM BladeCenter world to setting up modules in bays 3 and 4?
While I would love to be a one-stop shop for technical issues, my recommendation would be to take a look at the IBM Redbook titled IBM BladeCenter Products and Technology. You can find it under my Helpful Links tab above. One bit of advice from past experience with my own IBM BladeCenter chassis: under Advanced Management of the I/O Module in the IBM Advanced Management Module (AMM), make sure the external ports are enabled. They are disabled by default on a newly installed I/O module, which would explain why you can’t ping the switch address.
That’s it for this blog post. If you have questions about blade servers, e-mail me at BladesMadeSimple@gmail.com and I’ll see if I can help you out.