VMware vSphere 4 Lab – The work begins! – Part 1

One of the many things I am pursuing is the VMware VCP-4 certification. I have been reading Scott Lowe’s excellent book Mastering VMware vSphere 4 and the VMware documentation, and I am enrolled in the vSphere 4: Fast Track course. The key to my success with previous certifications has been lots of practice, practice, practice! So I plan to build a VMware lab and work through all the different features and capabilities.

There are many people who have built quite impressive labs. The one I am most excited about is PacketSlave’s, a lab he calls ThunderChicken. What I like about this lab is that he has taken some of the ideas from Phillip Jaenke’s BabyDragon and Simon Gallagher’s vTardis and improved slightly upon them. My lab is not yet solidified, mainly because it greatly exceeds my budget, so I am trying to figure out where to tweak things. It’s looking to be something like this:

2 Supermicro X8SIL-F motherboards
2 Intel Xeon X3450 Retail (2.66GHz, 4 cores, 8 threads)
8 Hynix 4GB DDR3-1066 ECC Registered memory modules
2 WD SiliconEdge Blue 128GB SSDs
2 WD VelociRaptor 300GB 10,000RPM hard drives
2 Lian-Li V352B MicroATX cases
2 Seasonic X Series 400W power supplies
4 GELID Solutions FN-SX12-10 120mm silent case fans
2 GELID Solutions FN-SX08-16 80mm silent case fans
5 WD 2TB Caviar Black WD2001FASS drives
1 Synology DS1010+
2 VMware ESXi 4.1

I plan to install ESXi on USB thumb drives and plug those into the internal USB ports on the Supermicro X8SIL-Fs. I am very excited about the X8SIL-F motherboard and its IPMI capabilities.

I have not decided whether I should add 4-port 1Gb network cards to these boxes or whether it makes sense to just use 802.1q tags with the dual onboard 1Gb ports. Another thought would be to put in QLogic Fibre Channel HBAs, which more than likely I will add at some point. The X8SIL-F only has 32-bit PCI and then PCIe, so to get something decent I have to use PCIe, and there really aren’t many “cheap” dual-port PCIe HBAs that I know of. So this is something else I plan to research. At least I already have a huge Fibre Channel infrastructure to connect to.
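If I go the 802.1q route, trunking VLANs onto the onboard ports is mostly a matter of tagging port groups on a vSwitch. A rough sketch from the ESXi Tech Support Mode shell is below — the vSwitch, NIC, port group names, and VLAN ID are all hypothetical placeholders, not part of my actual build:

```shell
# Create a vSwitch and uplink it to one of the onboard gigabit NICs
# (vSwitch1/vmnic1 are assumed names for illustration)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1

# Add a port group and tag it with VLAN 100 (802.1q, Virtual Switch Tagging)
esxcfg-vswitch -A "VLAN100-VMs" vSwitch1
esxcfg-vswitch -v 100 -p "VLAN100-VMs" vSwitch1

# List vSwitches and port groups to verify the VLAN tag took effect
esxcfg-vswitch -l
```

The physical switch port upstream would need to be configured as an 802.1q trunk carrying the same VLANs, which is why the dual onboard ports alone may be enough without extra NICs.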

The most expensive part of the above is the storage, the NAS/iSCSI piece specifically. I have struggled with this and continue to struggle to come up with something that is affordable and will offer good performance. Since I have an extensive storage lab, I could just use an MDS as an iSCSI/iSLB virtual target and back-end it to my JBODs. But for this I really want something with RAID, and my JBODs don’t have that. I started my search by looking at various NASes and their performance specs over at the SmallNetBuilder NAS Charts. A summary of the iSCSI write speeds is below:

NAS iSCSI write speeds

Most of the devices above that perform very well are, not surprisingly, expensive, and the ones at the bottom are a lot cheaper. A nice feature of SmallNetBuilder’s NAS Charts is that they let you view a price/performance chart as well. One of my main must-haves is that the storage is on the VMware HCL, and many of these are in fact on the HCL. I did a lot of research and was really attracted to the price of the Iomega (EMC) ix2-200d and ix4-200d models. What I don’t like is that they have pretty poor performance. I looked hard at QNAP, Synology, Thecus, and Cisco. Surprisingly, Cisco has very good performance specifications. I considered the NSS 326 quite closely, as from a performance and price perspective it’s actually quite good. The best of breed for what I am looking for, however, seems to be the Synology DS1010+.

Synology DS1010+

The DS1010+ is definitely a nice box. It has 5 drive bays, so you can do RAID5 with a hot spare and still get a decent amount of capacity and spindles out of it. I plan to populate it with 2TB drives, which would give me 6TB of usable space. All reports on Newegg and the usual suspect sites say nothing but great things about this box. Obviously, any storage device is only going to be as good as the drives inside it, so looking at such a decent box has got me looking at decent drives. The “Cadillac” drives seem to be the Western Digital WD2003FYYS, but they are quite pricey. Performance tests of 2TB drives over at Tom’s Hardware show that these drives indeed perform well. They are enterprise-grade SATA drives.
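As a quick sanity check on that 6TB figure, the arithmetic works out like this: one bay goes to the hot spare, RAID5 consumes one drive’s worth of capacity for parity, and the rest is usable. A trivial sketch using the numbers from the build above:

```shell
# RAID5 usable capacity on a 5-bay box with one hot spare and 2TB drives
BAYS=5
SPARES=1
DRIVE_TB=2

# RAID5 gives up one drive's worth of capacity to parity
DATA_DRIVES=$((BAYS - SPARES - 1))
USABLE_TB=$((DATA_DRIVES * DRIVE_TB))

echo "${USABLE_TB}TB usable"   # prints: 6TB usable
```

Dropping the hot spare would add another 2TB of usable space, at the cost of waiting for a manual drive swap after a failure.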

2TB Hard Drive Performance

Another drive that performed quite well is the Western Digital Caviar Black. This is an older drive, and I believe it uses 4 platters instead of the 2 that I believe the WD2003FYYS uses. But performance-wise it’s pretty impressive. Best of all, these can be had for about $100 less (new). They aren’t offered everywhere since they are no longer shipping, but there are still plenty out there to be had.

In general I have not made a firm decision on the disk storage yet, but the Synology DS1010+ with WD Caviar Black drives is what I am leaning toward. Part of me just wants to buy an Iomega ix2-200d or ix4-200d and call it a day. After all, it is on the HCL (well, EMC owns Iomega, so go figure), so there shouldn’t be any compatibility problems; it’s just that I plan to use this lab for a lot of different studying, and I want it to be economical yet offer decent performance.

The main focus of this lab will be VMware VCP study. However, I also plan to install the EMC Celerra uber-VSA, the NetApp ONTAP 8 Simulator, the EMC Navisphere Simulator, Microsoft domain controllers, Cisco ACS, and numerous other tools to study for other certifications such as those from EMC, NetApp, SNIA, and HDS.

I would say I am pretty solid on locking in my purchases for most parts of the lab. I am buying the 300GB VelociRaptors refurbished, so they don’t add a whole lot to the equation. The SSDs are nice because, as PacketSlave suggests in his setup, you can share them out via the VSAs, which makes working with them very fast!

I would be very interested in hearing anyone else’s comments on building a solid VCP-4 lab. The key is value: I would gladly pay a little more to get a lot more. I don’t want to go really cheap, but I don’t need the best of the best either. Hopefully I can have parts of this inbound as early as this week, get it all in within 2 weeks, and have it built maybe the first week of January.
