Back when I built my storage lab, I had to build separate physical servers (WIN1, WIN2, WIN3 and MGMT), each with its own fibre channel cards and I/O paths.  Today, VMware has VMDirectPath I/O, which lets you take multiple I/O resources within a server and dedicate them to individual guests.  What this means is that you can put, say, three guests on a single physical ESXi server with three four-port FC cards, then map specific FC ports to specific guests, allowing you to virtualize the FC servers in a storage lab.

I looked at doing this myself when VMDirectPath I/O came out.  However, VMDirectPath I/O will not work on just any server.  There are specific hardware requirements: mainly, the CPU has to support it (Intel VT-d) and so does the motherboard chipset.  The obstacle this presents is that you need modern server hardware, which in turn means newer FC cards (PCIe, etc.), so you will pay more for your FC cards as well.  The advantage is that one single server can handle all of your FC initiator needs, saving you space, power and cooling.
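To make the per-guest mapping a little more concrete, here is a hypothetical sketch of the kind of entries ESXi records in a guest's .vmx file once a device has been marked for passthrough on the host and added to that VM.  The PCI addresses are placeholders, the exact keys vary by ESXi version, and this is not Sunny's configuration:

pciPassthru0.present = "TRUE"
pciPassthru0.id = "0000:0d:00.0"
pciPassthru1.present = "TRUE"
pciPassthru1.id = "0000:0d:00.1"

Each guest gets its own set of entries like these, which is how three VMs on one ESXi host can each own their own physical FC ports.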

When I built my lab, I used cheap HP DL360 G2/G3s and cheap FC cards that went for about $20 each on eBay.  However, some of you may have modern servers in your labs (nowadays I do too), and if you have VMDirectPath I/O, then virtualizing your initiator servers is rather low-hanging fruit.

One person has done just this: Sunny Liyu Zhang.  Not only did he virtualize the servers in the lab, he virtualized the storage as well!

First, let's look at the servers.  Here is what Sunny is using in his single physical server, which hosts multiple virtualized servers, each with its own fibre channel ports:

Mainboard: Supermicro X8DA6
CPU: Intel Xeon E5606 * 1
Memory: 16GB DDR3 RAM with ECC
Storage: 500GB SATA HDD
Fibre Channel Cards: QLogic QLA2344 * 2 (WIN1, WIN2), QLogic QLA2532 * 1 (WIN3)

The software being used is standalone ESXi, without vCenter.

For the storage side of things, most people assembling a lab will use four storage arrays, usually JBODs.  Those who have read my article on partitioning a JBOD can get by with two.  The reality is that almost any array will do as long as it presents an FC loop, public or private.  With VMDirectPath I/O you can get rid of the storage arrays altogether, doing much the same as described above for virtualizing the servers.  The challenge is that a server with an FC card in it does not make a storage array; a normal OS such as Linux or Windows does not present a loop of hard drives with WWNs out of its FC port as FC targets.  You need special software to do this.  In fact, the software makes things even better: you do not need 24 hard drives each with its own WWN, because the software virtualizes the disks and WWNs, essentially taking files or file space and presenting them as hard drives with WWNs.  This is a huge plus, since a typical self-built CCIE storage lab has 4 JBODs with 6 drives each, 24 drives in total.
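As a purely illustrative sketch (the paths, sizes and layout are made up, not Sunny's), the backing "drives" on the emulation host can be nothing more than sparse files, one per virtual disk:

for jbod in 1 2 3 4; do
  for disk in 1 2 3 4 5 6; do
    # create a ~36GB sparse file to act as one virtual drive
    dd if=/dev/zero of=/srv/targets/jbod${jbod}_disk${disk}.img bs=1M count=0 seek=36864
  done
done

That gives you the 24 "drives" of a four-JBOD lab without a single spinning disk; the target emulation software then maps each file to a virtual disk behind a WWN.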

Building a virtual storage platform with VMDirectPath I/O will save you lots of space, power and cooling; it seems like a no-brainer and is even more compelling than virtualizing the servers.  The catch is that the software that does this magic is not cheap.  The product Sunny chose was SanBlaze from www.sanblaze.com, an amazing piece of software for FC target emulation, but it comes at a price.  You may have it where you work, or you may be able to get hold of a demo license as Sunny did; the demo will eventually expire, but at least you can try it out to see if it suits your needs.

There is also a promising SourceForge project called SCST that may be able to provide SCSI target emulation.  A Google search for "SCSI target emulation" will turn up multiple companies and products, almost all of which cost money.  Other companies that may offer such software include www.open-e.com and www.datacore.com.
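For the curious, here is a minimal sketch of what a file-backed FC target can look like in SCST's configuration file, assuming the QLogic target-mode driver (qla2x00t); the path and WWN are placeholders, and you should check the SCST documentation for your specific HBA and kernel:

HANDLER vdisk_fileio {
    DEVICE jbod1_disk1 {
        filename /srv/targets/jbod1_disk1.img
    }
}

TARGET_DRIVER qla2x00t {
    TARGET 50:01:43:80:11:22:33:44 {
        enabled 1
        LUN 0 jbod1_disk1
    }
}

The vdisk_fileio handler is what turns a plain file into a SCSI disk, and the TARGET block (keyed by the WWN of the target-mode port) is where the LUNs are exported; the initiator at the other end of the loop just sees drives.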

Sunny ran his SanBlaze on a Cisco C210 M2 server with two LSI7404EP-LC fibre channel cards.  Obviously, you will need to check the requirements of whatever target emulation software you pick to make sure your fibre channel cards are compatible.

So, in closing, this has allowed Sunny to use just two servers: one to handle the SCSI initiators and one to handle the SCSI targets.  The secret sauce is the combination of VMDirectPath I/O and SCSI target emulation software.  If you are successfully using VMDirectPath I/O in your lab, please chime in.  I am especially interested in hearing about success stories with open source SCSI target emulation such as SCST.

Thanks to Sunny Liyu Zhang for letting me talk a bit about his lab.  Sunny is a Customer Support Engineer with Cisco Systems in Beijing, China.  He should be taking his lab attempt in the next few months, and I wish him the best of luck!