Complete CCIE Storage Lab using VMDirectPath I/O

Back when I built my storage lab, I had to build separate physical servers (WIN1, WIN2, WIN3 and MGMT), each with its own Fibre Channel cards and I/O paths.  Today, VMware offers VMDirectPath I/O, which lets you take individual I/O devices inside a server and dedicate them to particular guests.  What this means is that you can put, say, three guests on one physical ESXi server, install three four-port FC cards, and then map specific FC ports to specific guests, allowing you to virtualize your FC servers in a storage lab.
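
If it helps to picture the mapping, here is a trivial planning sketch; the PCI addresses, card labels and guest names are all made up for illustration.  The one rule it enforces is the one that matters in practice: a given PCI function can be handed to only one guest.

```python
# Hypothetical plan: which FC ports (PCI functions) go to which guest VM.
# PCI addresses, card labels and VM names are invented for illustration only.
from collections import Counter

plan = [
    ("0000:0b:00.0", "WIN1"),  # 4-port card #1, port 1
    ("0000:0b:00.1", "WIN1"),  # 4-port card #1, port 2
    ("0000:0c:00.0", "WIN2"),  # 4-port card #2, port 1
    ("0000:0c:00.1", "WIN2"),  # 4-port card #2, port 2
    ("0000:0d:00.0", "WIN3"),  # 4-port card #3, port 1
    # ... remaining ports omitted for brevity
]

def check_plan(plan):
    """Each PCI function may be passed through to at most one guest."""
    dupes = [addr for addr, n in Counter(a for a, _ in plan).items() if n > 1]
    if dupes:
        raise ValueError("PCI functions assigned to more than one guest: %s" % dupes)
    for addr, vm in plan:
        print("%-14s -> %s" % (addr, vm))

check_plan(plan)
```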

I looked at doing this myself when VMDirectPath I/O came out.  However, VMDirectPath I/O will not work on just any server: both the CPU (it has to support VT-d) and the motherboard chipset have to support it.  The obstacle this presents is that you need modern server hardware, which means newer FC cards (PCIe, etc.), so you will pay more for your FC cards as well.  The advantage is a single server that handles all of your FC initiator needs, saving you space, power and cooling.
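
There is no tidy way to script that check from inside ESXi (the vSphere Client’s passthrough configuration page is the authoritative view), but if you can boot the box off a Linux live image first, a rough sketch like the one below will tell you whether the firmware advertises VT-d and whether the kernel has an IOMMU active.  Treat it purely as a sanity check under those assumptions; it says nothing about whether the board is on VMware’s HCL.

```python
#!/usr/bin/env python3
"""Rough VT-d / IOMMU sanity check, meant for a Linux live environment
booted on the candidate server (not for ESXi itself).  Run as root.

Assumptions: an Intel platform, and intel_iommu=on on the kernel command
line if you want the iommu_groups check to be meaningful.
"""
import os

def firmware_advertises_vtd() -> bool:
    # Firmware exposes VT-d to the OS via the ACPI DMAR table.
    return os.path.exists("/sys/firmware/acpi/tables/DMAR")

def kernel_iommu_active() -> bool:
    # With the IOMMU enabled, devices get sorted into groups here.
    groups = "/sys/kernel/iommu_groups"
    return os.path.isdir(groups) and bool(os.listdir(groups))

def cpu_has_vmx() -> bool:
    # VT-x (the "vmx" CPU flag) is a separate requirement from VT-d.
    with open("/proc/cpuinfo") as f:
        return " vmx " in f.read().replace("\n", " ")

if __name__ == "__main__":
    print("ACPI DMAR table present (VT-d advertised):", firmware_advertises_vtd())
    print("IOMMU groups populated (IOMMU active):    ", kernel_iommu_active())
    print("CPU reports VT-x ('vmx' flag):            ", cpu_has_vmx())
```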

When I built my lab, I used cheap HP DL360 G2/G3s and cheap FC cards that went for about $20 each on eBay.  However, some of you may have modern servers in your labs (these days I do too), and if your hardware supports VMDirectPath I/O, then virtualizing your initiator servers is rather low-hanging fruit.

One person has done just this: Sunny LiYu Zhang.  Not only did he virtualize the servers in his lab, he virtualized the storage as well!

First, let’s look at the servers.  Here is what Sunny is using in the single physical server that virtualizes his Fibre Channel initiator servers:

Mainboard: Supermicro X8DA6
CPU: Intel Xeon E5606 x 1
Memory: 16 GB DDR3 RAM with ECC
Storage: 500 GB SATA HDD
Fibre Channel cards: QLogic QLA2344 x 2 (WIN1, WIN2)
                     QLogic QLA2532 x 1 (WIN3)

The software in use is standalone ESXi, without vCenter.

For the storage side of things, most people assembling a lab will use four storage arrays, usually JBODs.  Those who have read my article on partitioning a JBOD can get by with two.  In reality, almost any array will do as long as it presents an FC loop, public or private.  With VMDirectPath I/O you can get rid of the storage arrays entirely, doing much the same thing described above for the servers.  The challenge is that a server with an FC card in it does not make a storage array: a normal OS running Linux, Windows, etc. does not present a loop of hard drives with WWNs out its FC port as FC targets.  You need special software to do this.  In fact, the software makes things even better, because you do not need 24 physical hard drives each with its own WWN; the software virtualizes the disks and WWNs, essentially using files or file space and presenting them as hard drives with WWNs.  This is a huge plus, since a self-built CCIE Storage lab typically has 4 JBODs with 6 drives each, or 24 drives in total.
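
To make that concrete, here is a small sketch of the kind of thing the emulation layer does behind the scenes: carving backing files out of local disk and giving each one a drive-like identity.  The directory, sizes and WWPN scheme below are all invented for illustration, and the real provisioning happens inside whatever target emulation product you run, not in a script like this.

```python
#!/usr/bin/env python3
"""Illustration only: pre-create sparse backing files for 24 virtual
drives (4 "JBODs" x 6 drives) and print an illustrative WWPN for each.

The directory, sizes and WWPN scheme are invented for this sketch; a real
FC target emulator manages its own backing store and WWN assignment.
"""
from pathlib import Path

BACKING_DIR = Path("/var/lib/virtual-jbod")    # hypothetical location
ARRAYS, DRIVES_PER_ARRAY = 4, 6
DRIVE_SIZE = 18 * 1024**3                      # 18 GB each, created sparse

def fake_wwpn(array: int, drive: int) -> str:
    """Build a WWPN-looking identifier; purely illustrative."""
    raw = 0x2100000000000000 + (array << 8) + drive
    hexstr = "%016x" % raw
    return ":".join(hexstr[i:i + 2] for i in range(0, 16, 2))

def provision():
    BACKING_DIR.mkdir(parents=True, exist_ok=True)
    for a in range(1, ARRAYS + 1):
        for d in range(1, DRIVES_PER_ARRAY + 1):
            backing = BACKING_DIR / ("jbod%d_disk%d.img" % (a, d))
            with open(backing, "wb") as f:
                f.truncate(DRIVE_SIZE)          # sparse: no real space used yet
            print("%s  ->  WWPN %s" % (backing, fake_wwpn(a, d)))

if __name__ == "__main__":
    provision()
```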

Building a virtual storage platform with VMDirectPath I/O will save you a lot of space, power and cooling; it seems like a no-brainer and is even more compelling than virtualizing the servers.  The issue is that the software to do this magic is not cheap.  The product Sunny chose was SANBlaze, from www.sanblaze.com, an amazing piece of software for FC target emulation.  You may have it where you work, or you may be able to get hold of a demo license as Sunny did; the demo will eventually expire, but at least you can try the software to see whether it suits your needs.

There is also a promising SourceForge project called SCST that offers SCSI target emulation.  A Google search for “SCSI target emulation” will turn up multiple companies and products, almost all of which cost money.  Other companies that may offer such software are www.open-e.com and www.datacore.com.

Sunny ran SANBlaze on a Cisco UCS C210 M2 server with two LSI7404EP-LC Fibre Channel cards.  Obviously, you will need to check the product requirements for whatever target emulation software you choose to make sure your Fibre Channel cards are compatible.

So, in closing, this approach has allowed Sunny to use just two servers: one to handle the SCSI initiators and one to handle the SCSI targets.  The secret sauce is the combination of VMDirectPath I/O and SCSI target emulation software.  If you are successfully using VMDirectPath I/O in your lab, please chime in.  I am especially interested in hearing success stories with open source SCSI target emulation such as SCST.

Thanks to Sunny LiYu Zhang for letting me talk a bit about his lab.  Sunny is a Customer Support Engineer with Cisco Systems in Beijing, China.  He should be taking his lab attempt in the next few months, and I wish him the best of luck!


10 Responses to Complete CCIE Storage Lab using VMDirectPath I/O

  1. Sunny says:

    VMDirectPath I/O has some limitations and needs careful planning of the PCIe resources on the mainboard.
    For example, the QLogic QLA2344 is a PCI-X card and needs a bridge ASIC to translate PCIe to PCI-X; that ASIC has two channels.
    If you enable VMDirectPath I/O on that PCIe channel, the second PCI-X device either can’t be used at all or must also be placed entirely under VMDirectPath I/O.

  2. Sunny says:

    Actually, the Supermicro X8DA6 mainboard isn’t the best choice for this server, because the X8DA6 is designed for workstations, not servers.  But the QLogic QLA2344, with its 4 FC ports, is cheaper than other 4-port FC HBA cards.  The X8DA6 has 3 PCI-X slots, and VMDirectPath I/O can use 2 of them.

  3. shahid says:

    I have 2 x 2-port QLogic HBAs in my ESXi host.  VMDirectPath I/O is enabled; one card works fine, but when I try to boot the second VM it says the HBA is already in use.  I also see that the green light is solid on the second card.
    Any idea how to fix this, and how many PCI cards can I configure in a VM?
    thanks
    shahid

  4. shahid says:

    ESXi 4.1, with 2 Gb QLogic cards; one is 4-port and the other is 2-port.  It works fine for one VM, WIN1A, but for the second, WIN2A, when I try to assign the PCI card in the VM settings it accepts it, but when I power on the machine it gives an error that the PCI device is already in use.
    Any idea?
    shahid

  5. shahid says:

    QLA2344 and QLA2322.

    • brian says:

      Please see Sunny’s comment earlier in this thread:

      “VMDirectPath I/O has some limitations and needs careful planning of the PCIe resources on the mainboard.  For example, the QLogic QLA2344 is a PCI-X card and needs a bridge ASIC to translate PCIe to PCI-X; that ASIC has two channels.  If you enable VMDirectPath I/O on that PCIe channel, the second PCI-X device either can’t be used at all or must also be placed entirely under VMDirectPath I/O.”

      So what he is saying is basically what you are experiencing.  Note also that I don’t believe the cards you are using are on the VMware HCL, so you may have trouble getting them to work 100%.

  6. shahid says:

    I have 2 ESXi hosts running 4.1u1.  One host has a 4-port QLogic HBA; the second PCI-X slot also has a 2-port QLogic HBA.  But when I try to start VM WIN1B, it says the address is already in use, even though the ESXi host sees all 6 HBA ports active in passthrough mode.
    Any thoughts?
    cheers
