Key Points / Pitfalls
- By default the FCC priority threshold is set to 4, which affects only low- and medium-priority traffic
The Cisco MDS has a very sound architecture. FC traffic has very little to worry about while traveling through the MDS fabric. There is no congestion or blocking within the fabric. The congestion that happens on an MDS happens on the actual interfaces.
Although server architectures are getting faster and faster, the vast majority of deployed HBA ports cannot saturate a 2-Gbps FC port. Your typical PCI bus is good for a sustained 20-40 MBps, while a 2-Gbps FC port runs at 200 MBps. Obviously there are many servers out there that are not your typical PCI HBA (PCIe, etc.) and can benefit, but my point is that the most likely bottleneck is the server HBA, not the switch fabric. If you think about it, this makes sense, and it is why it's common to see fan-in ratios of 6:1 to 12:1. With five initiators capable of sustaining 40 MBps each, a single 2-Gbps or 4-Gbps target port is just fine.
This being said, the MDS switches do have some interesting QoS abilities. I come from a routing and switching background, where QoS is very elaborate: lots of different knobs and switches to work, with many different hardware architectures supporting any number of queues on both ingress and egress ports. Not so on the Cisco MDS. Comparatively, the Cisco MDS is quite simple. Each ingress interface has 1024 Virtual Output Queues (VOQs). If you're talking about oversubscribed port groups, such as those on the DS-X9032, then the VOQs are shared between the four ports. There are no egress queues.
The traffic coming into an interface (or 4-port interface group) gets equal treatment from the MDS's Central Arbiter. There is nothing you can do to influence this; the Central Arbiter applies equal service to each interface. Within an individual interface, however, there is a difference in treatment. Control traffic is given priority and placed in a Priority Queue. This can be turned off so that control traffic is placed into the data queues instead, but you would likely never want to do this. With QoS enabled (requires the Enterprise license), you can give priority to one type of traffic over another based on a variety of classification parameters such as source/destination FCID, source/destination WWN, etc.
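As a rough sketch, classification on SAN-OS/NX-OS is done with a QoS class-map. The class-map name, WWN, and FCID below are made-up placeholders; verify the exact `match` keywords against the configuration guide for your software release.

```
! Enable the QoS feature (Enterprise license required)
qos enable

! Classify traffic by source WWN and destination FCID
! (name and addresses are examples only)
qos class-map BACKUP-HOSTS match-any
  match source-wwn 21:00:00:e0:8b:05:76:28
  match destination-fcid 0x6f0001
```

A class-map on its own does nothing; it has to be referenced from a policy-map and applied to a VSAN before traffic is actually prioritized.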
The QoS, however, is only effective if there is contention between traffic coming in the same interface and leaving out the same interface! This can be confusing, but it is important. If traffic comes in the same interface but goes out two different interfaces, the MDS QoS can do nothing (gen-1). The only time you are going to have traffic coming in the same interface and going out the same interface is 1) an initiator or several initiators (NPIV) coming into an interface and talking to disks on the same destination interface, or 2) traffic from one switch being trunked into another switch, destined for the same output interface. Obviously you can also have traffic in the reverse direction, such as a single storage port with traffic going to multiple initiators on a far-end switch over a common trunk.
So it's important to understand your topology to know whether or not you can expect real benefits from MDS QoS. Of much more concern are proper fan-in/fan-out ratios, making sure you don't have bus contention on the server (don't put an FC HBA on the same bus as the server's network interface, for example), proper port sizing, use of MPIO, etc., because there is very little possibility of contention within the fabric itself. The MDS's generation-2 modules actually allow the QoS to work as long as the input interface is common, even if the egress interface is different. Still, this is of limited use, as contention is more likely to occur elsewhere.
The QoS in MDS allows you to split the VOQs into Low, Medium, and High classes. You then use DWRR (Deficit Weighted Round Robin) to decide how much service each queue gets. By default the weights are set to 50:30:20 (High:Medium:Low), so the High-priority queue gets 2.5 times the service of the Low (default) queue. The PQ is only for control traffic and is always serviced if there is traffic in it. The MDS QoS only kicks in when there is congestion. It is good for dealing with latency issues, since it gives more service time to a particular class within the VOQs, but it does not really address bandwidth issues; for that the MDS has FCC, which I talk about next. You have to classify traffic to have it divided into different priorities, and you can do this at the VSAN or zone level. Otherwise, all traffic goes into the Low/Default class whether or not QoS is enabled.
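Putting it together, assigning a class to a priority queue and setting the DWRR weights looks roughly like this. The policy-map and class names are placeholders (the class-map is assumed to exist already), and the weight values shown are simply the 50:30:20 defaults made explicit; double-check the syntax for your release.

```
! Assumes a qos class-map named OLTP-HOSTS has already been defined
qos policy-map TIERED-SERVICE
  class OLTP-HOSTS
    priority high

! Apply the policy to a VSAN (VSAN 10 is an example)
qos service policy TIERED-SERVICE vsan 10

! DWRR weights per class; these are already the defaults (High:Medium:Low = 50:30:20)
qos dwrr-q high weight 50
qos dwrr-q medium weight 30
qos dwrr-q low weight 20
```

Remember that these weights only matter under congestion; with an empty set of queues, traffic is forwarded as it arrives regardless of class.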
Another feature of the Cisco MDS is FCC (Fibre Channel Congestion Control). It is useful with or without QoS and does not require any special license (it is a standard part of SAN-OS/NX-OS). The issue FCC addresses is head-of-line blocking. FC Class 3 relies on BB_credits to deal with congestion. This is a very simple mechanism, and in many cases inadequate: it works hop-by-hop and requires that each side of a link send an R_RDY back to the other to indicate it can receive more traffic. The problem is, if one interface congests, the R_RDYs cannot make it back to the source port, so the source port is now congested as well. This cascades through the fabric, basically putting lots of ports on lockdown. What FCC does is detect where the congestion is and send an Edge Quench frame back to the originating switch, which then rate-limits the source port. Yes, the Edge Quench has to fight the congestion to make it back to the source switch, but once it does, the problem gets some immediate relief. The Edge Quench frames can transit non-MDS switches, but the source and destination switches have to be Cisco MDS.
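Enabling FCC is essentially a one-liner. The priority threshold shown is just the default called out in the key points above, included only to make the knob visible; confirm the exact command syntax against the documentation for your SAN-OS/NX-OS release.

```
! Enable FCC fabric-wide (no extra license needed)
fcc enable

! Priority threshold for quenching; 4 is the default,
! which per the note above affects low- and medium-priority traffic
fcc priority 4
```

For FCC to do its job end to end, it should be enabled on all the MDS switches in the fabric, since the source and destination switches both have to understand the Edge Quench frames.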