Standard vSwitch - Part 3

In this part, we'll talk about the load balancing options available in a vSwitch.

In a vSwitch, the load balancing policies describe the methods that will be used to distribute network traffic from the virtual machines connected to the vSwitch (and its subordinate Port Groups) across the physical network adapters associated with that vSwitch. There are several load balancing options available:

  • Load Balancing Policies

    • vSwitch Port Based (default)

    • MAC Address Based

    • IP Hash Based

    • Explicit Failover Order

All of the load balancing policies affect only the traffic leaving your ESXi host; we cannot control the traffic that the physical switch sends toward us. In addition, all of these methods apply to the link between the vSwitch uplink ports (i.e. the physical network adapters associated with the virtual switch) and the physical switch. These load balancing policies have no effect on the connection between a virtual network adapter in a virtual machine and the vSwitch.

Figure 1. Scope of Discussion

To illustrate the load balancing concepts, we're going to work with the configuration shown in Figure 2. In this figure, we have a single vSwitch that is connected to two pNICs. We have also configured two Port Groups (PG_A and PG_B) on the vSwitch. For the purposes of this discussion, our vSwitch has eight configured ports (an impossibility in real life!).

Figure 2. Basic vSwitch Configuration (Load Balancing)

We’ll use Figure 2 as the backdrop for all our discussions going forward. On an editorial note – in all my examples, I am assuming that the load balancing approach is set at the vSwitch level and is not overridden at the Port Group. If anyone has a really good example of why I would want to override the load balancing approach at the Port Group, please leave me a comment!

In all load balancing scenarios, the affiliations that are made between a vNIC and a pNIC are persistent for the life of the vNIC or until a failover event occurs (which we’ll cover a little later). What this means is that when a vNIC gets mapped to a pNIC, all outbound traffic from that vNIC will traverse the same pNIC until something (i.e. vNIC power cycle, vNIC disconnect/connect, or a detected path failure) happens to change the mapping.

Note: Each pNIC can be associated with only one vSwitch. If you want a pNIC to be affiliated with more than one network, you will need to use 802.1Q VLAN Tagging and Port Groups!

Now, on to the load balancing approaches!

vSwitch Port Based Load Balancing

The first load balancing approach I want to discuss is vSwitch Port Based (I’ll refer to this as simply “Port Based” load balancing), which is the default option. In the interest of full disclosure, let me say that this is my favorite type of load balancing. I tend to use this except in situations which can truly benefit from IP Hash.

In Port Based load balancing, each port on the vSwitch is “hard wired” to a particular pNIC. When a vNIC is initially powered on, it will be dynamically connected to the “next available” vSwitch port. See Figure 3 for the first example.

Figure 3. vSwitch Port-Based Load Balancing - Scenario 1

In the example shown in Figure 3 the virtual machines are powered up in order from VM1 to VM5. You’ll notice that VM3 is connected to PG_B, yet it still winds up affiliated with vSwitch port #3 and pNIC #1. This is because, as you’ll recall from Part 1, a Port Group is merely a template for a vSwitch connection rather than an actual group of ports. So, what we wind up with after this initial power-up sequence is shown in Table 1:

Table 1. Scenario 1 NIC Mappings
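
To make the mechanics concrete, here is a minimal Python sketch of the Port Based behavior described above. It is purely illustrative: the round-robin "hard wiring" of ports to pNICs, the eight-port vSwitch, and helper names like power_on() are my own assumptions for modeling the example, not ESX internals.

NUM_PORTS = 8
PNICS = ["pNIC1", "pNIC2"]

# Hard-wire each vSwitch port to a pNIC (port 1 -> pNIC1, port 2 -> pNIC2, ...).
port_to_pnic = {port: PNICS[(port - 1) % len(PNICS)] for port in range(1, NUM_PORTS + 1)}

connections = {}  # vSwitch port -> VM name

def power_on(vm):
    """Connect a vNIC to the next available (lowest-numbered) free port."""
    for port in range(1, NUM_PORTS + 1):
        if port not in connections:
            connections[port] = vm
            return port, port_to_pnic[port]
    raise RuntimeError("no free vSwitch ports")

def power_off(vm):
    """Disconnect the vNIC; its port (and the port's pNIC wiring) becomes free again."""
    for port, name in list(connections.items()):
        if name == vm:
            del connections[port]

# Scenario 1: VM1 through VM5 power on in order.
for vm in ["VM1", "VM2", "VM3", "VM4", "VM5"]:
    print(vm, "->", power_on(vm))
# VM1 -> (1, 'pNIC1'), VM2 -> (2, 'pNIC2'), VM3 -> (3, 'pNIC1'),
# VM4 -> (4, 'pNIC2'), VM5 -> (5, 'pNIC1')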

In Scenario 2, we’re building on the configuration presented in Scenario 1. In this second scenario, the following events have occurred since the end of Scenario 1:

  • VM2 was powered off

  • VM6 was powered on

  • VM2 was powered back on

The result is shown in Figure 4.

Figure 4. vSwitch Port-Based Load Balancing - Scenario 2

Notice that vSwitch port #2 is now connected to VM6, yet it retains its association with pNIC2; whereas VM2 is now connected to vSwitch port #6, also on pNIC2. We wind up with the configuration represented in Table 2.

Table 2. Scenario 2 NIC Mappings
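
Continuing the same toy sketch (same assumptions as before), the Scenario 2 sequence of events reproduces this mapping:

# Scenario 2, continuing the sketch above: VM2 powers off, VM6 powers on,
# then VM2 powers back on. Port #2 keeps its pNIC2 wiring and goes to VM6,
# while VM2 lands on the next free port, #6 (also wired to pNIC2).
power_off("VM2")
print("VM6", "->", power_on("VM6"))   # VM6 -> (2, 'pNIC2')
print("VM2", "->", power_on("VM2"))   # VM2 -> (6, 'pNIC2')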

Notice that in each scenario there is an approximately equal distribution of vNICs to pNICs. This distribution results in a rough balancing of vNICs across all available pNICs in the vSwitch. There is no attempt made to distribute vNICs based on utilization, so it is entirely possible that you could wind up with all of your “heavy use” VMs on one pNIC and all of your “light use” VMs on another one (or more) pNICs. Even though this possibility exists, I still prefer this load balancing approach – in most cases – over all the others. The reason for this is that none of the other algorithms (save one) offer any better distribution of traffic across the pNICs, yet they all require additional configuration (i.e. changes from the default). I’ll address the single exception case in a few minutes.

MAC Address Based Load Balancing

MAC Address Based load balancing, which I’ll call “MAC Based,” simply uses the least significant byte (LSB) of the source MAC address (the MAC address of the vNIC) modulo the number of active pNICs in the vSwitch to derive an index into the pNIC array. So, basically what this means in our scenario with two pNICs is this:

Assume the vNIC MAC address is 00:50:56:00:00:0B; the LSB is therefore 0x0B, or 11 decimal. To calculate the modulo, you divide (using integer division) the MAC LSB by the number of pNICs (thus 11 div 2) and take the remainder (1 in this case) as the result. The array of pNICs is zero-based, so a result of 0 maps to pNIC1 and a result of 1 maps to pNIC2.
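
Here is the same calculation as a small Python sketch, using the MAC from the example above plus a run of made-up sequential MACs like the six-VM example discussed next; the helper name pnic_for_mac() is mine, not a VMware API:

PNICS = ["pNIC1", "pNIC2"]

def pnic_for_mac(mac):
    lsb = int(mac.split(":")[-1], 16)   # least significant byte of the vNIC MAC
    return PNICS[lsb % len(PNICS)]      # zero-based index into the pNIC array

print(pnic_for_mac("00:50:56:00:00:0B"))    # 0x0B = 11, 11 mod 2 = 1 -> pNIC2

# Six VMs with sequential MAC LSBs simply alternate between the two pNICs:
for i, mac in enumerate(f"00:50:56:00:00:{b:02X}" for b in range(0x0A, 0x10)):
    print(f"VM{i + 1}", mac, "->", pnic_for_mac(mac))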

If we look at a scenario where we have six VMs with sequential MAC addresses (at least the LSB is sequential), we wind up with a situation like the one shown in Figure 5.

Figure 5. MAC Address Based Load Balancing

Notice that I removed the vSwitch ports from this diagram. That’s because they really don’t come into consideration with MAC based load balancing. What we wind up with is VM to pNIC mapping as shown in Table 3. The MAC LSB column shows the least significant byte of the MAC address for the vNIC in each VM. The modulo value shows the remainder of (MAC LSB div (# pNICs)), and the pNIC column indicates to which pNIC the vNIC will be affiliated.

Table 3. MAC Based Mapping

As you can see, there is no real advantage to using this over vSwitch Port Based load balancing; in fact, you could potentially wind up with a worse distribution with MAC based load balancing. So…even though this is an option, I see no real justification for taking the extra steps to configure MAC based load balancing. This was the default load balancing approach used in ESX 2.x. I file this one in the "interesting but worthless" category.

IP Hash Based Load Balancing

IP Hash based load balancing (I'll call it simply "IP Hash") is the most complex load balancing algorithm available; it also has the potential to achieve the most effective load balancing of all the algorithms. The problems with this algorithm, from my perspective, are the technical complexity and the political complexity. We'll discuss each as we go along.

In general, IP Hash works by creating an association with a pNIC based on an IP "conversation". What constitutes a conversation, you ask? Well, a conversation is identified by creating a hash between the source and destination IP address in an IP packet. OK, so what's the hash? It's a simple hash (for speed) – basically ((LSB(SrcIP) xor LSB(DestIP)) mod (# pNICs)) – which all boils down to: take an exclusive OR of the Least Significant Byte (LSB) of the source and destination IP addresses and then compute the modulo over the number of pNICs. It's actually not that different from the calculation used in the MAC based approach.
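
As a rough, back-of-the-envelope illustration (an approximation of the described hash, not the actual ESX implementation, and using invented IP addresses), the conversation-to-pNIC mapping looks like this in Python:

PNICS = ["pNIC1", "pNIC2"]

def pnic_for_conversation(src_ip, dst_ip):
    # XOR the last octets of source and destination, then mod by the pNIC count.
    src_lsb = int(src_ip.split(".")[-1])
    dst_lsb = int(dst_ip.split(".")[-1])
    return PNICS[(src_lsb ^ dst_lsb) % len(PNICS)]

# One VM (one source IP) talking to two file servers (two destination IPs):
print(pnic_for_conversation("10.0.0.10", "10.0.0.21"))   # 10 ^ 21 = 31, odd  -> pNIC2
print(pnic_for_conversation("10.0.0.10", "10.0.0.22"))   # 10 ^ 22 = 28, even -> pNIC1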

When configuring IP Hash as your load balancing algorithm, you should make the configuration setting on the vSwitch itself and you should not override the load balancing algorithm at the Port Group level. In other words, ALL devices connected to a vSwitch configured with IP Hash load balancing must use IP Hash load balancing.

A technical requirement for using IP Hash is that your physical switch must support 802.3ad static link aggregation. Frequently, this means that you have to connect all the pNICs in the vSwitch to the same pSwitch. Some high-end switches support aggregated links across pSwitches, but many do not. Check with your switch vendor to find out. If you do have to terminate all pNICs into a single pSwitch, you have introduced a single point of failure into your architecture.

It is also important for you to know that the vSwitch does not support the use of dynamic link aggregation protocols (i.e. PAgP/LACP are not supported). Additionally, you'll want to disable Spanning Tree Protocol negotiation and enable portfast and trunkfast on the pSwitch ports.

All this brings up the political complexity associated with IP Hash – the virtualization administrator can’t make all the configuration changes alone. You have to involve the network support team, which in many organizations, isn’t worth any possible performance improvement!

So, let’s assume that you have one VM (one single IP address) copying files between two file servers (two unique IP addresses). See Table 4:

Table 4. IP Hash Based Mapping, Scenario 1

As you can see, we now have one VM taking advantage of two pNICs. There are obvious performance advantages to this approach! But, what happens if the two file servers have IP addresses that compute out to the same hash value, as shown in Table 5?

Table 5. IP Hash Based Mapping, Scenario 2

In this example, both conversations map to the same pNIC, which kind of defeats the purpose for implementing IP Hash in the first place! What it all boils down to is this:

To derive maximum value from the IP Hash load balancing algorithm, you need to have a source with a wide variety of destinations.

Where most people want to use IP Hash is for supporting IP Storage on ESX/i (remember, that’s my notation for either ESX or ESXi). Since there is a single source IP address (the IP address of the vmkernel), you need to have multiple destination IP addresses to be able to take advantage of the load balancing features of IP Hash. In many IP Storage configurations, this is not the case. NFS is the primary culprit – it is very common to have a single NFS server sharing out multiple mount points, which all share the NFS server’s IP address. Many iSCSI environments suffer from the same problem – all the iSCSI LUNs frequently live behind the same iSCSI Target, thus a single IP address.

The lesson to this story is really quite simple:

If you want to use IP Hash to increase the effective bandwidth between your ESX/i host and your IP Storage subsystem, you must configure multiple IP addresses on your IP Storage. For NFS, this means either multiple NFS servers or a single server with multiple aliases, and for iSCSI, it means that you’ll want to configure multiple targets with a variety of IP addresses.
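
To make that concrete, here is a hypothetical continuation of the IP Hash sketch above (all addresses invented): with a single storage target IP there is only one conversation, so only one pNIC ever gets used, while multiple target IPs let the conversations spread across uplinks.

vmkernel_ip = "10.0.0.50"   # made-up vmkernel (source) address

# Single NFS server / iSCSI target: one conversation, one pNIC, forever.
print(pnic_for_conversation(vmkernel_ip, "10.0.0.100"))   # every packet -> pNIC1

# Multiple target addresses on the storage side: conversations can spread out.
for target in ["10.0.0.100", "10.0.0.101", "10.0.0.102"]:
    print(target, "->", pnic_for_conversation(vmkernel_ip, target))
# 10.0.0.100 -> pNIC1, 10.0.0.101 -> pNIC2, 10.0.0.102 -> pNIC1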

So, as you can see, the IP Hash load balancing algorithm offers the best (under the right set of circumstances) and the worst of all options. It offers the best load balancing and performance under the following circumstances:

  • IP Hash load balancing configured on vSwitch with multiple uplinks

  • Static 802.3ad configured on all relevant ports on the pSwitch(es)

  • Multiple IP conversations between the source and destinations with varying IP addresses

If you don’t meet ALL those requirements, IP Hash gains you nothing but complexity. IP Hash gains you the worst of all options because of the following:

  • Significantly increased technical complexity

  • Significantly increased political complexity

  • Potential introduction of a single point of failure

  • No performance gains if there is a single IP conversation

The long and short of it comes down to this – use IP Hash load balancing when you understand what you’re doing and you KNOW that it will provide you concrete advantages. This is not the load balancing algorithm for the new VI administrator, nor for an administrator who is not on good terms with their network support team. My recommendation for most environments is to start with vSwitch Port Based load balancing and monitor your environment. If you see that your network throughput is causing a problem and you can satisfy the conditions I set out above, then – and only then – implement IP Hash as your load balancing algorithm.

Explicit Failover Order Load Balancing

This is the load balancing algorithm for the control freak in the crowd. With the Explicit Failover Order load balancing algorithm in effect, you are essentially not load balancing at all! Explicit failover will utilize, for all traffic, the “highest order” uplink from the list of Active pNICs that passes the “I’m alive” test. What does “highest order” mean? It’s simply the pNIC that sits highest (first) in the Active Adapters list!

You manage the failover order by placing pNICs into the “Active Adapters,” “Standby Adapters,” and “Unused Adapters” section of the “Failover Order” configuration for the vSwitch or Port Group. pNICs listed in the “Active Adapters” section are considered when calculating the highest order pNIC. If all of the pNICs in the Active Adapters section fail the “I’m alive” test, then the pNICs listed in the “Standby Adapters” section are evaluated. Adapters listed in the “Unused Adapters” section are never considered for use with the Explicit Failover Order load balancing approach.
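
As a sketch of the selection logic just described (assuming "highest order" means first in the configured list; link_up() is a made-up stand-in for the real "I'm alive" check):

def select_uplink(active, standby, link_up):
    # Active Adapters are evaluated first, in their configured order.
    for pnic in active:
        if link_up(pnic):
            return pnic
    # Only if every Active adapter fails does the Standby list get a look.
    for pnic in standby:
        if link_up(pnic):
            return pnic
    # Unused Adapters are never considered.
    return None

# Example: pNIC1 has failed its "I'm alive" test, so all traffic moves to pNIC2.
print(select_uplink(["pNIC1", "pNIC2"], ["pNIC3"], lambda p: p != "pNIC1"))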

This is another policy that I file in the “interesting but worthless” category.

Load Balancing and 802.1Q VLAN Tagging

It’s important to note that, for all of the load balancing options I’ve discussed, you can still use 802.1Q VLAN Tagging. The thing you have to be careful of is to ensure that all ports configured in a load balanced team have the same VLAN trunking configuration on the pSwitch. Failure to configure all the pSwitch ports correctly can result in very difficult to troubleshoot traffic isolation problems – it’s a good way to go bald in a hurry!

Load Balancing Summary

To summarize on network load balancing options…even though there are four load balancing options available for your use, I recommend that you stick with one of two:

  • vSwitch Port Based Load Balancing: This is the default (and preferred) load balancing policy. With zero effort on the part of the virtualization administrator, you achieve load balancing that is – in most cases – good enough to meet the demands of the majority of virtual environments. This is where I recommend that you begin, especially if you are new to VMware technologies. Stand this configuration up in your environment and monitor to see if the network is a bottleneck. If it is, then look to IP Hash as a possible enhancement for your setup.

  • IP Hash Load Balancing: This is the most complex, and possibly, the most rewarding load balancing option available. If you’re comfortable working in your virtual infrastructure, if you understand the networking technologies involved, and if you have a good working relationship with your network administrator, IP Hash can yield significant performance benefits. The problem I have with this algorithm is that I see it implemented in far too many environments where network throughput is not a problem. People seem to think that a gigabit (or even a 10Gb) Ethernet connection just doesn’t have enough guts to handle 20, 30, or more virtual machines. I beg to differ! In most cases, you’ll find that a single GbE connection is more than capable of handling the load, so why not let it? The area where I do sometimes see a need for IP Hash is with IP based storage, but even here, it is frequently not needed.

Do yourself a favor – if you don’t need to use IP Hash, and especially if your environment isn’t set up to be able to take advantage of the benefits of IP Hash, KISS it and stay with vSwitch Port Based Load Balancing. You’ll be glad you did!
