VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-V Server

If you’ve read my previous posts, especially the lab scenarios, you may have noticed I didn’t tag VLANs. The reason is that I didn’t want to complicate things, and I know some people have trouble understanding how it all works compared with physical switches. This post is meant to clear that up, and I hope it helps you better understand the Hyper-V Virtual Switch.

Network virtualization allows multiple virtual network infrastructures to run on the same physical network, with or without overlapping IP addresses. Each virtual network infrastructure operates as if it were the only virtual network running on the shared network infrastructure. Hyper-V Network Virtualization also decouples the virtual network from the physical network.

VLAN technology has been around for quite some time and its primary purpose is to allow physical switch ports to be logically segregated, most commonly for security purposes but also sometimes to reduce the effects of excessive broadcast traffic and other potential congestion-causing problems. Refer to the following diagram:

On the left are two 16-port switches. Their individual ports are configured to be in the VLANs as indicated in the legend. As far as the devices plugged into those switches are concerned, they are configured as seen in the box on the right. Ports connected to a particular VLAN have no direct connectivity to ports in any other VLAN, even if they happen to be on the same physical switch, but they can communicate directly with any other port in the same VLAN, even if it happens to be on another switch – provided the physical switches are connected by a trunk.

Hyper-V’s virtual switches can emulate the above behavior, although it isn’t required. If you do nothing, all traffic traveling across a Hyper-V virtual switch will simply be untagged. If the physical switch that the Hyper-V host’s network card is connected to doesn’t support 802.1Q, then you won’t be able to use this feature. If the physical switch is 802.1Q compliant, you’ll need to set the physical port into trunk mode. For a Cisco IOS switch, this is done with the “switchport mode trunk” command; for other vendors, see the documentation.

You don’t need to do anything in Hyper-V to enable trunking; it’s on automatically. However, you’ll need to configure individual switch ports for the desired VLAN.
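As a minimal PowerShell sketch, the per-port VLAN assignment can be done with Set-VMNetworkAdapterVlan; the virtual adapter name "MGMT" and VLAN 2 here are assumptions matching the example management network later in this post:

```powershell
# Tag the host (management OS) vNIC on the "MGMT" virtual switch with VLAN 2.
# "MGMT" is a hypothetical adapter name - substitute your own.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "MGMT" -Access -VlanId 2

# Verify the current VLAN configuration of host vNICs:
Get-VMNetworkAdapterVlan -ManagementOS
```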

Hyper-V Virtual Network Types

  • Private Virtual Network Switch allows communication between virtual machines connected to the same virtual switch. Virtual Machines connected to this type of virtual switch cannot communicate with Hyper-V Parent Partition. You can create any number of Private virtual switches.
  • Internal Virtual Network Switch can be used to allow communication between virtual machines connected to the same switch, and also allows communication with the Hyper-V Parent Partition. You can create any number of internal virtual switches.
  • External Virtual Network Switch allows communication between virtual machines running on the same Hyper-V Server, Hyper-V Parent Partition and Virtual Machines running on the remote Hyper-V Server. It requires a physical network adapter on the Hyper-V Host that is not mapped to any other External Virtual Network Switch. As a result, you can create External virtual switches as long as you have physical network adapters that are not mapped to any other external virtual switches.
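The three switch types above can also be created from PowerShell; a minimal sketch, where the switch names and the physical adapter name "Ethernet 1" are assumptions:

```powershell
# Private: VM-to-VM traffic only, no host connectivity
New-VMSwitch -Name "PrivateSwitch" -SwitchType Private

# Internal: VM-to-VM traffic plus the Hyper-V parent partition
New-VMSwitch -Name "InternalSwitch" -SwitchType Internal

# External: bound to an unused physical NIC; -AllowManagementOS shares it with the host
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet 1" -AllowManagementOS $true
```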

Follow these guidelines to configure virtual networking on Windows Server 2012 R2 or Windows Server 2016 with the Hyper-V role installed. A highly available clustered Hyper-V server should have the following configuration parameters, or similar.

Example VLAN

Network Type     VLAN ID   IP Addresses
Default          1
Management       2
Live Migration   3
Prod Server      4
Dev Server       5
Test Server      6
Storage          7
DMZ              99

Example NIC configuration with 8 network cards (e.g. 2x quad-port NICs)

Virtual Network Name   Purpose                Connected Physical Switch Port    Virtual Switch Configuration
MGMT                   Management Network     Port configured with VLAN 2       Allow Management Network: ticked; Enable VLAN identification for management operating system: ticked
LiveMigration          Live Migration         Port configured with VLAN 3       Allow Management Network: un-ticked; Enable VLAN identification for management operating system: ticked
iSCSI                  Storage                Port configured with VLAN 7       Allow Management Network: un-ticked; Enable VLAN identification for management operating system: ticked
VirtualMachines        Prod, Dev, Test, DMZ   Port configured with Trunk Mode   Allow Management Network: un-ticked; Enable VLAN identification for management operating system: un-ticked

  • Do not assign a VLAN ID in the NIC Teaming wizard; instead, assign the VLAN ID in Virtual Switch Manager.
  • Configure the virtual switch as an External Virtual Network.
  • Configure physical switch port aggregation using EtherChannel.
  • Configure logical network aggregation using the NIC Teaming wizard.
  • Enable the VLAN ID in Virtual Machine Settings.
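The steps above can be sketched in PowerShell. The NIC, team, switch, and VM names are hypothetical, and the physical ports are assumed to already be configured as an EtherChannel/LACP bundle on the switch side:

```powershell
# Aggregate four NICs with LACP to match the switch-side EtherChannel:
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic

# Bind the external virtual switch to the team interface; note no VLAN ID is set here:
New-VMSwitch -Name "VirtualMachines" -NetAdapterName "VMTeam" -AllowManagementOS $false

# The VLAN ID is assigned per VM instead (equivalent of Virtual Machine Settings):
Set-VMNetworkAdapterVlan -VMName "Prod01" -Access -VlanId 4
```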

Example Virtual Machine Network Configuration

Virtual Machine Type   VLAN ID Tagged in VM > Settings > Network Adapter   Enable VLAN identifier   Connected Virtual Network
Prod VM                4                                                   Ticked                   VirtualMachines
Dev VM                 5                                                   Ticked                   VirtualMachines
Test VM                6                                                   Ticked                   VirtualMachines
DMZ VM with two NICs   4, 99                                               Ticked                   VirtualMachines
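The table above maps to the following PowerShell; the VM names and the two adapter names on the DMZ VM are assumptions:

```powershell
Set-VMNetworkAdapterVlan -VMName "ProdVM" -Access -VlanId 4
Set-VMNetworkAdapterVlan -VMName "DevVM"  -Access -VlanId 5
Set-VMNetworkAdapterVlan -VMName "TestVM" -Access -VlanId 6

# DMZ VM with two NICs: tag each virtual adapter separately by name
Set-VMNetworkAdapterVlan -VMName "DMZVM" -VMNetworkAdapterName "LAN" -Access -VlanId 4
Set-VMNetworkAdapterVlan -VMName "DMZVM" -VMNetworkAdapterName "DMZ" -Access -VlanId 99
```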


NIC Teaming with Virtual Switch

NIC Teaming allows multiple network adapters on a computer to be placed into a team for the following purposes:

  • Bandwidth aggregation
  • Traffic failover to prevent connectivity loss in the event of a network component failure

There are two basic configurations for NIC Teaming, plus a new option in Windows Server 2016.

  • Switch-independent teaming. This configuration does not require the switch to participate in the teaming. Since in switch-independent mode the switch does not know that the network adapter is part of a team in the host, the adapters may be connected to different switches. Switch independent modes of operation do not require that the team members connect to different switches; they merely make it possible.
  • Switch-dependent teaming. This configuration requires the switch to participate in the teaming, and the participating NICs must be connected to the same physical switch. There are two modes of operation for switch-dependent teaming: generic or static teaming (IEEE 802.3ad draft v1), and Link Aggregation Control Protocol (LACP) teaming (IEEE 802.1ax).
  • Switch-Embedded Teaming enables the Hyper-V virtual switch to directly control multiple physical network adapters simultaneously. Compare and contrast this with the method used in 2012 and 2012 R2, in which a single Hyper-V virtual switch can only control a single physical or logical adapter. (This feature is new in Windows Server 2016.)
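On Windows Server 2016, a Switch-Embedded Teaming (SET) switch is created in a single step; a sketch with assumed NIC and switch names:

```powershell
# The virtual switch itself teams the two physical NICs; no separate LBFO team is needed.
New-VMSwitch -Name "SETSwitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true
```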

Load Balancing Algorithm

NIC teaming in Windows Server 2012 R2 and 2016 supports the following traffic load distribution algorithms:

  • Hyper-V switch port. Since VMs have independent MAC addresses, the VM’s MAC address or the port it’s connected to on the Hyper-V switch can be the basis for dividing traffic.
  • Address Hashing. This algorithm creates a hash based on address components of the packet and then assigns packets that have that hash value to one of the available adapters. Usually this mechanism alone is sufficient to create a reasonable balance across the available adapters.
  • Dynamic. This algorithm takes the best aspects of each of the other two modes and combines them into a single mode. Outbound loads are distributed based on a hash of the TCP Ports and IP addresses. Dynamic mode also rebalances loads in real time so that a given outbound flow may move back and forth between team members. Inbound loads are distributed as though the Hyper-V port mode was in use.
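These algorithms correspond to the -LoadBalancingAlgorithm values accepted by New-NetLbfoTeam (HyperVPort; TransportPorts, IPAddresses, or MacAddresses for address hashing; and Dynamic). For example, with assumed NIC names:

```powershell
New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
```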

NIC Teaming within Virtual Machine

NIC teaming in Windows Server 2012 R2 and 2016 may also be deployed in a VM. This allows a VM to have virtual NICs connected to more than one Hyper-V switch and still maintain connectivity even if the physical NIC under one switch gets disconnected.

To enable NIC Teaming within a virtual machine: in Hyper-V Manager, open the settings for the VM, select the VM’s network adapter and then the Advanced Features item, and enable the checkbox that allows the adapter to be part of a team in the guest operating system.
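The same setting can be made from the host with PowerShell (the VM name is an assumption):

```powershell
# Allow the guest OS to team this VM's virtual NICs:
Set-VMNetworkAdapter -VMName "Prod01" -AllowTeaming On
```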

Physical Switch Configuration

  • In Trunk Mode, a virtual switch listens to all network traffic and forwards it to all of its ports; in other words, network packets are sent to all the virtual machines connected to it. By default, a virtual switch in Hyper-V is configured in Trunk Mode, so not much configuration is needed.
  • In Access Mode, the virtual switch receives network packets in which it first checks the VLAN ID tagged in the network packet. If the VLAN ID tagged in the network packet matches the one configured on the virtual switch, then the network packet is accepted by the virtual switch. Any incoming network packet that is not tagged with the same VLAN ID will be discarded by the virtual switch.
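Both per-port modes can be set with Set-VMNetworkAdapterVlan; the VM names and VLAN lists below are illustrative:

```powershell
# Access mode: the port carries a single VLAN; other tags are discarded
Set-VMNetworkAdapterVlan -VMName "Prod01" -Access -VlanId 4

# Trunk mode: several tagged VLANs pass into the VM; untagged frames use the native VLAN
Set-VMNetworkAdapterVlan -VMName "Router01" -Trunk -AllowedVlanIdList "4,5,6,99" -NativeVlanId 0
```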

This is a brief overview, and I hope it helps you understand the topic better. If you want more detailed info on NIC and Switch Embedded Teaming, check this great guide: Windows Server 2016 Technical Preview NIC and Switch Embedded Teaming User Guide.









1 Comment

  1. Hello!
    Thanks for the extensive write-up!
    I think there might be an oversight in the suggested VLAN configuration. I’m confused about the part where you mention to untick the box to share an interface assigned to a vswitch with the host, and tick the box to enable a VLAN for the management OS. That does not seem to be possible as that second checkbox is disabled when the interface is not shared with the host.
    This would as far as I know not produce the desired result as this would have no influence on the VLAN used by VMs using the vswitch, but only for the communication from the host OS.
    If you do not mind setting the VLAN on the VM properties, there is no need to configure the VLAN on either the vswitch or the physical NIC interfaces or teams on the host.
    I am still looking for a solution to be able to use separate vswitches on a single trunked interface (with tagged team interfaces) without having to set the VLAN in the vm properties, but it looks like Hyper-V is just not able to do that…
