Nested Virtualized Windows Server 2016 Hyper-V Cluster LAB – Scenario 2

This scenario will be a lot quicker, and we will be using PowerShell since Nano Server has no GUI, which definitely shows in the footprint and size of the cluster.

I recommend you read Nano Server – Deployment first, since I will be using that procedure to create the script that automates the creation of the Nano Servers.

In this scenario we will have three VMs. The first one, same as in Scenario 1, will be the VM with our Domain Controller, DHCP and DNS, and we will need to set up all those services on that VM (detailed info on this procedure). This time we will also need to install Failover Cluster Manager so we can manage the Nano Servers, since they don't have any GUI. The second and third will be our Nano Servers, which we will create with the script below; but before we do, we need to mount the Windows Server 2016 ISO and copy the NanoServer folder to our C: drive.
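On the management VM, the Failover Clustering tools (the Failover Cluster Manager GUI and the PowerShell module) can be installed with a single command, for example:

#Install the Failover Clustering management tools on the DC/management VM
Install-WindowsFeature -Name RSAT-Clustering -IncludeAllSubFeature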

Nano Server and VM creation

$ServerNodes = @("Aldin01","Aldin02")
$vSwitchName = "Virtual Switch"
$IP = 170

cd C:\NanoServer
Import-Module C:\NanoServer\NanoServerImageGenerator\NanoServerImageGenerator.psm1 -verbose

$Adminpw = ConvertTo-SecureString -String 'Test123!' -AsPlainText -Force

#Create Nano Nodes

$ServerNodes | % {
    
    #Create Nano Server Image
    New-NanoServerImage -MediaPath D:\ -BasePath "C:\NanoServerTemp\$_" -TargetPath "C:\LocalVMs\$_\$_.vhdx" -InterfaceNameOrIndex Ethernet -Ipv4Address 192.168.2.$IP -Ipv4SubnetMask 255.255.255.0 -Ipv4Gateway 192.168.2.1 -Clustering -Compute -Storage -ComputerName "$_" -AdministratorPassword $Adminpw -DeploymentType Guest -Edition Datacenter -DomainName taamneh.com -ReuseDomainNode -EnableRemoteManagementPort
    
    #Create new VM with Nano Image as base disk
    New-VM -Name $_ -MemoryStartupBytes 4096MB -BootDevice VHD -VHDPath "C:\LocalVMs\$_\$_.vhdx" -SwitchName $vSwitchName -Path "C:\LocalVMs\$_" -Generation 2
    Set-VM -Name $_ -ProcessorCount 2

    #Enable nested virtualization
    Set-VMProcessor -VMName $_ -ExposeVirtualizationExtensions $true

    #Add network Interfaces
    Add-VMNetworkAdapter -VMName $_ -SwitchName $vSwitchName -DeviceNaming On
    
    #Enable MAC Spoofing and teaming on the network adapters
    Get-VMNetworkAdapter -VMName $_ | Set-VMNetworkAdapter -MacAddressSpoofing On -AllowTeaming On

    #Start VM
    Start-VM -Name $_ 
    
    #Remove temp files
    Remove-Item "C:\NanoServerTemp\$_" -Force -Recurse
    $IP++
}

This script creates a Nano Server VHDX image for each machine, Aldin01 and Aldin02. I also set the domain and the IP address, and add the Clustering feature, guest drivers, and the Storage and Hyper-V features. The script then creates two virtual machines, Aldin01 and Aldin02, stored in C:\LocalVMs\. Each virtual machine gets 2 vCPUs and 4 GB of static memory. Then I add a second network adapter so I can build a team inside the virtual machines (with Switch Embedded Teaming), which is why I enable MAC Spoofing and Teaming on the virtual network adapters.

Virtual Disks

$ServerNodes = @("Aldin01","Aldin02")
#Create VHDX
Foreach ($s in $ServerNodes)
{
    New-VHD -Path "C:\LocalVMs\$s\ssd1.vhdx" -SizeBytes 10GB -Dynamic
    New-VHD -Path "C:\LocalVMs\$s\ssd2.vhdx" -SizeBytes 10GB -Dynamic
    New-VHD -Path "C:\LocalVMs\$s\HDD1.vhdx" -SizeBytes 20GB -Dynamic
    New-VHD -Path "C:\LocalVMs\$s\HDD2.vhdx" -SizeBytes 20GB -Dynamic
    New-VHD -Path "C:\LocalVMs\$s\HDD3.vhdx" -SizeBytes 20GB -Dynamic
    New-VHD -Path "C:\LocalVMs\$s\HDD4.vhdx" -SizeBytes 20GB -Dynamic
}


#Attach VHDX
Foreach ($s in $ServerNodes)
{
    Add-VMHardDiskDrive -VMName $s -Path "C:\LocalVMs\$s\ssd1.vhdx" -ControllerType SCSI
    Add-VMHardDiskDrive -VMName $s -Path "C:\LocalVMs\$s\ssd2.vhdx" -ControllerType SCSI
    Add-VMHardDiskDrive -VMName $s -Path "C:\LocalVMs\$s\HDD1.vhdx" -ControllerType SCSI
    Add-VMHardDiskDrive -VMName $s -Path "C:\LocalVMs\$s\HDD2.vhdx" -ControllerType SCSI
    Add-VMHardDiskDrive -VMName $s -Path "C:\LocalVMs\$s\HDD3.vhdx" -ControllerType SCSI
    Add-VMHardDiskDrive -VMName $s -Path "C:\LocalVMs\$s\HDD4.vhdx" -ControllerType SCSI
}

We can simulate SSDs and HDDs by placing the VHDX files on local SSD or HDD drives, respectively.
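To confirm that all disks were attached correctly, a quick check like this can help (reusing $ServerNodes from above):

#Sanity check: list the disks attached to each node VM
Foreach ($s in $ServerNodes)
{
    Get-VMHardDiskDrive -VMName $s | Format-Table VMName, ControllerType, ControllerNumber, ControllerLocation, Path -AutoSize
}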

Network

To configure the network, I will leverage PowerShell Direct. To use it, just run Enter-PSSession -VMName <VMName> -Credential <VMName>\Administrator. Once you are connected to the system, you can configure it. I have written one script to configure each Nano Server. The script below creates a Switch Embedded Team, sets the IP addresses, and enables RDMA (more on RDMA).

$username = "Aldin01\Administrator"
$password = "Test123!"
$secstr = New-Object -TypeName System.Security.SecureString
$password.ToCharArray() | ForEach-Object {$secstr.AppendChar($_)}
$cred = new-object -typename System.Management.Automation.PSCredential -argumentlist $username, $secstr

Enter-PSSession -VMName Aldin01 -Credential $cred
New-VMSwitch -Name Management -EnableEmbeddedTeaming $True -AllowManagementOS $True -NetAdapterName "Ethernet", "Ethernet 2"
# Add Virtual NICs for Storage, Cluster and Live-Migration
Add-VMNetworkAdapter -ManagementOS -Name "Storage" -SwitchName Management
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName Management
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName Management
	
	
$IPMgmt = 170
$IPSto = 11
$IPLM = 170
$IPClust = 170


Enable-NetAdapterRDMA -Name "vEthernet (Storage)"
Enable-NetAdapterRDMA -Name "vEthernet (LiveMigration)"	
	
netsh interface ip set address "vEthernet (Management)" static 192.168.2.$IPMgmt 255.255.255.0 192.168.2.1
netsh interface ip set dns "vEthernet (Management)" static 192.168.2.1
netsh interface ip set address "vEthernet (Storage)" static 10.0.1.$IPSto 255.255.255.0
netsh interface ip set address "vEthernet (LiveMigration)" static 10.0.3.$IPLM 255.255.255.0
netsh interface ip set address "vEthernet (Cluster)" static 10.0.2.$IPClust 255.255.255.0
netsh interface ipv4 show interfaces
Exit

Use this script on both Nano Servers, changing the variables as needed.
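Note that Enter-PSSession is interactive. If you would rather verify the result non-interactively, PowerShell Direct also works through Invoke-Command; a small sketch, reusing $cred from above:

#Non-interactive check over PowerShell Direct
Invoke-Command -VMName Aldin01 -Credential $cred -ScriptBlock {
    Get-VMSwitch | Format-Table Name, EmbeddedTeamingEnabled
    Get-NetAdapterRdma | Format-Table Name, Enabled
    Get-NetIPAddress -AddressFamily IPv4 | Format-Table InterfaceAlias, IPAddress
}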

Cluster

First of all, I run Test-Cluster to verify that my nodes are ready to be part of a Storage Spaces Direct cluster:

#Validate cluster
Test-Cluster -Node "Aldin01", "Aldin02" -Include "Storage Spaces Direct", Inventory,Network,"System Configuration"

If you get any errors, you can check the test report.

So let’s start the cluster creation:

#Create Cluster
New-Cluster -Name AldinNanoCluster -Node Aldin01, Aldin02 -NoStorage -StaticAddress 192.168.2.174

Then the cluster is formed.
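To confirm that both nodes joined and are up, a quick check:

#Verify cluster membership and node state
Get-ClusterNode -Cluster AldinNanoCluster | Format-Table Name, State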

Now we can configure the networks. I start by renaming them and setting the Storage network's role to Cluster and Client.

(Get-ClusterNetwork -Cluster AldinNanoCluster -Name "Cluster Network 1").Name="Management"
(Get-ClusterNetwork -Cluster AldinNanoCluster -Name "Cluster Network 2").Name="Storage"
(Get-ClusterNetwork -Cluster AldinNanoCluster -Name "Cluster Network 3").Name="Cluster"
(Get-ClusterNetwork -Cluster AldinNanoCluster -Name "Cluster Network 4").Name="Live-Migration"
(Get-ClusterNetwork -Cluster AldinNanoCluster -Name "Storage").Role="ClusterAndClient"
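To verify that the renaming and the roles took effect, something like this can be used:

#List cluster networks with their roles and subnets
Get-ClusterNetwork -Cluster AldinNanoCluster | Format-Table Name, Role, Address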

Then I change the Live Migration settings so that the cluster uses the Live-Migration network for Live Migration traffic. I don't use PowerShell for this step because it is easier to do in the GUI.
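That said, if you want to stay in PowerShell, a sketch like the following should work; it excludes every cluster network except Live-Migration from Live Migration traffic (MigrationExcludeNetworks takes a semicolon-separated list of cluster network IDs):

#Restrict Live Migration to the Live-Migration network
$lmNet = Get-ClusterNetwork -Cluster AldinNanoCluster -Name "Live-Migration"
$excludeIds = (Get-ClusterNetwork -Cluster AldinNanoCluster | Where-Object { $_.Id -ne $lmNet.Id } | ForEach-Object { $_.Id }) -join ";"
Get-ClusterResourceType -Cluster AldinNanoCluster -Name "Virtual Machine" | Set-ClusterParameter -Name MigrationExcludeNetworks -Value $excludeIds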

Then I configure a witness using a file share witness and the file share I prepared earlier, but here you could also use a new feature in Windows Server 2016: the Cloud Witness.

#Configure FileShare Witness
Set-ClusterQuorum -Cluster AldinNanoCluster -NodeAndFileShareMajority \\HYPERV\Witness
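If you prefer the Cloud Witness instead, the call looks like this (the storage account name and access key below are placeholders; you need your own Azure storage account):

#Configure Cloud Witness (placeholders - use your own storage account)
Set-ClusterQuorum -Cluster AldinNanoCluster -CloudWitness -AccountName "mystorageaccount" -AccessKey "<storage-account-access-key>"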

Enable Storage Spaces Direct

We enable it here with auto-configuration turned off, so that we can see all local disks from all nodes, because in a virtual lab we have to create the Storage Pool manually.

#Enable Storage Spaces Direct
$ServerNodes = @("Aldin01","Aldin02")
Enable-ClusterStorageSpacesDirect -Autoconfig $false -SkipEligibilityChecks -CimSession $ServerNodes[0] -Confirm:$false

Create Storage Pool

This would normally be done automatically by Enable-ClusterStorageSpacesDirect, but since we have virtual disks, we have to create the pool manually first and set (override) the "physical" disks' media type property.

#Create the Storage Pool
New-StoragePool -StorageSubSystemName AldinNanoCluster.taamneh.com -FriendlyName "S2D Pool" -WriteCacheSizeDefault 0 -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Mirror -PhysicalDisk (Get-StorageSubSystem -Name AldinNanoCluster.taamneh.com | Get-PhysicalDisk)
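To override the media type of the simulated disks as mentioned above, something like this should work (run on one of the nodes or in a session against the cluster), assuming the 10 GB VHDXs play the SSD role and the 20 GB VHDXs the HDD role:

#Override MediaType so Storage Spaces can distinguish the simulated tiers
Get-StoragePool -FriendlyName "S2D Pool" | Get-PhysicalDisk | Where-Object Size -lt 15GB | Set-PhysicalDisk -MediaType SSD
Get-StoragePool -FriendlyName "S2D Pool" | Get-PhysicalDisk | Where-Object Size -ge 15GB | Set-PhysicalDisk -MediaType HDD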

If I go back to the Failover Cluster Manager GUI, I can see a new Storage Pool called S2D Pool.

Then I create one mirrored volume with a single column. The volume is formatted with ReFS and its size is maximized.

New-Volume -StoragePoolFriendlyName "S2D Pool" -FriendlyName VMStorage01 -NumberOfColumns 1 -PhysicalDiskRedundancy 1 -FileSystem CSVFS_REFS -UseMaximumSize

More columns means more performance, because multiple disks are engaged at once in read/write operations, but it also limits flexibility when expanding existing virtual disks, especially in tiered scenarios.

Typically the column count will be equal to the number of physical disks in the storage space (for simple spaces) or half the number of disks (for mirror spaces). The column count can be lower than the number of physical disks, but never higher.

More is better in terms of performance, but fewer is better in terms of flexibility for future expansion.

Now I have a Cluster Virtual Disk, and it is mounted at C:\ClusterStorage\Volume1.

The Interleave parameter represents the amount of data written to a single column per stripe. The default Interleave value is 262,144 bytes (256 KB). For example, with two columns and the default interleave, one full stripe writes 512 KB of data (256 KB per column).

Finally, I create virtual machines and use C:\ClusterStorage to host the VM files, and that is it.
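A minimal sketch of creating a highly available VM on the CSV (run on one of the nodes; the VM name and sizes are illustrative):

#Create a VM on the Cluster Shared Volume and make it highly available
New-VM -Name "TestVM01" -MemoryStartupBytes 1024MB -Generation 2 -Path "C:\ClusterStorage\Volume1" -NewVHDPath "C:\ClusterStorage\Volume1\TestVM01\TestVM01.vhdx" -NewVHDSizeBytes 20GB
Add-ClusterVirtualMachineRole -VMName "TestVM01" -Cluster AldinNanoCluster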

Conclusion

This scenario is a great, flexible solution. As you have seen above, it is not hard to set up. However, I think this solution takes real work to size properly. It is necessary to have strong knowledge of Storage Spaces, networking and Hyper-V. In my opinion, Nano Servers are a great fit for this kind of setup because of their small footprint on disk and compute resources.

And you can do all kinds of crazy stuff inside... like in the screenshot below, where I created a hypervisor VM on my Windows 10 workstation and then created a Nano Server Hyper-V cluster inside it, with Storage Spaces configured 🙂

Inception comes to mind when you see the image above. It was a very interesting lab to tinker with, and I hope you will find these posts useful. If you find something missing, or you have a suggestion for how this lab could be done better or upgraded, please write a comment or drop me a mail.



3 Comments

  1. Great article and nice job!!! I added your blog to my favorite sites.

  2. I had trouble using the first script with the Nano image and VM creation, so I had to create them manually and join them to the domain as well; the script kept telling me that such a domain does not exist (my domain, of course).

    After creating the virtual machines I used the second script to add all the VHDs, and it worked.

    Both Nano Servers are domain-joined, have two network adapters with MAC spoofing enabled, and are using a virtual switch configured in Internal mode.

    My questions are:

    1. How do I use your third script for the network part so it would work with an Internal switch?

    2. It seems it creates another virtual switch in External mode with OS management; can it be configured differently for a lab by using an Internal one?

    • Aldin Taamneh

      Hi Kosta,

      I will try to answer to your questions and help you :

      1. I am sorry, but you cannot use it with an Internal switch, as the script is written to utilize Switch Embedded Teaming. If you want to use NIC Teaming in a VM, you must connect the virtual network adapters in the VM to external Hyper-V virtual switches only; virtual network adapters connected to internal or private Hyper-V virtual switches are not able to connect to the switch when they are in a team, and networking fails for the VM.

      2. The idea was to use Switch Embedded Teaming and enable RDMA for a performance boost, and that is my recommendation. I guess you could rewrite a large portion of the script to work in a different setup, but you must be aware of the cluster network recommendations if you are attempting to create a cluster; read this article, I think it will help you better understand all of this.
