On Hyper-V in Windows 10 we will create the 4 VMs we planned earlier. After the VMs are created, we must enable nested virtualization for our future cluster nodes:
Set-VMProcessor -VMName SRV001-A1 -ExposeVirtualizationExtensions $true
After that we need to install Windows Server 2016, and I suggest using a VM template for faster deployment, as described in an earlier post. Do not forget to enable MAC address spoofing so that network packets can be routed through two virtual switches; MAC address spoofing must be enabled at the first level of virtual switch.
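On the Windows 10 host, MAC address spoofing is enabled on the node VM's network adapter; a minimal example for one node VM (repeat for each node, substituting the VM name):
# Enable MAC address spoofing on the first-level (Windows 10) switch port of the node VM
Set-VMNetworkAdapter -VMName SRV001-A1 -MacAddressSpoofing On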
The first VM will be our Domain Controller, DHCP and DNS server, and we need to set up all of those services (detailed info on this procedure). The second VM will be our SAN VM, and for this we will use StarWind Virtual SAN. The third and fourth servers will be our future Hyper-V host cluster nodes; for the time being they must be joined to the domain and given appropriate names and IP addresses.
Network interfaces on the cluster nodes will be configured later with a PowerShell script, since I elected to team all 4 NICs and use minimum bandwidth weights for Management, Cluster, iSCSI, VM, and Live Migration traffic. I highly recommend that you take a few moments to watch John Savill’s discussion of this method of teaming: Using NIC Teaming and a virtual switch for Windows Server 2012 host networking
Install StarWind iSCSI SAN software on SAN VM
After we have reviewed and verified the requirements, we can start installing the StarWind iSCSI SAN software, which can be downloaded in trial mode. This is the simplest step on our list, since the installation has no complex steps.
After the installation is complete we can access the console, where the first necessary step is to configure the “Storage pool”.
We must select the path for the hard drive where we are going to store the LUNs to be used in our shared storage scenario.
Configure and create LUNs in StarWind iSCSI SAN
When we have the program installed, we can start managing it from the console and we will see the options are quite intuitive.
We are going to split the configuration section in two parts: Hosting iSCSI LUNs with StarWind iSCSI SAN and configuring our iSCSI initiator on each Windows Server 2016 host in the cluster.
Hosting iSCSI LUNs with StarWind iSCSI SAN
We are going to review the basic steps to configure StarWind iSCSI SAN to start hosting LUNs for our cluster; the initial task is to add the host:
Select the “Connect” option for our local server.
With the host added, we can start creating the storage that will be published through iSCSI: Right-click the server and select “Add target” and a new wizard will appear.
Select the “Target alias” by which we will identify the LUN we are about to create, and then configure it so it can be clustered. The name below shows how we can identify this particular target in our iSCSI clients. Click on “Next” and then “Create”.
With our target created we can start creating “devices” or LUNs within that target. Click on “Add Device”.
Select “Hard Disk Device”.
Select “Virtual Disk”. The other two options available here are “Physical Disk”, with which we can select a hard drive and work in a “pass-through” model, and “RAM Disk”, a very interesting option that lets a block of RAM be treated as a hard drive (a LUN in this case). Because RAM is much faster than most other types of storage, files on a RAM disk can be accessed very quickly; however, since the storage actually lives in RAM, it is volatile and its contents are lost when the computer powers off.
In the next section we can select the disk location and size. In my case I’m using C:\ drive and 20 GB.
Since this is a virtual disk, we can choose either thick provisioning (space is allocated in advance) or thin provisioning (space is allocated as it is required). For some applications, thick provisioning can be slightly faster than thin provisioning.
The LSFS option we have available in this case is “Deduplication enabled” (a space-saving feature: only unique data is stored, and duplicated data is stored as links).
In the next section we can choose whether to use disk caching to improve read and write performance for this disk. The first option works with the memory cache, where we can select write-back (asynchronous, better performance but more risk of inconsistency), write-through (synchronous, slower performance but no risk of data inconsistency), or no cache at all.
Using caching can significantly increase the performance of some applications, particularly databases, that perform large amounts of disk I/O. High Speed Caching operates on the principle that server memory is faster than disk. The memory cache stores data that is more likely to be required by applications. If a program turns to the disk for data, a search is first made for the relevant block in the cache. If the block is found the program uses it, otherwise the data from the disk is loaded into a new block of memory cache.
StarWind v8 adds a new layer to the caching concept: L2 cache. This cache is stored in a virtual file intended to be placed on SSD drives for high performance. In this section we have the opportunity to create an L2 cache file, and we will also need to select a path for it.
Click on “Finish” and the device will be ready to be used.
In my case I’ve also created a second device in the same target, which will serve as the witness disk.
Configure Windows Server 2016 iSCSI Initiator
Each host must have access to the devices we’ve just created in order to build our Failover Cluster. On each host, execute the following steps:
Access “Administrative Tools”, “iSCSI Initiator”.
We will also receive a notification that “The Microsoft iSCSI service is not running”; click “Yes” to start the service.
In the “Target” pane, type the IP address that the target host, our iSCSI server, uses to receive connections. Remember to use the IP address dedicated to iSCSI connections; if the StarWind iSCSI SAN server also has a public connection we could use that as well, but the traffic would then be directed through that network adapter.
Click on “Quick Connect” to be authorized by the host to use these files.
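The same connection can also be made with the Microsoft iSCSI PowerShell cmdlets; a minimal sketch, where 10.0.1.10 is a placeholder for the StarWind VM's iSCSI-dedicated address:
# Make sure the Microsoft iSCSI initiator service is running and starts automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
# Register the StarWind server as a target portal (address below is a placeholder)
New-IscsiTargetPortal -TargetPortalAddress "10.0.1.10"
# Connect to the discovered targets and keep the connection persistent across reboots
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true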
Once we’ve connected, access “Disk Management” to verify that we can now use these LUNs as storage attached to the operating system.
And as a final step, using only the first host in the cluster, bring the new disks “Online” and select “Initialize Disk”. Since these are treated as normal hard disks, the process for initializing a LUN is no different from initializing a physical, local hard drive in the server.
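The onlining and initialization can also be scripted; a rough sketch, assuming the iSCSI LUN shows up as disk number 1 (check Get-Disk first, the number will differ per environment):
# List the newly attached iSCSI disks and note their numbers
Get-Disk
# Bring the disk online, clear read-only, then initialize and format it (first node only;
# disk number 1 below is an example and will differ in your environment)
Set-Disk -Number 1 -IsOffline $false
Set-Disk -Number 1 -IsReadOnly $false
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS -Confirm:$false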
Install Hyper-V role on cluster host nodes
Install the Hyper-V role by using Server Manager: in Server Manager, on the Manage menu, click Add Roles and Features. On the Select server roles page, select Hyper-V and leave the default settings for the virtual switch. After you have installed Hyper-V on both host nodes, you can deploy a test VM on one of them.
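If you prefer PowerShell over Server Manager, the same role can be installed on each node with a single command:
# Install the Hyper-V role plus management tools and reboot the node to finish
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart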
Now, let’s take a look at the Failover Cluster feature.
Failover Cluster Network
Now we can prepare the network on our cluster nodes; for this I will be using PowerShell. The PowerShell commands below should be run as Administrator on both cluster nodes.
This first command loops through all available NICs and adds them to the team, also setting the load balancing algorithm:
$NICname = Get-NetAdapter | %{ $_.Name }
New-NetLbfoTeam -Name Hyp1Team -TeamMembers $NICname -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort -Confirm:$false
Create the new virtual switch:
New-VMSwitch -Name HypVSwitch -NetAdapterName Hyp1Team -MinimumBandwidthMode Weight -AllowManagementOS $false
Create the vNICs on the vSwitch:
# Management1 vNIC
Add-VMNetworkAdapter -ManagementOS -Name Management1 -SwitchName HypVSwitch
# Rename the adapter so we can keep everything straight when troubleshooting
Rename-NetAdapter -Name "vEthernet (Management1)" -NewName Management1
# IP/subnet
New-NetIPAddress -InterfaceAlias Management1 -IPAddress 192.168.2.12 -PrefixLength 24 -DefaultGateway 192.168.2.1 -Confirm:$false
Set-DnsClientServerAddress -InterfaceAlias Management1 -ServerAddresses 192.168.2.1
# minimum QoS weighting
Set-VMNetworkAdapter -ManagementOS -Name Management1 -MinimumBandwidthWeight 10

# Cluster1 vNIC
Add-VMNetworkAdapter -ManagementOS -Name Cluster1 -SwitchName HypVSwitch
# Rename the adapter so we can keep everything straight when troubleshooting
Rename-NetAdapter -Name "vEthernet (Cluster1)" -NewName Cluster1
# IP/subnet
New-NetIPAddress -InterfaceAlias Cluster1 -IPAddress 10.0.2.20 -PrefixLength 24 -Confirm:$false
# minimum QoS weighting
Set-VMNetworkAdapter -ManagementOS -Name Cluster1 -MinimumBandwidthWeight 15

# iSCSI1 vNIC
Add-VMNetworkAdapter -ManagementOS -Name iSCSI1 -SwitchName HypVSwitch
# Rename the adapter so we can keep everything straight when troubleshooting
Rename-NetAdapter -Name "vEthernet (iSCSI1)" -NewName iSCSI1
# IP/subnet
New-NetIPAddress -InterfaceAlias iSCSI1 -IPAddress 10.0.1.20 -PrefixLength 24 -Confirm:$false
# minimum QoS weighting
Set-VMNetworkAdapter -ManagementOS -Name iSCSI1 -MinimumBandwidthWeight 30

# VM1 vNIC
Add-VMNetworkAdapter -ManagementOS -Name VM1 -SwitchName HypVSwitch
# Rename the adapter so we can keep everything straight when troubleshooting
Rename-NetAdapter -Name "vEthernet (VM1)" -NewName VM1
# IP/subnet
New-NetIPAddress -InterfaceAlias VM1 -IPAddress 10.0.3.20 -PrefixLength 24 -Confirm:$false
# minimum QoS weighting
Set-VMNetworkAdapter -ManagementOS -Name VM1 -MinimumBandwidthWeight 30

# LM1 vNIC
Add-VMNetworkAdapter -ManagementOS -Name LM1 -SwitchName HypVSwitch
# Rename the adapter so we can keep everything straight when troubleshooting
Rename-NetAdapter -Name "vEthernet (LM1)" -NewName LM1
# IP/subnet
New-NetIPAddress -InterfaceAlias LM1 -IPAddress 10.0.4.20 -PrefixLength 24 -Confirm:$false
# minimum QoS weighting
Set-VMNetworkAdapter -ManagementOS -Name LM1 -MinimumBandwidthWeight 15
I created vNICs for Management, Cluster, iSCSI, VM, and Live Migration traffic and added QoS weightings. This should be repeated on the second cluster node, changing names and IP addresses as appropriate. It should look like the image below.
Install Failover Cluster feature and Run Cluster Validation
Prior to configuring the cluster, we need to enable the “Failover Clustering” feature on all hosts in the cluster, and we’ll also run the validation tool provided by Microsoft to verify the consistency and compatibility of our scenario.
In “Server Manager”, access the option “Add Roles and Features”.
Start the wizard and do not add any role in “Server Roles”. In “Features”, enable the “Failover Clustering” option.
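The same feature can also be installed from PowerShell on each node:
# Install the Failover Clustering feature and its management tools
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools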
Once installed, access the console from “Administrative Tools”. Within the console, the option we are interested in at this stage is “Validate a Configuration”.
In the new wizard, we are going to add the hosts that will make up the Failover Cluster in order to validate the configuration. Type in the servers’ FQDNs or browse for their names; click on “Next”.
Select “Run all tests (recommended)” and click on “Next”.
In the following screen we can see a detailed list of all the tests that will be executed; note that the storage tests take some time. Click on “Next”.
If we’ve fulfilled the requirements reviewed earlier, the tests will complete successfully. In my case the report generated a warning, but the configuration is still supported for clustering.
By accessing the report we can get detailed information.
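The validation can also be run from PowerShell; in the sketch below the second node name is a placeholder for whatever you named yours:
# Validate the intended cluster nodes; review the generated HTML report afterwards
Test-Cluster -Node SRV001-A1, SRV001-A2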
Leaving the option “Create the cluster now using the validated nodes” enabled will start the “Create Cluster” as soon as we click “Finish”.
Create Windows Server 2016 Failover Cluster
At this stage, we’ve completed all the requirements and validated our configuration successfully. In the following steps, we are going to see the simple procedure to configure our Windows Server 2016 Failover Cluster.
In the “Failover Cluster” console, select the option for “Create a cluster”.
A similar wizard will appear as in the validation tool. The first thing to do is add the servers we would like to cluster; click on “Next”.
In the next screen we have to select the cluster name and the IP address assigned. Remember that in a cluster, all machines are represented by one name and one IP.
In the Confirmation page click on “Next”.
After a few seconds the cluster will be created and we can also review the report for the process.
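For reference, the whole creation step can also be done with a single PowerShell command; the cluster name, second node name, and static IP below are placeholders:
# Create the cluster with a name and a static management IP (values are placeholders)
New-Cluster -Name HypClu1 -Node SRV001-A1, SRV001-A2 -StaticAddress 192.168.2.100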
Now in our Failover Cluster console, we get the complete picture of the cluster we’ve created: the nodes involved, the storage associated with the cluster, the networks, and the events related to the cluster.
The default option for a two-node cluster is to use a disk witness to manage cluster quorum. This is usually a small disk to which we assign the letter “Q:\” and which does not store a large amount of data. The quorum disk stores a very small amount of information containing the cluster configuration, and its main purpose is cluster voting.
To back up the Failover Cluster configuration, we only need to back up the Q:\ drive. This, of course, does not back up the services configured in the Failover Cluster.
Cluster voting is used to determine, in case of a disconnection, which nodes and services will be online. For example, if a node is disconnected from the cluster and shared storage, the remaining node with one vote and the quorum disk with also one vote decides that the cluster and its services will remain online.
This voting arrangement is the default but can be modified in the Failover Cluster console. Whether to modify it depends on the scenario: with an odd number of nodes, a “Node Majority” quorum is required; for a cluster stretched across different geographic locations, the recommendation is an even number of nodes plus a file share witness in a third site.
For more information about quorums in Windows Failover clusters, review the following Microsoft TechNet article: “Configure and Manage the Quorum in a Windows Server 2012 Failover Cluster”.
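As an illustration of the file share witness scenario mentioned above, the quorum model can also be changed from PowerShell; the share path below is a placeholder:
# Switch the quorum to node majority plus a file share witness (path is a placeholder)
Set-ClusterQuorum -NodeAndFileShareMajority "\\DC01\ClusterWitness"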
Cluster Shared Volumes
To support live migration, the next step is to configure Cluster Shared Volumes (CSVs). In Server 2016, CSV support is enabled by default; however, you still need to tell the cluster which storage should be used for the CSVs. To enable a CSV on an available disk, expand the Storage node and select the Disks node. Next, select the cluster disk that you want to use as a CSV and click the Add to Cluster Shared Volumes option in Failover Cluster Manager’s Actions pane, or simply right-click the disk and select the same option, as you see in the image.
Disk after it is added to Cluster Shared Volumes.
Behind the scenes, Failover Cluster Manager configures the cluster disk’s storage for CSV, which includes adding a mount point on the system drive. In my example, I enabled CSV on Cluster Disk 1, which added the following mount point:
- C:\ClusterStorage\Volume1
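The same step can be performed from PowerShell, using the disk name shown in Failover Cluster Manager:
# Add the cluster disk to Cluster Shared Volumes; it will be mounted under C:\ClusterStorage
Add-ClusterSharedVolume -Name "Cluster Disk 1"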
Conclusion and highly available VM
At this point, the two-node Server 2016 cluster has been built and CSVs have been enabled. Next, you can install clustered applications or add roles to the cluster. In my case, I’m building the cluster for virtualization support, so my next step is to add the Virtual Machine role to the cluster.
To add a new role, select the cluster name in Failover Cluster Manager’s navigation pane and click the Configure Roles link in the Actions pane to launch the High Availability wizard. Click Next on the welcome page to go to the Select Role page. Scroll through the list of roles until you see the Virtual Machine role, as you see in the image below. Select that role and click Next.
On the Select Virtual Machine page, all the VMs on all the cluster nodes will be listed, as shown in the image. Select the VMs that you want to be highly available. Click Next. After confirming your selections, click Next to add the Virtual Machine roles to Failover Cluster Manager.
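The same can be achieved from PowerShell with a single cmdlet; the VM name below is a placeholder for the test VM deployed earlier:
# Add an existing VM on a cluster node as a clustered Virtual Machine role (VM name is a placeholder)
Add-ClusterVirtualMachineRole -VMName "TestVM"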
With this we have added the VM to the cluster, and our nested cluster is hosting a fully functional VM on a CSV, making it highly available.
All of this is running on our Windows 10 workstation :). That is very cool, but let’s try to do it better with Nano Server; that will be our Scenario 2 in the next post.