So, here is the list of updated MPs for Windows Server 2016. Not all of them seem to be done yet, but Kevin Holman has a nice list with URLs for those that have been released so far.
Check this site for more info:
If you have read my earlier posts, especially the LAB scenarios, you may have noticed that I did not tag VLANs. The reason is that I did not want to complicate things, and I know some people have trouble understanding how it all works compared with physical switches. This post is meant to clear that up, and I hope it helps with a better understanding of the Hyper-V Virtual Switch.
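As a quick illustration of the kind of VLAN tagging discussed here, a minimal sketch for tagging a VM's virtual network adapter could look like the following (the VM name and VLAN ID are just placeholders, not values from my lab):

# Tag the VM's virtual NIC with VLAN ID 10 in access mode (name and ID are examples)
Set-VMNetworkAdapterVlan -VMName "SRV001-A1" -Access -VlanId 10

# Verify the VLAN configuration of the adapter
Get-VMNetworkAdapterVlan -VMName "SRV001-A1"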
Network virtualization lets multiple virtual network infrastructures run on the same physical network, with or without overlapping IP addresses. Each virtual network infrastructure operates as if it were the only virtual network running on the shared network infrastructure. Hyper-V Network Virtualization also decouples the virtual network from the physical network.
On Hyper-V in Windows 10 we will create four VMs as planned earlier, and after the VMs are created we must enable nested virtualization for our future cluster nodes:
Set-VMProcessor -VMName SRV001-A1 -ExposeVirtualizationExtensions $true
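# Note: the VM must be powered off before this processor setting can be changed; run the same command for the other nested host VM as well.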
After that we need to install Windows Server 2016, and I suggest using a VM template for faster deployment as described in an earlier post. Do not forget to enable MAC address spoofing: in order for network packets to be routed through two virtual switches, MAC address spoofing must be enabled on the first-level virtual switch.
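If you want to script that as well, a minimal sketch (the VM name is just an example) is:

# Enable MAC address spoofing on the first-level (Windows 10 host) switch port of the VM
Get-VMNetworkAdapter -VMName "SRV001-A1" | Set-VMNetworkAdapter -MacAddressSpoofing On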
The first VM will be our Domain Controller, DHCP, and DNS server, and we need to set up all of those services (detailed info on this procedure). The second VM will be our SAN VM, and for this we will use StarWind Virtual SAN. The third and fourth servers will be our future Hyper-V host cluster nodes; for the time being they must be joined to the domain and given appropriate names and IP addresses.
Network interfaces on the cluster nodes will be configured later with a PowerShell script, since I elected to team all four NICs and use the minimum bandwidth setting for Management, Cluster, iSCSI, VM, and Live Migration traffic. I highly recommend that you take a few moments to watch John Savill’s discussion of this method of teaming: Using NIC Teaming and a virtual switch for Windows Server 2012 host networking
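The full script will come later, but the idea is roughly the one sketched below (team name, switch name, vNIC names, and bandwidth weights are assumptions for illustration, not the final values):

# Team all four physical NICs (adapter names are examples)
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Create a virtual switch on top of the team with weight-based minimum bandwidth
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Add host vNICs for each traffic type and assign minimum bandwidth weights
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30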
Hyper-Converged infrastructure is based on servers where the disks are Direct-Attached Storage (DAS), connected internally or by using a JBOD tray. Each server (at least four to implement Storage Spaces Direct) has its own storage devices, so there are no shared disks or JBODs.
Hyper-Converged infrastructure is based on well-known features such as Failover Clustering, Cluster Shared Volumes, and Storage Spaces. However, because the storage devices are not shared between nodes, we need something more to create a Clustered Storage Space from DAS devices. This is called Storage Spaces Direct. Below you can find the Storage Spaces Direct stack.
On the network side, Storage Spaces Direct leverages at least 10 GbE networks that are RDMA capable. This is because the replication that occurs through the Software Storage Bus needs the low latency that RDMA provides.
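Once the failover cluster is formed on top of those nodes, enabling Storage Spaces Direct and carving out a volume is roughly the following (a sketch only; the volume name and size are examples):

# Enable Storage Spaces Direct on the existing failover cluster
Enable-ClusterStorageSpacesDirect

# Create a Cluster Shared Volume on the pool that S2D creates automatically
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "CSV01" -FileSystem CSVFS_ReFS -Size 500GB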
More on this link: https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-overview?f=255&MSPPError=-2147217396
Well, after quite a few deployments of the new Server I noticed that Windows Update throws a warning notification, “Your device is scheduled to restart outside of active hours”, even though you set it not to do so – it is set to DownloadOnly.
It seems that this is a known bug in the Windows Update Settings UI in which the text does not correctly reflect the configuration of your Windows Update settings. The MS Server team will fix it soon, but for now, if you don’t want your server to restart automatically, this is what you need to check and configure:
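One quick way to see what the machine is actually configured to do (a sketch, assuming the behaviour is driven by the usual Windows Update policy registry key) is:

# Read the Windows Update policy values, if present (AUOptions 3 = download only / notify for install)
Get-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" -ErrorAction SilentlyContinue |
    Select-Object NoAutoUpdate, AUOptions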
To determine what updates your machine has already installed, follow these steps:
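One way to do this from PowerShell (not necessarily the exact steps referenced above) is:

# List installed updates, newest first
Get-HotFix | Sort-Object InstalledOn -Descending | Select-Object HotFixID, Description, InstalledOn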
So, if you are an IT Pro you probably know that Microsoft Windows Server 2016 comes with something called Nano Server.
In my opinion it is the best of the new things that Server 2016 has. To see what is new in Windows Server 2016, check out this link on TechNet, and if you have more time available there is a series of really good Microsoft Virtual Academy resources here.
Over the last few years Microsoft has tried to downsize its Windows Server product and answer many of the issues that users had.
The first attempt to resolve these issues was Server Core, released as an installation option in Windows Server 2008: a command-line-only version of the server OS that can be managed remotely and, to a limited degree, from a direct console. Server Core did away with a lot of extraneous ‘stuff’ and meant fewer updates, smaller images, and smaller, quicker installations.
But this was not enough, so the Server product team at Microsoft went back to the drawing board and produced a deployment option now known as Nano Server. This cannot be installed from the DVD or ISO; it has to be installed using PowerShell, with each individual image built up to contain only the roles and services that are required for that particular server.
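A minimal sketch of building such an image, assuming the Server 2016 media is mounted at D:\ and the names and paths below are just examples, looks like this:

# Import the image generator module from the installation media
Import-Module D:\NanoServer\NanoServerImageGenerator\NanoServerImageGenerator.psd1

# Build a Nano Server VHDX for a Hyper-V guest with only the Hyper-V and clustering roles included
New-NanoServerImage -MediaPath D:\ -BasePath C:\NanoBase -TargetPath C:\Nano\NANO01.vhdx `
    -DeploymentType Guest -Edition Datacenter -ComputerName NANO01 -Compute -Clustering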
Get-VM * | Format-Table Name, Version
You can also see the configuration version in Hyper-V Manager by selecting the virtual machine and looking at the Summary tab.
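If you find VMs still at an older configuration version, they can be upgraded once they are running on the new host. The VM must be shut down first, and the name below is just an example:

# Upgrade the VM configuration version to the latest version the host supports
Update-VMVersion -Name "SRV001-A1"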
This topic aims to explain the Quorum configuration in Failover Clustering.
A Failover Cluster Quorum configuration specifies the number of failures that a cluster can support in order to keep working. Once the threshold limit is reached, the cluster stops working. The most common failures in a cluster are nodes that stop working or nodes that can’t communicate anymore.
Imagine that quorum doesn’t exist and you have a two-node cluster. Now there is a network problem and the two nodes can’t communicate. If there is no Quorum, what prevents both nodes from operating independently and taking ownership of the disks on each side? This situation is called Split-Brain. Quorum exists to avoid Split-Brain and prevent corruption on the disks.
The Quorum is based on a voting algorithm. Each node in the cluster has a vote, and the cluster keeps working while more than half of the voters are online. This is the quorum (or the majority of votes). When there are too many failures and not enough online voters to constitute a quorum, the cluster stops working.
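To check the current quorum configuration, or to add a witness vote when you have an even number of nodes, something like the following works (the file share path is only an example):

# Show the current quorum configuration of the cluster
Get-ClusterQuorum

# Configure a file share witness so a two-node cluster keeps majority when one node fails
Set-ClusterQuorum -NodeAndFileShareMajority "\\FS01\ClusterWitness"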
When a mailbox database copy has failed in an Exchange Server 2010 Database Availability Group (DAG) it may be necessary to reseed the mailbox server with the failed database copy.
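A reseed is typically done from the Exchange Management Shell, roughly like this (the database and server names are examples only):

# Suspend the failed database copy, then reseed it, overwriting the existing files
Suspend-MailboxDatabaseCopy -Identity "DB01\MBX02" -Confirm:$false
Update-MailboxDatabaseCopy -Identity "DB01\MBX02" -DeleteExistingFiles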