Windows Server Technical Preview


The recent release of Microsoft Windows Server 2016 Technical Preview 5 (TP5) offers a closer look at the likely capabilities and features coming in the final release of the server OS, including an improved version of its Hyper-V virtualization technology. I say likely because TP5 is apparently feature-complete; it’s also the last preview before release to manufacturing (RTM), expected sometime in the second half of this year. Of the many new features in Windows Server 2016, the new Hyper-V capabilities are what many IT pros are watching most closely.

While Hyper-V has been around since 2008, Microsoft has taken a new approach to enhancing it with this release, according to Ben Armstrong, principal program manager of the Microsoft Hyper-V team. Armstrong discussed some of the planned new Hyper-V features in an interview late last year with Redmond sister publication Virtualization Review.
Simply put, in the past, Microsoft would develop Windows Server “in the back room” with a public beta late in the development cycle, more for fixing bugs than as a source of ideas for improvements or new features. This version of Windows, on the other hand, has been “in the open” since TP1 back in October 2014, all the way through to TP5, released April 27.

Overview of the New Hyper-V
Given the long development time, expectations are high. Fortunately, this version doesn’t disappoint, with many new features, including:

New types of checkpoints
A new backup platform
Rolling cluster upgrades
Virtual machine (VM) compute resiliency
Storage Quality of Service (QoS)
Storage Spaces Direct
Shielded VMs
Windows and Hyper-V containers
Nano Server and PowerShell Direct
There are also other improvements to existing features such as Shared VHDX, Hyper-V Replica, more online operations for VMs, a better Hyper-V manager console and more. Following is a summary of some of the key new features in TP5.

Backup and Checkpoints
Backups in Hyper-V can sometimes be a bit shaky, due to a reliance on the underlying Volume Shadow Copy Service (VSS). Windows Server 2016 instead makes change tracking a feature of Hyper-V itself, making it much easier for third-party backup vendors to support Hyper-V.

Snapshots and checkpoints are dangerous for production workloads. They have a convenient workflow: Take a snapshot, make some changes in the VM and, if those changes turn out badly, simply roll back to the snapshot.

The problem is that if a rollback happens on a domain controller or database server that’s replicating with other servers, it’s now out of sync — and there’s no easy way to tell, nor any easy way to fix it. Microsoft made changes in Windows Server 2012 to make Active Directory DCs safer against wrongly applied snapshots, but that protection doesn’t extend to any other workload.

Production checkpoints in Windows Server 2016 (the classic checkpoints can still be used) use VSS inside the VM; when you apply them, the VM will assume it’s been restored from a backup and reboot, rather than be restored to a running state. This eliminates the danger while retaining the convenience of snapshots.
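The checkpoint type is set per VM through the Hyper-V PowerShell module. A minimal sketch — the VM name "SQL01" is a placeholder:

```powershell
# Use production checkpoints, falling back to standard checkpoints
# if VSS inside the guest isn't available
Set-VM -Name "SQL01" -CheckpointType Production

# Or forbid standard checkpoints entirely for this VM
Set-VM -Name "SQL01" -CheckpointType ProductionOnly

# Taking a checkpoint then works the same as before
Checkpoint-VM -Name "SQL01" -SnapshotName "Pre-patch"
```

The `ProductionOnly` setting is the stricter choice for replicating workloads: if a VSS-consistent checkpoint can’t be taken, the operation fails rather than silently producing a classic checkpoint.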

Rolling Cluster Upgrades
The upgrade story from Windows Server 2012 to Windows Server 2012 R2 was pretty good, enabling live migration of VMs from the old to the new. But you still had to stand up a separate Windows Server 2012 R2 cluster to start the process, which wasn’t ideal.

Going from Windows Server 2012 R2 to Windows Server 2016 is a lot easier: Simply evict one cluster node, format it and install Windows Server 2016, then add it back into the cluster. The upgraded node operates in Windows Server 2012 R2 compatibility mode, so VMs can be live migrated to it; that frees up another host for a clean install. Rinse and repeat as many times as required. When all nodes are upgraded and you’re sure you’re not going to add any down-level nodes, you use Windows PowerShell to upgrade the cluster functional level, similar to the way you do AD upgrades.
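The final step above is a single cmdlet in the FailoverClusters module. A sketch of the finishing sequence:

```powershell
# Confirm every node is upgraded and online before committing
Get-ClusterNode | Format-Table Name, State

# -WhatIf previews the change without committing it
Update-ClusterFunctionalLevel -WhatIf

# This step is one-way: once raised, down-level
# Windows Server 2012 R2 nodes can no longer join the cluster
Update-ClusterFunctionalLevel
```

Because the functional-level upgrade can’t be rolled back, the `-WhatIf` dry run is worth the extra step.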

VMs have had version numbers internally since the first version of Hyper-V. Because of the rolling-cluster upgrade scenario, those versions are now visible, and you need to upgrade the configuration of each VM yourself — again using PowerShell. Once upgraded, a VM can only run on Windows Server 2016 hosts. Each upgraded VM uses the new .vmcx file format for configuration and .vmrs for runtime state data; both are binary files and don’t support direct editing (unlike the current XML format).
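The per-VM version check and upgrade look like this — a sketch, with "Web01" as a placeholder VM name:

```powershell
# List configuration versions; down-level VMs still report
# the Windows Server 2012 R2 version
Get-VM | Format-Table Name, Version

# Upgrade a single VM; it must be powered off first
Stop-VM -Name "Web01"
Update-VMVersion -Name "Web01"

# Or sweep all powered-off VMs once no down-level hosts remain
Get-VM | Where-Object State -eq 'Off' | Update-VMVersion
```

Note that the version upgrade, like the cluster functional level, is one-way: hold off until you’re certain no VM will need to run on a 2012 R2 host again.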

VM Compute Resiliency
Clustering hosts together provides continuous VM availability for planned downtime; simply live migrate VMs from the host first, then perform the maintenance. For unplanned downtime, VMs on a failed host are automatically restarted on another host in the cluster, providing for high availability with a few minutes of downtime for the restart. So far, so good.

There are times, however, when host clusters cause problems of their own. A brief network outage between the hosts can trigger a failover of many VMs when, in fact, the network would have righted itself after a few seconds. Such a failover can cause more downtime than it prevents, with numerous VMs restarting simultaneously.

In Windows Server 2016, if a host loses connectivity to the cluster, its VMs keep running for four minutes (this can be changed) in “isolated mode.” If the outage lasts longer, normal failover occurs. If a host suffers repeated disconnections over a 24-hour period, it’s quarantined and its VMs are live migrated off as soon as possible.
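These thresholds are exposed as cluster common properties. A sketch, assuming the Windows Server 2016 property names (`ResiliencyDefaultPeriod`, `QuarantineThreshold`, `QuarantineDuration`) — check `Get-Cluster | Format-List *` on your build, as preview releases can differ:

```powershell
$cluster = Get-Cluster

# Seconds an isolated node's VMs keep running before failover (default 240)
$cluster.ResiliencyDefaultPeriod = 240

# Number of node failures before the node is quarantined (default 3),
# and how long, in seconds, it stays quarantined (default 7200)
$cluster.QuarantineThreshold = 3
$cluster.QuarantineDuration = 7200
```

Raising `ResiliencyDefaultPeriod` trades longer tolerance of transient network blips against a longer wait before genuinely failed hosts trigger failover.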

Today, if a VM loses access to the shared storage housing its virtual disks for more than about a minute, it crashes. In Windows Server 2016, the VM is instead paused pending reconnection to its virtual disks, avoiding the likely data loss of a crash.

You can now specify priority for VMs — high, medium and low — when failover occurs. TP5 allows admins to create sets of VMs, define dependencies between them, and let this dictate the order in which VMs are started.
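Priority is a numeric property on the clustered VM’s group, and the TP5 start-ordering feature is driven by cluster group sets. A sketch, assuming the WS2016 group-set cmdlets and placeholder VM names:

```powershell
# Failover priority on a clustered VM: 3000 = high, 2000 = medium, 1000 = low
(Get-ClusterGroup -Name "SQL01").Priority = 3000

# Group VMs into sets and declare a dependency, so the web tier
# only starts once the database set it relies on is up
New-ClusterGroupSet -Name "Databases"
Add-ClusterGroupToSet -Name "Databases" -Group "SQL01"

New-ClusterGroupSet -Name "WebTier"
Add-ClusterGroupToSet -Name "WebTier" -Group "Web01"

Add-ClusterGroupSetDependency -Name "WebTier" -Provider "Databases"
```

This replaces the ad hoc scripts admins previously used to sequence multi-tier application startup after a cluster-wide outage.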

Storage QoS
In the current version of Hyper-V, you can set a minimum or maximum (or both) IOPS value for virtual hard disks. This works fine as long as the back-end storage can actually deliver the combined IOPS requirement of all running VMs; if it can’t, there’s no way for the individual hosts to coordinate their IOPS demands.
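The per-disk settings described above map to parameters on `Set-VMHardDiskDrive` in the Hyper-V module (IOPS are normalized to 8KB units). A sketch, with "SQL01" as a placeholder:

```powershell
# Reserve a floor of 100 IOPS and cap each of the VM's
# virtual hard disks at 500 IOPS
Get-VM -Name "SQL01" |
    Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -MinimumIOPS 100 -MaximumIOPS 500
```

The Windows Server 2016 Storage QoS work builds on this by enforcing such limits centrally across all hosts sharing the same back-end storage, rather than per host.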
