What is Microsoft doing with RDS and GPU in 2016? And what are VMware and Citrix doing?

This post was initially labelled Server 2016, but then I forgot an important part of it, which I'll come back to later.

This year, Microsoft is most likely releasing Windows Server 2016 and with it a huge number of new features like Containers, Nano Server, SDN and so on.

But what about RDS? Well, Microsoft is actually doing quite a bit there:

  • RemoteFX vGPU support for GEN2 virtual machines
  • RemoteFX vGPU support for RDS server
  • RemoteFX vGPU with OpenGL support
  • Personal Session Desktops (allows for an RDSH host per user)
  • AVC 444 mode (http://bit.ly/1SCRnIL)
  • Enhancements to the RDP 10 protocol (lower bandwidth consumption)
  • Clientless experience (HTML5 support is now in tech preview for Azure RemoteApp, and will most likely be ported to on-premises deployments as well)
  • Discrete Device Assignment, which in essence is GPU passthrough (see the sketch right after this list) http://bit.ly/1SULnLD
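
Discrete Device Assignment in the Server 2016 previews is driven from PowerShell, so a minimal sketch of assigning a GPU to a VM could look like the following. This is an assumption based on the Technical Preview cmdlets; the device query, location path and VM name are placeholders, so check the DDA link above before trying it.

# A minimal DDA sketch (assumptions: Server 2016 TP cmdlets, placeholder GPU filter and VM name)
# 1. Find the GPU and its PCIe location path on the host
$gpu = Get-PnpDevice | Where-Object { $_.Class -eq "Display" -and $_.FriendlyName -like "*NVIDIA*" }
$locationPath = ($gpu | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).Data | Where-Object { $_ -like "PCIROOT*" }

# 2. Disable the device on the host and dismount it from the host
Disable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force

# 3. Assign the device to the (powered-off) VM
Add-VMAssignableDevice -LocationPath $locationPath -VMName "GPU-VM01"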

So there is a lot happening in terms of GPU enhancements, performance improvements in the protocol, and of course hardware offloading of the encoder.

Another important piece is the support coming to Azure with the N-series, which brings DDA (GPU passthrough) to Azure and will allow us to set up a virtual machine with a dedicated GPU, paying per hour when we need it. In some cases it can also be combined with an RDMA backbone where we need high compute capacity, for instance for deep learning. The N-series will be powered by NVIDIA K80 and M60 GPUs.

So is RDS still the way to go for a full-scale deployment? It can be. RDS has come from a dark place to become a good enough solution (even though it has its limitations), and the protocol itself has gotten a lot better (even though I miss a lot of tuning capabilities for the protocol itself).

Now VMware and Citrix are also doing their thing, with a lot of heavy hitting on both sides, which again gives us a lot of new features since both companies are investing heavily in their EUC stacks.

The interesting part is that Citrix is not putting all its eggs in one basket and is now adding support for Azure as well (on top of existing support for ESXi, Amazon, Hyper-V and so on), meaning that when Microsoft releases the N-series, Citrix can easily integrate with it to deliver the GPU using its own stack, which has a lot of advantages over RDS. Horizon with GPU, on the other hand, is limited to running on ESXi.

VMware, on the other hand, is focusing on a deep partnership with NVIDIA and moving ahead with Horizon Air Hybrid (which will be a kind of Citrix Workspace Cloud-like setup), and VMware is also doing a lot on its own stack:

  • App Volumes
  • JIT desktops
  • User Environment Manager

Now 2016 is going to be an interesting year, watching how these companies evolve and how they drive their partners moving forward.

#azure, #citrix, #hyper-v, #microsoft, #nvidia, #vmware

Advanced backup options for Hyper-V 2012 R2 on Veeam

Some questions that come up again and again concern the advanced backup features for Hyper-V in Veeam. How does Veeam take a backup from a Hyper-V host?

In simple day-to-day virtual machine life, the reads and writes consist of I/O traffic from a virtual machine to a VHD/VHDX, residing on a network share, SAN, SMB share and such.


When we set up Veeam to take a backup of a virtual machine, the following happens. First, Veeam triggers a snapshot using the Hyper-V Integration Services Shadow Copy Provider on the particular Hyper-V host that the virtual machine resides on, which produces an AVHDX file that writes are redirected to. This can be done using either a hardware VSS provider or a software VSS provider.


A hardware provider manages shadow copies at the hardware level by working in conjunction with a hardware storage adapter or controller. A software provider manages shadow copies by intercepting I/O requests at the software level between the file system and the volume manager. The number of VMs in a group is limited depending on the VSS provider: for a software VSS provider, 4 VMs; for a hardware VSS provider, 8 VMs.
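
If you are unsure which VSS providers and writers are actually registered on a Hyper-V host, a quick way to check is the built-in vssadmin tool (run from an elevated prompt on the host):

# List the VSS providers registered on the Hyper-V host (hardware and/or software)
vssadmin list providers

# List the VSS writers and their state (the Microsoft Hyper-V VSS Writer should be present and stable)
vssadmin list writers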

NOTE: Using an off-host proxy requires a storage solution which supports hardware transferable shadow copies against a SAN. If we for instance use SMB-based storage for Hyper-V we do not require this –> http://helpcenter.veeam.com/backup/hyperv/smb_off-host_backup.html

Using on-host backup means that the transport role will be running on a Hyper-V host which has access to the running virtual machines.

Make sure that the integration services are running and up to date before doing an online backup; you can check this from Hyper-V PowerShell –> Get-VM | FT Name, IntegrationServicesVersion
More troubleshooting on integration services here –> https://www.veeam.com/kb1855
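
To also verify that the backup integration service itself is enabled for a given VM, something like the following sketch can be used (the display name of the service varies between Hyper-V versions, e.g. "Backup (volume snapshot)" on 2012 R2, so adjust accordingly):

# Check that the backup integration service is enabled for every VM on the host
Get-VM | Get-VMIntegrationService | Where-Object { $_.Name -like "Backup*" } | Format-Table VMName, Name, Enabled

# Enable it for a specific VM if it is turned off (service display name is version dependent)
Enable-VMIntegrationService -VMName "VM01" -Name "Backup (volume snapshot)"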

So what happens during an online backup is the following (if all the requirements are met):

1: Veeam will interact with the Hyper-V host VSS service and request backup of the specific VM

2: The VSS writer on the Hyper-V host will then forward the request to the Hyper-V integration components inside the VM guest OS

3: The integration components will then communicate with the VSS framework inside the guest OS and request backup of all VSS-aware applications inside the VM

4: The VSS writers of the VSS-aware applications will then get the application data into a state suitable for backup

5: After the applications are quiesced, VSS inside the virtual machine takes an internal snapshot using the software-based VSS provider

6: The integration services component notifies the hypervisor that the VM is ready for backup, and Hyper-V will then take a snapshot of the volume which the virtual machine is located on. An AVHDX file will be generated, and all writes will be redirected there.

7: The volume snapshot is presented to Veeam using either off-host or on-host backup. (If the off-host proxy is not available it will fall back to an on-host proxy on a designated host.)

8: Data will then be processed on the proxy server and moved to the repository


NOTE: An off-host setup requires a dedicated Hyper-V host (it requires Hyper-V to have access to the VSS providers), and the off-host proxy cannot be part of the Hyper-V cluster. Make sure it has read-only access to the LUN and that your storage vendor supports readable shadow volume copies.

On-host backup will use the Veeam transport service on the Hyper-V machine. If the VM is placed on a CSV volume, the CSV Software Shadow Copy Provider will be used for the snapshot creation process.

NOTE: During the backup process, Veeam will try to use its own CBT driver on the Hyper-V host to make sure that it only backs up the changed blocks (since Hyper-V does not natively provide CBT; this will change in Windows Server 2016).

NOTE: If CBT is not working in Veeam, run the Reset-HvVmChangeTracking PowerShell cmdlet http://helpcenter.veeam.com/backup/80/powershell/reset-hvvmchangetracking.html, or if the virtual machines are being shut down during the backup process, try to disable ODX.
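
For reference, a rough sketch of how that reset could be run from the Veeam PowerShell snap-in (the host name is a placeholder, and the exact parameters may differ between Veeam versions, so check Get-Help Reset-HvVmChangeTracking first):

# Load the Veeam Backup & Replication snap-in (run on the Veeam server)
Add-PSSnapin VeeamPSSnapin

# Reset CBT data for the VMs on a given Hyper-V host (host name is a placeholder)
Get-VBRServer -Name "hyperv01.domain.local" | Reset-HvVmChangeTracking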

If changed block tracking is not enabled or not working as it should, the backup proxy will copy the virtual machine and use Veeam's proprietary filtering mechanism. Instead of tracking changed blocks of data, Veeam Backup & Replication filters out unchanged data blocks: during backup it consolidates virtual disk content, scans through the VM image and calculates a checksum for every data block. Checksums are stored as metadata in the backup files next to the VM data.

So what about the more advanced features for Hyper-V?

Hyper-V Settings

  • Enable Hyper-V guest quiescence

With this option the VM OS is suspended and the content of the system memory and CPU is written to a dump file, in order to preserve the data integrity of files for transactional applications, for instance (this is known as offline backup).

Note that when using this feature, Veeam will not be able to perform application tasks like:

    • Applying application-specific settings to prepare applications for VSS-aware restore at the next VM startup
    • Truncating transaction logs after successful backup or replication.
  • Take crash-consistent backup instead of suspending the VM

If you do not want to suspend the virtual machine during backup, you can take a crash-consistent backup instead. This is equal to a hard reset of a virtual machine: it does not involve any downtime for the virtual machine, but it does not preserve the data integrity of open files and may result in data loss.

  • Use changed block tracking data

Use the Veeam filter driver to identify changed blocks before data is copied via the off-host or on-host proxy to the repository.

  • Allow Processing of multiple VMs with a single volume snapshot

If you have multiple virtual machines within the same job, this feature will help reduce the load on the Hyper-V hosts, as it will trigger one volume snapshot for multiple machines instead of a snapshot per virtual machine.

NOTE: The virtual machines must be located on the same host and must reside on a file share which uses the same VSS provider.

This is the first post of a series on Veeam and Hyper-V processing.

#backup, #hyper-v, #veeam

Hyper-V and Storage features deep-dive comparison with Nutanix

So, another blog post in this Hyper-V storage series. In the previous posts I discussed a bit about what features Hyper-V has and the issues with them. Well, time to take that to the next level and show how Nutanix solves the performance issues and how Microsoft does it with its Windows Server features.

First off we have the native capabilities with Windows Server and Storage Spaces. We can benefit from SMB 3, for instance Multichannel with RSS and jumbo frames, which allows for much less overhead on a TCP network; of course it also requires some knowledge of which congestion algorithms to use to be able to get the full throughput.

We can also use tiering in the back end together with the write-back cache feature (which by default is 1 GB), and at night the tiering feature runs an optimization task that moves the hot data to the SSD tier and the cold data to the HDD tier.
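
For reference, a tiered virtual disk with the 1 GB write-back cache can be created along these lines (the pool name, tier sizes and resiliency setting are placeholders for illustration):

# Create an SSD and an HDD tier in an existing storage pool (pool name and sizes are placeholders)
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD

# Create a tiered, mirrored virtual disk with the default 1 GB write-back cache
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "TieredDisk01" `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 100GB, 900GB `
    -ResiliencySettingName Mirror -WriteCacheSize 1GB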

On the other hand we can have an RDMA deployment, which in essence bypasses the TCP/IP stack completely and gives zero-copy network capabilities. We can use this in conjunction with the CSV cache, which caches read-only unbuffered I/O in RAM on the host; this feature is enabled at the cluster/CSV level, is integrated into Failover Cluster Manager and is leveraged on all the hosts in a cluster. But... the CSV cache is disabled for a tiered storage space CSV, so the two cannot both be active on the same deployment.
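
As a quick illustration, the CSV cache is enabled per cluster by setting the amount of RAM to use (512 MB here is just an example value):

# Set the CSV cache size for the cluster (in MB); 0 disables it
# (on Windows Server 2012 the property is called SharedVolumeBlockCacheSizeInMB instead)
(Get-Cluster).BlockCacheSize = 512

# Verify the current value
(Get-Cluster).BlockCacheSize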


In the Nutanix I/O path things are a bit different, since the CVM (Controller VM) serves content locally from the node to the Hyper-V host over SMB, using local disk passthrough.


The I/O fabric in a Nutanix node consists of several logical stores. First off we have the Content Cache, which is a deduplicated read cache spanning both memory and SSD and is served from the memory of the CVM. Here we have the ability to leverage inline deduplication.

Then we have the OpLog, which is built to handle random I/O: when dealing with bursts of random I/O it coalesces them and then sequentially drains them to the other store (the Extent Store). The OpLog lives on the SSD tier. In the case of sequential write I/O the OpLog is bypassed and the data is written directly to the Extent Store. The OpLog is also replicated to one or more nodes in the cluster for high availability.

The Extent Store serves as the persistent data storage in a Nutanix node and consists of SSD and HDD. Data comes into the Extent Store either directly as sequential write I/O or drained from the OpLog. The Extent Store can also leverage deduplication; this is a cluster-wide deduplication feature, meaning that all nodes participate.

So as we can see, Nutanix leverages tiering, deduplication and in-memory caching while maintaining data availability across the nodes in a cluster, and combines this with data locality to deliver the lowest possible latency.

#hyper-v, #nutanix

How Nutanix works with Hyper-V and SMB 3.0

In my previous blog post I discussed a bit about software-defined storage options for Hyper-V https://msandbu.wordpress.com/2015/07/28/software-defined-storage-options-for-hyper-v/ and that Windows Server is getting a lot of good built-in capabilities but lacks a proper scale-out solution with performance, which is something that is coming in Windows Server 2016.

Now, one of the vendors I talked about which has a proper scale-out SDS solution for Hyper-V with support for SMB 3 is Nutanix, which is the subject of this blog post, where I will describe how it works for SMB-based storage. Before I head over to that I want to talk a little bit about SMB 3, some of its native capabilities, and why they do not apply in a proper HCI setup.

With SMB 3.0 Microsoft introduced two great new features, SMB Direct and SMB Multichannel, which are aimed at higher throughput and lower latency.

SMB Multichannel (leverages multiple TCP connections across multiple CPU cores using RSS)

SMB Direct (allows for RDMA-based network transfer, which bypasses the TCP stack and moves data from memory to memory, giving low-overhead, low-latency connections)

Now, both these features give us better NIC utilization, but they are aimed at a traditional configuration where storage is still a separate resource from compute. My guess is that when we deploy a Storage Spaces Direct cluster on Windows Server 2016 in an HCI deployment these features will be disabled.
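
On a traditional SMB 3 deployment you can quickly verify whether Multichannel and RDMA are actually in play with the built-in SMB cmdlets, for instance:

# Show whether the client NICs are RSS and/or RDMA capable
Get-SmbClientNetworkInterface

# Show active SMB connections and which interfaces/channels they use
Get-SmbMultichannelConnection

# Check RDMA capability and state on the physical adapters
Get-NetAdapterRdma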

So how does Nutanix work with SMB 3?


First off, it is important to understand the underlying structure of the Nutanix OS. The local storage in the Nutanix nodes of a cluster is added to a unified pool of storage which is part of the Nutanix distributed filesystem. On top of this we create containers, which have their own settings like compression, dedup and replication factor, the latter defining the number of copies of data within a container. The reason for these copies is fault tolerance in case of a node or disk failure. So in essence you can think of this as a DAG (Database Availability Group), but for virtual machines.

So for SMB we have shares, which are represented as containers, which again are created on top of a Nutanix cluster and are then presented to the Hyper-V hosts for VM placement.
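
In practice that just means the Hyper-V host sees a UNC path, so placing a VM on a Nutanix container looks like placing it on any SMB 3 share (the share path and VM name below are made-up examples):

# Create a VM directly on the SMB share backed by a Nutanix container (placeholder path and name)
New-VM -Name "VM01" -MemoryStartupBytes 4GB -Generation 2 `
    -Path "\\ntnx-cluster\container1" `
    -NewVHDPath "\\ntnx-cluster\container1\VM01\VM01.vhdx" -NewVHDSizeBytes 60GB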

It is also important to remember that even though we have a distributed file system across different nodes, data is always served locally on a node (the reason for this is so that the network does not become a point of congestion). Nutanix has a special role called the Curator (which runs on the CVM) which is responsible for keeping the hot data as local to the VM as possible. So if we for instance migrate a VM from host 1 to host 2, the CVM on host 1 might still contain the VM data; reads and writes will then go from host 2 to the CVM on host 1 until the CVM on host 2 has cached the data locally.

Now, since this architecture leverages data locality there is no need for features like SMB Direct and SMB Multichannel, so these features are not required in a Nutanix deployment for Hyper-V; however, it does support SMB Transparent Failover, which allows for continuously available file shares.

Now, I haven't yet explained how this architecture handles I/O; this is where the magic happens. Stay tuned.

#hyper-v, #nutanix, #smb-3-0

Trouble with Hyper-V, Virtual Machine manager and XenClient

So, in my Hyper-V environment all of the hosts are administered by Virtual Machine Manager. The other day I needed to deploy Citrix XenClient to a Hyper-V host (since it is the only hypervisor that is supported for the Synchronizer part).

Now, by default the XenClient installation sets up the Tomcat service running on port 443. The XenClient installation completed and I didn't think much about it for the next week or so.


After that I needed to deploy a new virtual machine from a template to the same host, and then I started getting some strange error messages on the job status in VMM

“A Hardware Management error has occured trying to contact server”


Now I could either change the ports used for BITS in VMM by following the instructions here –> http://support.microsoft.com/kb/2405062, or I could change the ports of the Tomcat engine by following the setup here –> http://support.citrix.com/article/CTX134691

So in my case I changed VMM to use different ports for BITS (since I have other products that might run on 443 on a Hyper-V server).
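
If you run into the same kind of conflict, a quick way to see what is actually holding port 443 on the host before deciding which side to reconfigure:

# Find the process listening on TCP 443 on the Hyper-V host
Get-NetTCPConnection -LocalPort 443 -State Listen |
    ForEach-Object { Get-Process -Id $_.OwningProcess } |
    Select-Object Id, ProcessName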


After I changed the port, VM deployment worked as it should again!

#hyper-v, #virtual-machine-manager, #xenclient

Veeam Management Pack for Hyper-V and VMware walkthrough

Yesterday Veeam released their new management pack, which for the first time includes support for both VMware and Hyper-V. Now, I have gotten a lot of questions along the lines of "why have Hyper-V monitoring if Microsoft has it?" Well, Veeam's pack has a lot more features included, such as capacity planning, heat maps and so on.

The management pack can be downloaded as a free trial from Veeam's website here –> http://www.veeam.com/system-center-management-pack-vmware-hyperv.html

Now, as for the architecture, it's quite simple.


First off, there are two components:

* Veeam Virtualization Extensions (service and UI): manages connections to VMware systems and the Veeam Collector(s), controlling licensing, load balancing and high availability

* Veeam Collector: gathers data from VMware and injects the information into the Ops Manager agent.

It is possible to install all of these components on the management server itself. You can also install the collector service on other servers which have the OpsMgr agent installed. The Virtualization Extensions service must be installed on the management server.

In my case I wanted to install this on the management server itself, since I have a small environment. Before I started the installation I needed to make sure that the management server was operating in proxy mode.


Next I started the installation on the management server. As with all of Veeam's setups it can automatically configure all prerequisites and is pretty straightforward. (Note: it will automatically import all required management packs into SCOM.)

If you have a large environment it is recommended to split out the collectors onto different hosts and create a resource pool (there is an online calculator which can help you find out how many collectors you need) http://www.veeam.com/support/mp_deployment.html

You can also define if collector roles should be automatically deployed


After the installation is complete (using the default ports) you will find the extensions shortcut on the desktop


By default this opens a website on localhost (using port 4430). From here we need to enter the connection information for VMware (Hyper-V hosts are discovered automatically when they have the agent installed, and the same goes for Veeam Backup servers).


After you have entered the connection info you will also get a header showing the recommended number of collector hosts.


After the setup is finished you can open the OpsMgr console. From here there is one final task that is needed, which is to configure the Health Service; this can be done from the tasks under _All_active_Alerts in the VMware monitoring pane.


After this is done you should expect at least 15 minutes before data is populated into your OpsMgr servers, depending on the load. You can also view the event logs on the OpsMgr servers to see that data is being imported correctly.
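
For instance, a simple way to peek at the Operations Manager event log on the management server (the Veeam provider-name filter is an assumption, adjust as needed):

# Show recent entries in the Operations Manager log, filtered on Veeam sources (filter is an assumption)
Get-WinEvent -LogName "Operations Manager" -MaxEvents 200 |
    Where-Object { $_.ProviderName -like "*Veeam*" } |
    Select-Object TimeCreated, Id, ProviderName, Message -First 20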


And after a while, voilà!

I can for instance view info about storage usage



VM information


Now, I could show graphs and statistics all day, but one of the cool things in this release is the cloud capacity planning reports.


They allow me to see, for instance, how many virtual machines I would need in Azure (and of what type) to move my workloads there.


#hyper-v, #operations-manager, #system-center, #veeam, #vmware

Microsoft Virtual Machine Converter 2.0

This is such a great update that I have to blog about it. I have been in many projects involving migrating from VMware to Hyper-V, and there are of course many options to choose from. Microsoft had its own Virtual Machine Converter, but it didn't have support for the latest VMware versions.

Microsoft today released a new version of Virtual Machine Converter, which contains the following updates:

With the release today, you will be able to access many updated features including:

  • Added support for vCenter & ESX(i) 5.5
  • VMware virtual hardware version 4 – 10 support
  • Linux Guest OS migration support including CentOS, Debian, Oracle, Red Hat Enterprise, SuSE enterprise and Ubuntu.

We have also added two great new features:

  • On-Premises VM to Azure VM conversion: You can now migrate your VMware virtual machines straight to Azure. Ease your migration process and take advantage of Microsoft’s cloud infrastructure with a simple wizard driven experience.
  • PowerShell interface for scripting and automation support: Automate your migration via workflow tools including System Center Orchestrator and more. Hook MVMC 2.0 into greater processes including candidate identification and migration activities.
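
The PowerShell interface mentioned above ships as a module in the MVMC installation folder. As a rough sketch (the paths below are examples and the full cmdlet set is documented with the tool), converting a VMware disk to VHDX could look like this:

# Import the MVMC cmdlets from the default installation folder (adjust the path if installed elsewhere)
Import-Module "C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1"

# Convert a VMware VMDK to a dynamically expanding VHDX (source and destination paths are examples)
ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath "D:\VMs\web01.vmdk" `
    -DestinationLiteralPath "D:\Converted\web01.vhdx" `
    -VhdType DynamicHardDisk -VhdFormat Vhdx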


So, a lot of great new features which should make it even easier to convert virtual machines. Another important factor here is this:

At this time, we are also announcing the expected availability of MVMC 3.0 in fall of 2014. In that release we will be providing physical to virtual (P2V) machine conversion for supported versions of Windows.

Since Microsoft removed this option from SCVMM in the R2 release, it's great that it is coming back. You can download the tool from here –> http://www.microsoft.com/en-us/download/details.aspx?id=42497

#hyper-v, #scvmm, #system-center, #virtual-machine-converter, #vmware