Setting up HTTP/2 support on IIS in Windows Server 2016 & Citrix StoreFront

With the slow demise of HTTP/1.1, there is a new kid on the block, HTTP/2, which I have blogged about earlier from a NetScaler point of view.

In the upcoming server release from Microsoft, IIS on Windows Server 2016 will be the first IIS release that supports HTTP/2. It is enabled by default from TP3 (all we need is a certificate to enable it). So if I fire up an HTTP connection to an IIS server on Server 2016, it will use regular HTTP, which can be seen using the developer tools in Internet Explorer.


Now, to set up support for HTTP/2 on older preview builds, it needs to be enabled from the registry at the moment, using the following registry key.


Here we need to create a new DWORD value named DuoEnabled

Then set the value to 1
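The registry change can also be scripted. A minimal PowerShell sketch, assuming the DuoEnabled value lives under the HTTP service parameters key (the location the TP builds read it from):

```powershell
# Assumption: the HTTP/2 toggle sits under the HTTP service parameters key
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\HTTP\Parameters'
New-ItemProperty -Path $key -Name 'DuoEnabled' -PropertyType DWord -Value 1 -Force
# Reboot (or restart the HTTP service) for the change to take effect
```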


Then we need to add a certificate, since HTTP/2 by default requires TLS in order to function. This can be done by, for instance, adding just a self-signed certificate to the website binding.
NOTE: TLS is not required by the HTTP/2 standard itself, but it has been adopted as a requirement by the different web server vendors as well as browser vendors.
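A minimal sketch of creating the self-signed certificate and HTTPS binding with PowerShell; the site name and DNS name below are placeholders for your own environment:

```powershell
Import-Module WebAdministration

# Create a self-signed certificate in the machine store (DNS name is a placeholder)
$cert = New-SelfSignedCertificate -DnsName 'storefront.lab.local' `
    -CertStoreLocation 'Cert:\LocalMachine\My'

# Add an HTTPS binding to the site and attach the certificate to it
New-WebBinding -Name 'Default Web Site' -Protocol https -Port 443
$binding = Get-WebBinding -Name 'Default Web Site' -Protocol https
$binding.AddSslCertificate($cert.Thumbprint, 'My')
```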


Then restart the IIS service.

Now we can again make a connection to the IIS website, and with developer tools open in IE we can see that it is connecting using HTTP/2.


I can also verify that this works flawlessly with Citrix StoreFront as well.


Just moving to HTTP/2 looks like it has improved performance considerably: the login page went from about 200 ms to about 40–50 ms load time, and the site in general feels much smoother.

NOTE: I have sent an email to Citrix to ask if this is supported or if there will be an upgrade in the future to support this properly.

NOTE: You can see more about the implementation of HTTP/2 on IIS on this GitHub page –>

Virtual Machine backup in Azure using Veeam Endpoint

A while back I blogged about Veeam Endpoint. While it is aimed at physical computers/servers, it has another use that I just discovered.

In Azure, Microsoft currently has a preview feature called Azure VM Backup, which in essence is an image-based backup of virtual machines in Azure. Since this currently has a lot of limitations, I figured: what other options do we have?

While some people do Windows Server Backup directly to another Azure VM disk, I figured why not give Veeam a try with a data disk and use it in conjunction with SMB file shares. The idea is that we can use Veeam Endpoint to back up to a data disk (which is attached to an individual VM), then create a task to move the backup to an SMB file share. In case the virtual machine crashes or is unavailable, we still have the backup on the SMB file share, which makes it accessible to all other virtual machines within that storage account. NOTE: Doing Veeam backup directly to SMB file shares does not work.
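The "task to move the backup" can be a simple scheduled robocopy job. A sketch, where the backup path, share name and schedule are assumptions for illustration:

```powershell
# Mirror the local Veeam backup folder to the SMB file share every night
$action  = New-ScheduledTaskAction -Execute 'robocopy.exe' `
    -Argument 'F:\VeeamBackup \\<storageaccount>.file.core.windows.net\sampleshare\VeeamBackup /MIR /R:2 /W:5'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName 'CopyVeeamBackupToAzureFiles' `
    -Action $action -Trigger $trigger -RunLevel Highest
```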

So we create a virtual machine in Azure and then use the portal to attach an empty data disk to the virtual machine.


This new disk is going to be the repository for Veeam Endpoint within the VM.

Azure Files is an SMB file share feature which is currently in preview and is available for each storage account. In order to use it, we must first create an SMB file share using PowerShell.

$FSContext = New-AzureStorageContext -StorageAccountName <storageaccount> -StorageAccountKey <storageaccountkey>

$FS = New-AzureStorageShare sampleshare -Context $FSContext

New-AzureStorageDirectory -Share $FS -Path sampledir

After we have created the file share, we need to add the network path inside the virtual machine in Azure. First we should use cmdkey to store the username and password for the SMB file share so that it can reconnect after a reboot.

cmdkey /add:<storageaccount>.file.core.windows.net /user:<storageaccount> /pass:<storage access key>

And then map the drive using net use z: \\<storageaccount>.file.core.windows.net\sampleshare


After the network drive is mapped up, we can install Veeam Endpoint.


Veeam Endpoint is a free backup solution; it can integrate with existing Veeam infrastructure, such as repositories, for a more centralized backup solution. It also has some limitations regarding application-aware processing, but it works well with traditional VMs in Azure.

After setup is complete we can configure our backup schedule.




Then I run the backup job and make sure that it runs correctly. Note that as a best practice you should not store applications or data on the C:\ drive; I also got VSS error messages while backing up data on C:\, so you should have another data disk where you store applications and files if necessary.

Now after the backup is complete we have our backup files on a data disk that is attached to a virtual machine. We have two options here in case we need to restore data on another virtual machine.

1: We can run the restore wizard on another virtual machine against the copied backup files on the SMB file share


2: Detach the data disk and reattach it to another virtual machine.
This is cumbersome if we have multiple virtual hard drives.


Attaching a virtual disk is done on the fly. When we run the restore wizard from Veeam, it will automatically detect the backup volume and give us the list of restore points available on the drive.


Note that the file recovery wizard does not give us an option to restore directly back to the same volume, so we can only copy data out of a backup file.


Well there you have it: using Veeam Endpoint for virtual machines in Azure against a data drive. After giving it a couple of test runs I can tell it is working as intended and gives a lot better functionality than the built-in Windows Server Backup. If you want, you can also set it up with Veeam FastSCP for Azure, allowing you to download files from Azure VMs to an on-premises setup.

Nvidia GRID 2.0 at VMworld 2015

Among all the new updates announced at VMworld, Nvidia made one of their own: the GRID 2.0 architecture is going to be released on September 15th.

This is a huge improvement and opens up a lot of opportunities. The GRID 2.0 architecture is built upon the latest Maxwell GPU architecture and comes in two forms: one using a traditional form factor, and the M6, which is aimed at blade servers.

For instance, this means that we can deploy a Dell M630 (which is a 13th-generation Dell blade server) combined with the Tesla M6 cards. Also, with support for Linux on both VMware Horizon and Citrix XenDesktop, this will hopefully enable more use of GPUs in Linux-based workloads.



  • Doubled user density: NVIDIA GRID 2.0 doubles user density over the previous version, introduced last year, allowing up to 128 users per server. This enables enterprises to scale more cost effectively, expanding service to more employees at a lower cost per user.
  • Doubled application performance: Using the latest version of NVIDIA’s award-winning Maxwell™ GPU architecture, NVIDIA GRID 2.0 delivers twice the application performance as before — exceeding the performance of many native clients.
  • Blade server support: Enterprises can now run GRID-enabled virtual desktops on blade servers — not simply rack servers — from leading blade server providers.
  • Linux support: No longer limited to the Windows operating system, NVIDIA GRID 2.0 now enables enterprises in industries that depend on Linux applications and workflows to take advantage of graphics-accelerated virtualization.
  • Now there is a little gotcha here: GRID 2.0 requires a software license from Nvidia. You can read more about it at Thomas Poppelgaard’s blog here –>

Also important to remember is that Citrix announced Framehawk support using Citrix NetScaler a few weeks back; combine this with vGPU and you get a really good desktop experience.

Nutanix and Citrix: Better together

Citrix has for a long time had support for most of the different hypervisors, meaning that customers get the flexibility to choose among a number of hypervisors if they are planning to use XenApp/XenDesktop. This support is included for NetScaler as well.

So as of today, Citrix supports XenServer, Hyper-V, VMware, Amazon and CloudPlatform, and Azure support is on the way. Meanwhile, a month back Citrix announced a partnership with Nutanix, stating that the Acropolis Hypervisor was Citrix Ready for XenApp/XenDesktop, as well as NetScaler and ShareFile. This means that customers will get better integration with the hypervisor, as well as support for these products on the Nutanix hypervisor.

Kees Baggerman from Nutanix posted this teaser on his website of how the integration might look.


Now this is mostly focused on Citrix Workspace Cloud, but it was also stated that this is coming for traditional on-premises XenApp/XenDesktop as well.


Also looking forward to deeper integration, for instance Machine Creation Services combined with Shadow Clones on the Acropolis Hypervisor!

Installing Unidesk with Azure integration

I have previously blogged about Unidesk's layering technology earlier this year; back then it was about Hyper-V support and how it operated. Then at Microsoft Ignite this year I was introduced to a new version which enables Azure support.

Then I was like WTF? You guys do that?? And yes they do. It was released not so long ago, and this is my experience with how it works.

First thing we need to do is download the Azure version of Unidesk from their website –> Then we need an Azure account with an active subscription and an Azure virtual network which is connected locally using either a S2S VPN or a P2S VPN. This needs to be in place because the management appliance uses this VPN connection to communicate with Azure.


Or! We can set up a virtual machine in Azure which can act as a management host, where we can do the same procedure and run the installer wizard. (I had some issues with the upload timing out.)

P2S is pretty easy to set up in Azure; we just need to make our own self-signed certificate using the makecert utility from Visual Studio. When that is done, we start the installation of Unidesk! First we need a publish settings file from Azure (which can be generated from here –>
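For reference, the makecert commands for a P2S root and client certificate looked roughly like this at the time; the certificate names are placeholders:

```
makecert -sky exchange -r -n "CN=P2SRootCert" -pe -a sha1 -len 2048 -ss My "P2SRootCert.cer"
makecert -n "CN=P2SClientCert" -pe -sky exchange -m 96 -ss My -in "P2SRootCert" -is My -a sha1
```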

So when starting the installation, you point it to the publish settings file to allow it to get information about your subscriptions and such.


Then define a virtual network in which to place the management appliance. NOTE: In an Azure subnet the first available IP address always ends in .4. The setup also generates its own storage account in which to place the appliance.


Then we play the waiting game… After the upload is complete you should see the storage account appear where the VHD file was uploaded.


And you should also see the appliance starting up.


After it is uploaded and accessible you will need to log in to the appliance and set up the master CachePoint appliance.

Then go into System and create a CachePoint.



Now we have to create a golden image. Choose an RDSH session host from the gallery list.


Important! Place it in the same cloud service and storage account as the management appliance.


    Now after this has been deployed there are a couple of things we need to do.

NOTE: The golden image cannot be part of a domain.

1: Enable PowerShell remoting

2: Apply all the latest updates

3: Copy the Unidesk tools to the golden image under C:\Windows\Setup\Scripts

4: Run the unattended installation wizard by using the unattend.exe file in the scripts folder


Then run the Optimization feature


And lastly run the tools setup. Here we need to enter information about the management appliance IP and so on.


NOTE: It might be that you need to restart your golden image before the installation is successful.

And after the installation is done, we can go ahead and create a golden image OS layer based upon the template we just created.


    So this has been part 1 of Unidesk & Azure.

Implementing Containers on Windows Server 2016 and running IIS

So since TP3 was released yesterday, I have been quite busy trying to implement Containers on top of a Hyper-V host. Microsoft has been kind enough to give us a simple container image, which makes the first part pretty easy.

In order to deploy containers we need a container host. The easiest way to get started is to download a finished script from Microsoft, which we can run directly on a Hyper-V host to get a container host VM.

NOTE: Containers do not require Hyper-V, but this setup runs the container host as a Hyper-V VM.

    wget -uri -OutFile New-ContainerHost.ps1

This will download the PowerShell script from the URL. When we run it we need to define a couple of things, first of all the name of the VM and the password for the built-in administrator account. The script will in essence do a couple of things:

1: Download a finished sysprepped container host image

2: Enable the Container feature on the host VM (part of the unattend process); the last part of the script contains an unattend section which is processed against the container host VM

3: Boot the VM as a container host and use a PowerShell Direct session after the VM is booted to finish the setup
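Running the script then looks something like this; the VM name and password are placeholders, and the parameter names are as the script exposed them at the time of writing:

```powershell
# Creates the container host VM on the local Hyper-V host
.\New-ContainerHost.ps1 -VmName 'ContainerHostVM' -Password 'P@ssw0rd!'
```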

After that you have a running container host, and we can connect to the VM using Hyper-V Manager.


Not much to see yet. It is important to remember that the image will create a built-in NAT switch on the Docker host, with a predefined subnet range.


The Docker host will take the first IP in the range. Now if we run Get-ContainerHost and Get-ContainerImage, we should see that the VM is a container host and that we have a WindowsServerCore image available.

Now in order to create a container we need to run the following command

$container = New-Container -Name "MyContainer" -ContainerImageName WindowsServerCore -SwitchName "Virtual Switch"

The name of the switch needs to be identical to the one added; it can be viewed using Get-VMSwitch.

The reason why we store it in a variable is that we need to reference it later when using PowerShell Direct.

I can use the command Get-Container to see that it has been created. Now I have to start the container using Start-Container -Name "MyContainer"

    I can now see that the container is running and is attached to the NAT vSwitch


Great! So what now? :)

As I mentioned earlier, we needed to store the container in a variable in order to use it later; well, this is the time. Now we need to open a PowerShell Direct session to the container. If we did not store it, we can always use $container = Get-Container -Name "MyContainer" to get it again.

By using the command

Enter-PSSession -ContainerId $container.ContainerId -RunAsAdministrator

we can enter a remote session to the container. We can also see that the container ID is shown at the start of the prompt.


Also verify that it has gotten an IP address from the NAT network.


So now what? Let's start by installing IIS in the container; this can be done using the command Install-WindowsFeature -Name Web-Server

After that is installed, verify that the W3SVC service is running:

Get-Service -Name W3SVC


Now that we have deployed an IIS service in the container, we need to set up a static NAT rule to open port 80. In my case my lab resides on one subnet, while the NAT switch is on another.

NOTE: Another option is to enable the built-in administrator account so that we can use RDP against the container in the future (make sure you add the proper NAT rules)

net user administrator /active:yes

So in order to add a static forwarding rule on the container host VM, just use the following command to specify ports and IP addresses: Add-NetNatStaticMapping -NatName "ContainerNat" -Protocol TCP -ExternalIPAddress <external IP> -InternalIPAddress <container IP> -InternalPort 80 -ExternalPort 80

Next I just do a nasty firewall-disable edit

Set-NetFirewallProfile -Profile Domain,Public,Private -Enabled False

Then by running Get-NetNatStaticMapping on the container host I can see the rules I created. I also added some new rules for RDP purposes.


Now my Docker host is set up with two IP addresses, one internal and one external; when I connect to the external IP, the NAT rules kick in and forward me to the IIS service running on the container.

    Now I can see that I have a NAT session active


    And that IIS opens on the Container


Now that I have a container with IIS installed, I can stop it and create a new container image from it.

Stop-Container -Name "test2"

By using the command

$newimage = New-ContainerImage -ContainerName test2 -Publisher Demo -Name newimage -Version 1.0

So this has been a first introduction to Containers running on TP3. Note that many utilities do not yet work properly with Containers; for instance sconfig tries to list network interfaces, but these are not presented within a container, so some settings are not available.

Getting started with Docker Containers on Windows Server 2016 Technical Preview 3

So TP3 was released earlier today (about an hour ago) as an image on Azure, and I have been able to spend quite a lot of useful minutes on it, more specifically on Containers. TP3 is the first release that supports native containers.

Now Containers can be added to TP3 as a feature by running the command

Install-WindowsFeature -Name Containers

Now by default there isn't much we can do unless we have some proper images in place. Luckily, I have noticed that Microsoft has a GitHub site where it posts different examples for showing off Containers.

    Which can be found here –>

From here we also have a sample script which allows us to set up a new container host with a sample image. The Install-ContainerHost script will in essence set up a Windows Server 2016 container host on top of Hyper-V.

It will download a container image which is about 6 GB, so it might take some time before it has finished downloading.

We also have an example script under the same GitHub repository to deploy a container running Minecraft,

which was updated less than 15 minutes ago :)