
Application virtualization vs Application layering

So this is a blog post based on the session I held at NIC this year, where I talked about different technologies in the app-virt and app-layering landscape and discussed the pros and cons of using these types of products. These days a lot of businesses are virtualizing their applications. In some cases it makes sense, but there is also a new technology appearing in the landscape, application layering, so this post is about showing the difference. However, since this is a pretty long subject, I'm not going to cover everything in the same post, and no, it's not a VS battle…

So where are we today? We have our VM template, which is used to deploy virtual machines, using PXE or some built-in hypervisor deployment tool like vCenter or SCVMM.


We use that to deploy virtual machines, and if we need to update the VM template we have to start it and deploy patches to it, simple. However, we need other tools like System Center or WSUS to keep the other machines up to date, because there is no link between the VM template and the machines that we have provisioned. Another thing is application installation, where for many years we have been using Group Policy/scripts/deployment tools/System Center to install applications on the virtual machines that we deploy. Or we could pre-install these applications onto the VM template (golden image) and save ourselves some trouble. Installing multiple applications onto a machine also means we need ways to update those applications. This is typically done using an MSI update, or using System Center to replace the existing software with a new version. Installing all these applications onto a machine also means it writes registry entries and files to the drive, and maybe even some extra components which the software depends on.

Now we have been doing this for years so what are the issues??

  • Big golden image (by preinstalling many applications on the golden image we get longer deployment times and applications we don't need, slowing down the VM template)
  • Patch management (how are we going to manage patching applications across 200–500 virtual machines?)
  • Application compatibility (some applications might require different, non-compatible versions of the same runtime, for instance the Visual C++ redistributables)
  • Application security (some applications do weird stuff; what can we do about those?)
  • Application lifecycle management (how can we easily add and replace existing applications? We might also need different versions of the same application)
  • Software rot, registry bloating (you know there is a reason why there are registry cleaners, right?)

So what about Application virtualization?


Using application virtualization, applications are isolated within their own “bubble”, which includes a virtual filesystem and registry and other required components like DLL libraries. Since each application is isolated, they are not allowed to communicate with each other. In some cases we can define that an application can read/write to the underlying machine. We also have flexible delivery methods: either cached mode, where the package is stored on the local machine, or streaming mechanisms. This gives us:

  • No bloated filesystem / registry
  • No application conflicts
  • Added application security
  • No applications installed on the underlying OS
  • Multiple runtime environments
  • Easier app customization
  • Easier update management

Now there are two products I focused on in the presentation which are ThinApp and App-V5.


Now App-V requires an agent installed on each host (with some limitations on supported OS versions), but it is flexible in terms of using caching or streaming (called Shared Content Store).

We can either manage it using the App-V infrastructure or standalone using PowerShell cmdlets. We can also integrate it with System Center and even Citrix.
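A minimal standalone sketch using the App-V 5 client cmdlets (the package path and name below are examples, not from any particular setup):

Import-Module AppvClient
# Allow embedded package scripts to run (some packages need this)
Set-AppvClientConfiguration -EnablePackageScripts 1
# Add the package from a share and publish it to all users on the host
Add-AppvClientPackage -Path "\\server\appv\7zip.appv" | Publish-AppvClientPackage -Global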
However, in many cases we need to have older versions of Internet Explorer running, for instance when we are upgrading to another operating system, and App-V does not support this. We have also seen that App-V puts extra I/O traffic on the host compared to other app-virt solutions.


VMware ThinApp

VMware ThinApp is a bit different; it does not have any infrastructure. You basically have the capturing agent, which you use to create an app-virt package, and the resulting package is self-contained. When you run an application, the ThinApp agent is actually running beneath the package that we captured. A ThinApp package can be created as an MSI or EXE file, which allows for easy deployment using existing tools like System Center or other deployment software. Most of the logic of a ThinApp package is stored within the package.ini file.

However, if we want some form of management, we need a Horizon setup, and there is no PowerShell support; in order to get that, we would need to develop cmdlets using the SDK. Since there is no infrastructure, we don't have any usage tracking feature either. However, we have a handy update feature called AppSync, which is configured in each package's package.ini file.
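For illustration, a minimal package.ini sketch of the AppSync parameters (the URL and intervals are assumptions, not defaults):

[BuildOptions]
; Check this URL for an updated package at the given interval
AppSyncURL=https://appsync.contoso.com/firefox.exe
AppSyncUpdateFrequency=1d
; Stop the package from starting if it has not been able to update within this period
AppSyncExpirePeriod=30d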


Both of these solutions use a sequencing VM to do the packaging/capturing process. ThinApp, however, supports a larger number of operating systems.

Now what about application layering? It is important to remember that app-virt runs in user space, and therefore there are some restrictions on what it can run (antivirus, VPN, boot stuff, kernel stuff, drivers and so on).

Application layering is a bit different: it basically uses a filter driver to merge different virtual disks, which then make up a virtual machine.


So we have the Windows operating system, which might be its own layer (master image), and then different layers which run on top: application layers, which are read-only (VHD disks, for instance), and perhaps a personalization layer, which might be a read/write layer.

Using application layering, applications behave normally, just as they would when installed directly on an operating system, since it is pretty much a merger of different VHD disks.

Since they aren't isolated, the “capturing process” is much simpler, unlike app-virt, where the sequencing part might take a long time. We can then add different applications to different layers, and we can for instance distribute the read/write layers across different storage locations. This allows for simpler application lifecycle management: if we have multiple virtual machines using the same application layer, we can just update the main application layer and the virtual machines will get the new application.

In the application layer space there are three products/vendors I focused on.

  • Unidesk
  • VMware AppVolumes
  • Citrix AppDisks

NOTE: There are multiple vendors in this space as well.


Now Unidesk is the clear leader in this space, since they have support for multiple hypervisors and even Azure! They can also do OS-layering as part of the setup.


They can layer pretty much everything since they are integrated into the hypervisor stack. So it is not entirely correct to state that they are an application layering vendor; they are a layering vendor, period.
NOTE: The only thing I found out they cannot layer is cake… Also, they have a Silverlight-based console, and they don't have instant app access like some of the others do. But there is a new version around the corner.

Citrix AppDisks

Then we have Citrix AppDisks, which is going to be released in the upcoming version 7.8. The good thing with AppDisks is that it is integrated into the Citrix Studio console. AppDisks is applications only; Citrix has other solutions for the OS layer (MCS or PVS), both of which AppDisks will support. They also have PVD for the profile, which can be writable, and they have Profile Management as well, which makes Citrix a good all-round solution.


Now AppDisks supports, as of now, XenServer and VMware, and surprise! you need an existing Citrix infrastructure. So no RDS/View. AppDisks also has no instant app delivery, and it is only for virtual machines.

VMware AppVolumes

Now the last piece of the puzzle is AppVolumes from VMware, which is agent based and runs on top of the operating system. The good thing about this is that it offers instant application delivery, and it can also work on physical devices since it is agent based. However, you should be aware of the requirements for using AppVolumes on a physical device (they should be non-persistent and have constant network access; I smell Mirage).


It has a simple HTML5-based management console. It only does hypervisor integration with ESX and vCenter, but it can be used in RDS/Citrix environments: just install the agents, do some management, and you are good to go.

Now we have seen some of the different technologies out there; to summarize, I would state the following.

Application virtualization should be used for the following:

  • Application Isolation
  • Multiple runtime versions
  • Streaming to non-persistent machines
  • Application compatibility

Application layering should be used for the following:

  • Application lifecycle management
  • Image management
  • Profile management (if supported by vendors)
  • Applications which require drivers / boot stuff

Now you can also view the slidedeck of the presentation here –>

Videos for the different vendors can be found here –>

And lastly, what is coming in the future? Most likely container-based applications, which have their own networking stack, more security requirements, and are contained within their own kernel space. Here we have providers such as Turbo.net, which were delivering container-based applications before Microsoft announced container support for Windows.

Office365 on Terminal server done right

So this is a blog post based upon a session I had at the NIC conference, where I spoke about how to optimize the delivery of Office365 in a VDI/RDSH environment.

There are multiple things we need to think and worry about. It might seem a bit negative, but that is not the idea; I'm just being realistic.

So this blogpost will cover the following subjects

  • Federation and sync
  • Installing and managing updates
  • Optimizing Office ProPlus for VDI/RDS
  • Office ProPlus optimal delivery
  • Shared Computer Support
  • Skype for Business
  • Outlook
  • OneDrive
  • Troubleshooting and general tips for tuning
  • Remote display protocols and when to use which

So what is the main issue with using Terminal Servers and Office365? The Distance….

This is the headline of a blog post on Citrix Blogs about XenApp best practices.


So how do we fix this when we have our clients on one side, the infrastructure somewhere else, and Office365 in a different region, separated by long miles, while still trying to deliver the best experience for the end user? In some cases we need to compromise to be able to deliver the best user experience, because that should be our end goal: deliver the best user experience.


User Access

First off: do we need federation, or is plain password sync enough? Password sync is easy and simple to set up and does not require any extra infrastructure. We can also configure it to use password hash sync, which will allow Azure AD to handle the authentication process. The problem with doing this is that we lose a lot of capabilities which we might use in an on-premises solution:

  • Audit policies
  • Existing MFA (If we use Azure AD as authentication point we need to use Azure MFA)
  • Delegated Access via Intune
  • Lockdown and password changes (changes need to be synced to Azure AD before they take effect)

NOTE: Since I am above average interested in Netscaler, I wanted to include another sentence here. For those that don't know, Netscaler with AAA can in essence replace ADFS, since Netscaler now supports acting as a SAML iDP. An important limitation to note is that Netscaler does not support the Single Logout profile or the Identity Provider Discovery profile from the SAML profiles. We can also use Netscaler Unified Gateway with SSO to Office365 with SAML. The setup guide can be found here


NOTE: We can also use VMware Identity Manager as a replacement to deliver SSO.

Using ADFS gives a lot of advantages that password hash sync does not:

  • True SSO (While password hash gives Same Sign-on)
  • If we have Audit policies in place
  • Disabled users get locked out immediately, instead of waiting up to 3 hours for the Azure AD Connect sync engine to replicate the change (and 5 minutes for password changes)
  • If we have on-premises two-factor authentication we can most likely integrate it with ADFS but not if we have only password hash sync
  • Other security policies, like time of the day restrictions and so on.
  • Some licensing stuff requires federation

So to sum it up, please use federation

Initial Office configuration setup

Secondly, the Office suite from Office365 uses something called Click-to-Run, which is kind of an App-V wrapped Office package from Microsoft, allowing easy updates directly from Microsoft instead of dabbling with the MSI installer.

In order to customize this installer we need to use the Office deployment toolkit which basically allows us to customize the deployment using an XML file.

The deployment tool has three switches that we can use.

setup.exe /download configuration.xml

setup.exe /configure configuration.xml

setup.exe /packager configuration.xml

NOTE: Using /packager creates an App-V package of Office365 Click-to-Run and requires a clean VM, like we use when sequencing App-V packages; the result can then be distributed using existing App-V infrastructure or other tools. Remember to enable scripting on the App-V client, and do not alter the package using the sequencer, as that is not supported.

The download switch downloads Office based upon the configuration file; here we can specify bit edition, version number, which Office applications to include, the update path and so on. The configuration XML file looks like this:


<Configuration>
  <Add OfficeClientEdition="64" Branch="Current">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us"/>
    </Product>
  </Add>
  <Updates Enabled="TRUE" Branch="Business" UpdatePath="\\server1\office365" TargetVersion="16.0.6366.2036"/>
  <Display Level="None" AcceptEULA="TRUE"/>
</Configuration>


Now if you are like me and don't remember all the different XML parameters, you can use this site to customize your own XML file –> http://officedev.github.io/Office-IT-Pro-Deployment-Scripts/XmlEditor.html

When you are done configuring the XML file you can choose the export button to have the XML file downloaded.

If we have specified a specific Office version as part of configuration.xml, it will be downloaded to a separate folder and stored locally when we run setup.exe /download configuration.xml.

NOTE: The different build numbers are available here –> http://support2.microsoft.com/gp/office-2013-365-update?

When we are done downloading the Click-to-Run installer, we can change the configuration file to reflect the path of the Office download:

<Configuration> <Add SourcePath="\\share\office" OfficeClientEdition="32" Branch="Business">

Then we run setup.exe /configure configuration.xml to install Office from that path.

Deployment of Office

The main deployment is done by running setup.exe /configure configuration.xml on the RDSH host and waiting for the installation to complete.

Shared Computer Support

<Display Level="None" AcceptEULA="True" /> 
<Property Name="SharedComputerLicensing" Value="1" />

In the configuration file we need to remember to enable shared computer licensing, or else we get this error message.


If you forgot, you can also enable it using a registry file (just store the following as a .reg file):

Windows Registry Editor Version 5.00

; The key path below is the standard Click-to-Run configuration key (verify against your install)
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\15.0\ClickToRun\Configuration]
"InstallationPath"="C:\\Program Files\\Microsoft Office 15"
"SharedComputerLicensing"="1"

Now we are actually done with the golden image setup. Don't start the applications yet if you want to use it for an image. Also make sure that there are no licenses installed on the host, which can be checked using this tool:

cd 'C:\Program Files (x86)\Microsoft Office\Office15'
cscript.exe .\OSPP.VBS /dstatus


This should be blank!
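If a license does show up, it can be removed with the same script; a hedged one-liner, where XXXXX stands for the last five characters of the product key that /dstatus listed:

cscript.exe .\OSPP.VBS /unpkey:XXXXX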

Another issue is that when a user starts an Office app for the first time, he/she needs to authenticate once; a token is then stored locally in the %localappdata%\Microsoft\Office\15.0\Licensing folder, and it will expire within a couple of days if the user is not active on the terminal server. Think about it: if we have a large farm with many servers, and a user is redirected to another server, he/she will need to authenticate again. If the user keeps hitting the same server, the token will automatically refresh.
NOTE: This requires Internet access to work.
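A quick way to check whether a user has a cached token on a given host is simply to list the licensing folder mentioned above:

Get-ChildItem "$env:LOCALAPPDATA\Microsoft\Office\15.0\Licensing"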

It is also important to remember that the Shared Computer Support token is bound to the machine, so we cannot roam that token between computers using any profile management tool.

But a nice thing is that if we have ADFS set up, Office will automatically activate against Office365; this is enabled by default. So no pesky logon screens.

We just need to add the ADFS domain site to the trusted sites in Internet Explorer and define this setting as well:

Automatic logon only in Intranet Zone
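As a hedged sketch, the trusted-sites part can also be pushed per user via the registry (sts.contoso.com is an assumed ADFS hostname; a site-to-zone assignment GPO achieves the same):

# Zone 2 = Trusted sites; the key path encodes the host name
$zoneMap = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\contoso.com\sts"
New-Item -Path $zoneMap -Force | Out-Null
Set-ItemProperty -Path $zoneMap -Name "https" -Type DWord -Value 2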


Which basically allows us to resolve the token issue with Shared Computer Support.

Optimizing Skype for Business

So in regards to Skype for Business, what options do we have to deliver a good user experience? There are six options that I want to explore:

  • VDI plugin
  • Native RDP with UDP
  • Native PCoIP
  • Native ICA (w or without audio over UDP)
  • Local app access
  • HDX Optimization Pack 2.0

Now the issue with the first one (which is a Microsoft plugin) is that it does not support Office365; it requires on-premises Lync/Skype. Another issue is that you cannot use the VDI plugin and the optimization pack at the same time, so if users are on the VDI plugin and you want to switch to the optimization pack, you need to remove the VDI plugin first.

ICA uses TCP and works with most endpoints, since everything basically runs directly on the server/VDI; the issue here is that we get no server offloading. So if we have 100 users in a video conference, we might have an issue. If the other options are not available, try to set up HDX RealTime using audio over UDP for better audio performance. Both RDP and PCoIP use UDP for audio/video and therefore do not require any other specific customization.

But the problem with all of these is that they create a tromboning effect, consume more bandwidth, and eat up resources on the session host.


Local App Access from Citrix might be a viable option, which in essence means that a local application is dragged into the Receiver session, but this requires that the end user has Lync/Skype installed locally. It also requires Platinum licenses, so not everyone has that, plus it only supports Windows endpoints…

The last and most important piece is the HDX Optimization Pack, which allows server offloading using the HDX media engine on the end-user device.

The optimization pack supports Office365 with federated users and cloud-only users. It also supports the latest clients (Skype for Business) and can work in conjunction with Netscaler Gateway and a Lync Edge server for on-premises deployments. This means that we can get Mac/Linux/Windows users using server offloading, and with the latest release it also supports Office Click-to-Run and works with the native Skype UI.

So using this feature we can offload CPU/memory, and eventually GPU, from the RDSH/VDI instances back to the client, and audio/video traffic goes directly to the endpoint instead of through the remote session.


Here is a simple test showing the difference between running Skype for Business on a terminal server with and without HDX Optimization Pack 2.0.


Here is a complete blog post on setting up HDX Optimization Pack 2.0: https://msandbu.wordpress.com/2016/01/02/citrix-hdx-optimization-pack-2-0/

Now for more of this part we also have Outlook, which for many is quite the headache… and that is mostly because of the OST files that are dropped in the %localappdata% folder for each user. Office ProPlus has a setting called fast access, which means that Outlook will in most cases try to contact Office365 directly, but if the latency becomes too high, the connection will drop and Outlook will search through the OST files instead.

Optimizing Outlook

Now this is the big elephant in the room and causes the most headaches. Outlook against Office365 can be set up in two modes: cached mode or online mode. Online mode uses direct access to Office365, but users lose features like instant search. In order to deliver a good user experience we need to compromise: the general guideline here is to configure cached mode with 3 months of email, and store the OST file (which contains the emails, calendar, etc., and is typically 60–80% of the size of the mailbox) on a network share. These OST files are by default created in the local appdata profile, and streaming profile management solutions typically aren't a good fit for the OST file.

It is important to note that Microsoft supports having OST files on a network share, IF there is adequate bandwidth and low latency… and only if there is one OST file per user and the users run Outlook 2010 SP1 or later.

NOTE: We can use alternatives such as FSLogix or Unidesk to handle profile management in a better way.

I'll come back to the configuration part later in the policy section. It is also important to use Outlook 2013 SP1 or later, which gives MAPI over HTTP instead of RPC over HTTP, and does not consume as much bandwidth.


In regards to OneDrive, try to exclude it from RDSH/VDI instances, since the sync engine basically doesn't work very well there, and now that each user has 1 TB of storage space, it will flood the storage quicker than anything else if users are allowed to use it. Also, there are no central management capabilities, and network shares are not supported.

There are some changes in the upcoming unified client in terms of deployment and management, but it is still not a good solution.

You can remove it from the Office365 deployment by adding this to the configuration file:

<ExcludeApp ID="Groove" />

Optimization and group policy tuning

Now, something that should be noted is that before installing Office365 Click-to-Run you should optimize the RDSH session hosts or the VDI instances. A blog post published by Citrix noted a 20% performance improvement after some simple RDSH optimizations were done.

Both VMware and Citrix have free tools for RDSH/VDI optimization, which should be looked at before doing anything else.

Now the rest is mostly Group Policy tuning. First we need to download the ADMX templates from Microsoft (either 2013 or 2016), then we need to add them to the central store.

We can then use Group Policy to manage the specific applications and how they behave. Another thing to think about is using the Target Version group policy to control which specific build we want to be on, so we don't get a new build each time Microsoft rolls out a new version, because from experience I can tell that some new builds include new bugs –> https://msandbu.wordpress.com/2015/03/09/trouble-with-office365-shared-computer-support-on-february-and-december-builds/


Now the most important policies are stored in the computer configuration.

Computer Configuration –> Policies –> Administrative Templates –> Microsoft Office 2013 –> Updates

Here there are a few settings we should change to manage updates.

  • Enable Automatic Updates
  • Enable Automatic Upgrades
  • Hide Option to enable or disable updates
  • Update Path
  • Update Deadline
  • Target Version

These control how we do updates. We can enable automatic updates without an update path and a target version, which will essentially make Office auto-update to the latest version from Microsoft. Or we can specify an update path (a network share where we have downloaded a specific version), specify a target version, enable automatic updates, and define a deadline, for instance for a specific OU. This triggers an update using a scheduled task which is added with Office, and when the deadline is approaching, Office has built-in triggers to notify end users of the deployment. So using these policies we can have multiple deployments for specific users/computers, some on the latest version and some on a specific version.

The next thing is for Remote Desktop Services only: if we are using pure RDS, make sure that we have an optimized setup. NOTE: Do not touch this if everything is working as intended.

Computer Policies –> Administrative Templates –> Windows Components –> Remote Desktop Services –> Remote Desktop Session Host –> Remote Session Environment

  • Limit maximum color depth (set to 16 bits; less data across the wire)
  • Configure compression for RemoteFX data (set to bandwidth optimized)
  • Configure RemoteFX Adaptive Graphics (set to bandwidth optimized)

Next there are more Office specific policies to make sure that we disable all the stuff we don’t need.

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> Miscellaneous

  • Do not use hardware graphics acceleration
  • Disable Office animations
  • Disable Office backgrounds
  • Disable the Office start screen
  • Suppress the recommended settings dialog

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> Global Options –> Customize

  • Menu animations (disabled!)

Next is under

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> First Run

  • Disable First Run Movie
  • Disable Office First Run Movie on application boot

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> Subscription Activation

  • Automatically activate Office with federated organization credentials

Last but not least, define Cached mode for Outlook

User Configuration –> Administrative Templates –> Microsoft Outlook 2013 –> Account Settings –> Exchange –> Cached Exchange Modes

  • Cached Exchange Mode (File | Cached Exchange Mode)
  • Cached Exchange Mode Sync Settings (3 months)

Then specify the location of the OST files, which should of course point somewhere else:

User Configuration –> Administrative Templates –> Microsoft Outlook 2013 –> Miscellaneous –> PST Settings

  • Default location for OST files (change this to a network share)

Network and bandwidth tips

Something you need to be aware of is the bandwidth usage of Office in a terminal server environment.

Average latency to Office365 is 50–70 ms.

  • 2000 "heavy" users using Online mode in Outlook: about 20 Mbps at peak
  • 2000 "heavy" users using Cached mode in Outlook: about 10 Mbps at peak
  • 2000 "heavy" users using audio calls in Lync: about 110 Mbps at peak
  • 2000 "heavy" users working in Office using RDP: about 180 Mbps at peak

Which means that using, for instance, the HDX Optimization Pack for 2000 users might “remove” 110 Mbps of bandwidth usage.

Microsoft also has an application called the Office365 Client Analyzer, which can give us a baseline to see how our network performs against Office365 (DNS, latency to Office365 and such). DNS is quite important with Office365, because Microsoft uses proximity-based load balancing, and if your DNS server is located somewhere other than your clients, you might be sent in the wrong direction. The client analyzer can give you that information.
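A couple of quick sanity checks that can be run from a session host (not a replacement for the analyzer; outlook.office365.com is just one of the Office365 endpoints):

# Which IP does our DNS resolve for the Exchange Online endpoint?
Resolve-DnsName outlook.office365.com | Select-Object Name, IPAddress
# Can we reach it on 443, and what does the round trip look like?
Test-NetConnection outlook.office365.com -Port 443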


(We could, however, buy ExpressRoute from Microsoft, which would give us low-latency connections directly into their datacenters, but this is only suitable for LARGER enterprises, since it costs HIGH amounts of $$.)


But this is for the larger enterprises, and it allows them to overcome a basic limitation of the TCP stack: one external NAT address can support only about 4,000 concurrent connections, given that Outlook consumes about 4 concurrent connections and Lync some as well.

Microsoft recommends that in an online scenario the clients have no more than 110 ms of latency to Office365, and in my case I have about 60–70 ms of latency. If we combine that with some packet loss or an adjusted MTU, well, you get the picture.

Using Outlook Online mode, we should have a maximum latency of 110 ms; above that, the user experience will decline. Another thing is that using Online mode disables instant search. We can use the Exchange traffic Excel calculator from Microsoft to calculate the bandwidth requirements.

Some rules of thumb: do some calculations! Use the bandwidth calculators for Lync/Exchange, which might point you in the right direction. We can also use WAN accelerators (with caching), which might lighten the burden on the bandwidth usage. You also need to think about the bandwidth usage if you have automatic updates enabled in your environment.

Troubleshooting tips

As the last part of this LONG post I have some general tips on using Office in a virtual environment. This is just going to be a long list of different tips.

  • For Hyper-V deployments, check VMQ and the latest NIC drivers
  • 32-bit Office C2R typically works better than 64-bit
  • Antivirus? Make exceptions!
  • Remove Office products that you don't need from the configuration, since they add extra download traffic and more stuff on the virtual machines
  • If you don't use Lync and the audio service, disable the audio service!
  • If using RDSH, check the Group Policy settings I recommended above
  • If using Citrix or VMware, make sure to tune the policies for an optimal experience, and use the RDSH/VDI optimization tools from the different vendors
  • If Outlook is sluggish, check that you have adequate storage I/O to the network share (no, high bandwidth is not enough if the OST files are stored on a simple RAID with 10k disks)
  • If all else fails with Outlook, disable MAPI over HTTP; in some cases where getting new mail takes a long time, disabling this used to be a known fix (see the registry sketch below)
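A hedged sketch of that last tip in PowerShell (MapiHttpDisabled is a known troubleshooting key for Outlook 2013 SP1 and later; test it on a pilot user before rolling it out):

# Force Outlook to fall back to RPC over HTTP for the current user
New-Item -Path "HKCU:\Software\Microsoft\Exchange" -Force | Out-Null
Set-ItemProperty -Path "HKCU:\Software\Microsoft\Exchange" -Name "MapiHttpDisabled" -Type DWord -Value 1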

Remote display protocols

Last but not least, I want to mention this briefly: if you are setting up a new solution and thinking about choosing one vendor over the other, the first things to consider are:

  • Endpoint requirements (thin clients, Windows, Mac, Linux)
  • Requirements in terms of GPU, mobile workers, etc.

Now we have done some tests, which showed that Citrix has the best features across the different sub-protocols:

  • ThinWire (best across high-latency lines; using TCP it works at over 1800 ms of latency)
  • Framehawk (works well on lines with 20% packet loss)

While PCoIP performs a bit better than RDP. I have another blog post on the subject here –> https://msandbu.wordpress.com/2015/11/06/putting-thinwire-and-framehawk-to-the-test/

Getting started with VMware AppVolumes

I have been working a lot with different layering technologies these days, and since I have an upcoming presentation on app-virt vs app-layering, I decided it was time to write a blog post about VMware AppVolumes.

Now VMware AppVolumes (formerly known as CloudVolumes) is a piece of layering technology. It uses VHD attach and a filter driver to handle application calls and file system redirects to either an application layer or a writable volume. In terms of architecture it is pretty simple.


It consists of an AppVolumes Manager and an AppVolumes agent, which communicate with each other. The cool thing about AppVolumes is that it can be used in a virtual environment or in a physical environment. NOTE: However, the physical environment is limited to non-persistent physical machines; I'm guessing PVS machines or VMware Mirage.

In a physical environment it uses regular SMB-based shares and in-guest VHD mounts.

For virtual environments using ESX, it uses VMDK direct attach, which is the preferred method.

The AppVolumes agent runs as a service and looks at assigned resources either during login (instant access) or at reboot (machine or user based). AppStacks (application layers) are read-only.

In terms of writable volumes, we have 3 profiles to choose from:

  • User Installed Applications (UIA) only
  • User Profile only
  • UIA and User Profile

We define which profile we want when setting up the writable volume. Multiple AppStacks can be added to a user, but only one writable volume can be attached.

NOTE: You can download a trial from the VMware website.

After the installation is complete, which takes about 10 minutes, all you need to do is go into the AppVolumes management console, which is created as a shortcut on the desktop.


Now after you log in, it's pretty simple: you can either create an AppStack or create a writable volume.


For instance, you should have a clean virtual machine with the AppVolumes agent installed if you want to create an application layer (kind of like what we do when capturing App-V packages).

After an AppStack is created and imported into AppVolumes, you can see which applications are added.

Here is a simple AppStack with Firefox and VLC which has not been assigned to anything yet.


After you have assigned an AppStack to a user, they can log out and log in again.


It takes some time in a physical environment. After a user has logged in, he/she will see the new applications appear on the desktop (if the shortcuts are published there). If we look in Disk Management we can see that the AppStack layer is attached.


Creating a writable volume is pretty much the same: choose an entity in AD,


then choose a profile and a share path. Remember: either user-installed applications only, or user-installed applications plus the user profile.


So AppVolumes is a pretty cool piece of tech and allows for simple application lifecycle management: just create an AppStack, install the applications, and then deliver it, either on a physical setup or virtually on VMware. Since it is agent based it can also be used with Citrix MCS or PVS, for instance. However, watch out for the writable volumes; it looks like VMware wants to move more over to User Environment Manager.

Advanced backup options for Hyper-V 2012 R2 on Veeam

Some questions that come up again and again concern the advanced backup features for Hyper-V when using Veeam. How does Veeam take a backup from a Hyper-V host?

In simple day-to-day virtual machine life, the reads & writes consist of I/O traffic from a virtual machine to a VHD/VHDX residing on a network share, SAN/SMB and such.


When we set up Veeam to take a backup of a virtual machine, the following happens. First, Veeam triggers a snapshot using the Hyper-V Integration Services Shadow Copy Provider on the particular Hyper-V host that the virtual machine resides on, and an AVHDX file is created. This can be done using either a hardware VSS provider or a software VSS provider.


A hardware provider manages shadow copies at the hardware level by working in conjunction with a hardware storage adapter or controller. A software provider manages shadow copies by intercepting I/O requests at the software level, between the file system and the volume manager. The number of VMs in a group is limited depending on the VSS provider: for a software VSS provider, 4 VMs; for a hardware VSS provider, 8 VMs.
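To see which VSS providers are actually registered on a given Hyper-V host, you can use the built-in vssadmin tool (output varies with the storage vendor):

vssadmin list providers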

NOTE: Using an off-host proxy requires a storage solution which supports hardware transferable shadow copies on a SAN. If we for instance use SMB-based storage for Hyper-V, we do not require this –> http://helpcenter.veeam.com/backup/hyperv/smb_off-host_backup.html

Using on-host backup means that the transport role will be running on a Hyper-V host which has access to the running virtual machines.

Make sure that the integration services are running and up to date before doing an online backup. You can check this from Hyper-V PowerShell –> Get-VM | FT Name, IntegrationServicesVersion
More troubleshooting on integration services here –> https://www.veeam.com/kb1855

So what happens during an online backup (if all the requirements are met) is:

1: Veeam interacts with the Hyper-V host VSS service and requests a backup of the specific VM

2: The VSS writer on the Hyper-V host forwards the request to the Hyper-V integration components inside the VM guest OS

3: The integration components communicate with the VSS framework inside the guest OS and request a backup of all VSS-aware applications inside the VM

4: The VSS writers of the VSS-aware applications get the application data into a state suitable for backup

5: After the applications are quiesced, the VSS inside the virtual machine takes an internal snapshot using the software-based VSS provider

6: The integration components notify the hypervisor that the VM is ready for backup, and Hyper-V takes a snapshot of the volume which the virtual machine is located on. An AVHDX file is then generated, and all WRITES are redirected there

7: The volume snapshot is presented to Veeam using either off-host or on-host backup (if the off-host proxy is not available, it will fall back to an on-host proxy on a designated host)

8: The data is then processed on the proxy server and moved to the repository


NOTE: An off-host setup requires a dedicated Hyper-V host (it requires Hyper-V to have access to the VSS providers); when using off-host, the proxy cannot be part of the Hyper-V cluster, and make sure it has READ-only access to the LUN and that your storage vendor supports readable shadow volume copies.

On-host backup will use the Veeam transport service on the Hyper-V machine. If the volume is placed on a CSV volume, the CSV Software Shadow Copy Provider will be used for the snapshot creation process.

NOTE: During the backup process, Veeam will try to use its own CBT driver on the Hyper-V host to make sure that it only takes a backup of the changed blocks. (Hyper-V does not natively provide CBT; this will change in Windows Server 2016.)

NOTE: If CBT is not working in Veeam, run the Reset-HvVmChangeTracking PowerShell cmdlet (http://helpcenter.veeam.com/backup/80/powershell/reset-hvvmchangetracking.html), or, if the virtual machines are being shut down during the backup process, try to disable ODX.

If changed block tracking is not enabled or not working as it should, the backup proxy will copy the virtual machine and use Veeam's proprietary filtering mechanism: instead of tracking changed blocks of data, Veeam Backup & Replication filters out unchanged data blocks. During backup, Veeam Backup & Replication consolidates the virtual disk content, scans through the VM image and calculates a checksum for every data block. Checksums are stored as metadata in the backup files next to the VM data.

So what about the more advanced features for Hyper-V?

Hyper-V Settings

  • Enable Hyper-V guest quiescence

With this option, the VM OS is suspended and the content of the system memory and CPU is written to a dump file, in order to preserve the data integrity of files for, for instance, transactional applications (this is known as offline backup).

Note that when using this feature, Veeam will not be able to perform application-aware tasks like:

    • Applying application-specific settings to prepare applications for VSS-aware restore at the next VM startup
    • Truncating transaction logs after successful backup or replication.
  • Take crash consistent backup instead of suspending VM

If you do not want to suspend the virtual machine during backup, you can take a crash-consistent backup instead. This is equal to a hard reset of a virtual machine; it does not involve any downtime for the virtual machine, but it does not preserve the data integrity of open files and may result in data loss.

  • Use changed block tracking data

This uses the Veeam filter driver to look at changed blocks before data is copied via the off-host or on-host proxy to the repository.

  • Allow Processing of multiple VMs with a single volume snapshot

If you have multiple virtual machines within the same job, this feature will help reduce the load on the Hyper-V hosts, as it triggers one volume snapshot for multiple machines instead of one per virtual machine.

NOTE: The virtual machines must be located on the same host and must reside on a file share which uses the same VSS provider.

This is the first post in a series on Veeam and Hyper-V processing.

Hiding and publishing applications using XenDesktop 7.7 and PowerShell

So when creating a delivery group in Studio, you have limited capabilities for controlling who gets access to a certain delivery group or application. NOTE: This is not Smart Access on the Netscaler; this is purely a Citrix Studio feature.

We have, for instance, filtering on users.


And after we have created the delivery group, we also have the option to define access rules; by default there are two rules created per delivery group.


One rule allows access using Access Gateway and one is for direct connections using Storefront. So what if we need more customization options? Enter PowerShell for Citrix…

First, before doing anything, we need to load the Citrix snap-ins in PowerShell:

asnp citrix.*

Then we use the command Get-BrokerAccessPolicyRule. By default there are two rules for each delivery group: one called NAME_AG and one called NAME_Direct. The AG one is used for access via Netscaler Gateway, the other for direct access via Storefront.
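A minimal sketch for listing the rules and the properties we care about (the OS_ names below follow this post's delivery group):

asnp Citrix.*
# Show each rule, whether it is enabled, and which connection type it matches
Get-BrokerAccessPolicyRule | Select-Object Name, Enabled, AllowedConnections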

From this OS_AG policy we can see that it is enabled, that allowed connections are configured to be via Netscaler Gateway, and that it is filtered on Domain Users.


We can see from the other policy, OS_Direct, that it is enabled and that it is for connections NotViaAG.


So how do we hide the delivery group from external users? The simplest way is to disable the access policy rule for AG connections:

Set-BrokerAccessPolicyRule -name OS_AG -Enabled $false

Via Netscaler


Via Storefront


Or what if we want to exclude a certain Active Directory user group? For instance, some users are members of many Active Directory groups but are not allowed access to external sessions.

Set-BrokerAccessPolicyRule -Name OS_AG -ExcludedUserFilterEnabled $True -ExcludedUsers "TEST\Domain Admins"

This will disable external access to the delivery group for all members of Domain Admins, even if they are allowed access through another group membership.

Azure Stack and the rest of the story

Now for part two of my Azure Stack and infrastructure story. Microsoft is making a big leap with the Azure Stack release. The current setup moves towards more use of the software-defined solutions which are included in Windows Server 2016. This includes features like micro-segmentation, load balancing, VXLAN, and Storage Spaces Direct (which is a hyperconverged configuration of Storage Spaces).

We also have ARM, which handles the provisioning, using APIs and DSC for custom setup of virtual machine instances.

More details on the PaaS features will come during the upcoming weeks, and in the first release only Web Apps will be added.

So Microsoft is now providing features which previously were often delivered by third parties, and unlike Azure Pack this does not require any System Center components and runs natively on Hyper-V.


Now what else is missing in this picture? If we want to run this at larger scale, we need to think about the larger picture in a datacenter; using VXLAN will also require some custom setup.

Also, using Storage Spaces Direct in an Azure Stack fabric will require an RDMA networking infrastructure.

(NOTE: Storage Spaces Direct has a limit in terms of max nodes)

(“Networking hardware: Storage Spaces Direct relies on a network to communicate between hosts. For production deployments, it is required to have an RDMA-capable NIC (or a pair of NIC ports).”) ref https://technet.microsoft.com/en-us/library/mt126109.aspx

This will also allow use of the latest networking capability, which is called SET (Switch Embedded Teaming).
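As a hedged sketch of what that looks like in Windows Server 2016 PowerShell (the adapter names are examples):

# Verify that the physical NICs are RDMA capable
Get-NetAdapterRdma
# Create a SET-enabled vSwitch across two physical NICs (no LBFO team needed)
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true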

So in both cases you need an RDMA-based infrastructure, so remember that! You need to rethink the physical networking. Arista, for instance, has some good options in terms of RDMA, and they also support OMI, which allows for automation. Another piece of the puzzle is backup: since Azure Stack delivers the management/provisioning and some fundamental services, we need backup of our tenants' data. Storage Spaces Direct delivers resiliency, but not backup.

We need a backup solution which can integrate with Hyper-V and has a REST API, which would then allow us to build custom solutions into Azure Stack.

Also, a monitoring solution needs to be in place. Azure Stack adds a lot of extra complexity in terms of infrastructure, and a lot of new services which are crucial, especially in the networking/storage provider space. As of now, I'm guessing that System Center will be the first monitoring solution to support Azure Stack.

Another thing is load balancing: since we have more web-based services for the different purposes, and not MMC-based consoles like we have in System Center, we need to deliver high availability for, for instance, the portal web and the ARM interface.

So in my ideal world, the Azure Stack drawing should look like this:


Running Azure Stack nested on VMware Workstation

Well, since the release of the Azure Stack preview earlier today, I've been quite the busy bee… The only problem is that I didn't have adequate hardware to play around with it… or so I thought. I set up a virtual 2016 server in my VMware Workstation.

I added some SATA-based disks, since I know this is “recommended hardware” as part of the PoC.


Also remember to set the guest OS type to Hyper-V (unsupported).


After that I had to change some parameters in some of the scripts, since there is a PowerShell script which basically checks if the host has enough memory installed. This I changed within Invoke-AzureStackDeploymentPreCheck.ps1.


Now, when you run the first AzureDeploy script, it will mount the PoC install as a read-only VHD, and since Invoke-AzureStackDeploymentPreCheck.ps1 is stored on that read-only VHD, you cannot make any changes to it. So you first need to change the DeployAzureStack script to mount the disk as read/write.
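As a hedged illustration, the difference boils down to the -ReadOnly switch on Mount-VHD (the path below is an example, not the actual file name from the PoC):

# Read-only mount (what the deploy script does by default)
Mount-VHD -Path "C:\AzureStackPoC\PoC.vhdx" -ReadOnly
# Remount read/write so files on the VHD can be edited
Dismount-VHD -Path "C:\AzureStackPoC\PoC.vhdx"
Mount-VHD -Path "C:\AzureStackPoC\PoC.vhdx"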


You should also change PoCFabric.xml, which is located under AzureStackInstaller\PoCFabricInstaller, and adjust the CPU and memory settings, or else you won't be able to complete the setup.


After that, just look at it go!