Free eBook on Optimizing Citrix NetScaler and services

So, at last, it is here!

This is something I have been working on for some time now, and my intention is that this is just the beginning of something bigger (hopefully).

For a couple of years now I have been writing for Packt Publishing and have authored some books on NetScaler, which has been a fun and good learning experience. The problem with that is… these projects take a lot of time! And releases are becoming more and more frequent these days, as is the case for the underlying infrastructure, which makes it cumbersome to keep up-to-date content available.

This is the first step in an attempt to create a full (free) NetScaler eBook; for the moment I have decided to focus on optimizing NetScaler traffic features. Hopefully other people will tag along as well, since there are so many bright minds in this community!

So what’s included in this initial release?
CPU Sizing
Memory Sizing
NIC Teaming and LACP
VLAN tagging
Jumbo Frames
NetScaler deployment in Azure
NetScaler Packet flow
TCP Profiles
VPX SSL limitations
SSL Profiles
Mobilestream
Compression
Caching
Front-end optimization
HTTP/2 and SPDY
Tuning for ICA traffic

Also, I would like to thank my reviewers, who actually did the job of reading through it and giving me good feedback (and of course correcting my grammar as well). A special thanks to Carl Stalhood (http://carlstalhood.com) https://twitter.com/cstalhood, a Citrix CTP, who also contributed a lot of content to this eBook.

Thanks as well to my other reviewers!

Carl Behrent https://twitter.com/cb_24

Dave Brett https://twitter.com/dbretty  (http://bretty.me.uk)

How do I get it?
By signing up with your email in the contact form below, and I'll send you a PDF copy after the book is finished editing, sometime during the weekend. I wanted to get this blogpost out before the weekend to gauge the interest.

The reason I want an email address is that it makes it easier for me to send an update after a new major version is available. I also want some statistics on how many are actually using it, to decide whether I should continue with this project or not. The email addresses I get will not be used for anything else, so no newsletters or selling info to the mafia…

Feedback and how to contribute?
If you have any feedback/corrections/suggestions, please send them to my email address msandbu@gmail.com. Also, if you want to contribute to this eBook, please mail me! I'm not an expert by any means, so any good ideas should be included so they can be shared with others.

Getting started with Web based server management tools in Azure

Yesterday, Microsoft released a public preview of some tools that Jeffrey Snover showed off at Microsoft Ignite last year, which are in essence Server Manager from within the Azure portal.

In its first release, this tooling is aimed at managing Windows Server 2016; it can manage both Azure virtual machines and machines on-premises. Some of its capabilities:

  • View and change system configuration
  • View performance across various resources and manage processes and services
  • Manage devices attached to the server
  • View event logs
  • View the list of installed roles and features
  • Use a PowerShell console to manage and automate


Source: http://blogs.technet.com/b/nanoserver/archive/2016/02/09/server-management-tools-is-now-live.aspx

So what we do is deploy a Server Management Tools gateway, which we use to manage our virtual machines (remember that the gateway needs to have an internet connection).

NOTE: If you want to deploy the gateway feature on a Windows Server 2012 host you need to have WMF 5 installed, which you can fetch here –> WMF 5.0: https://www.microsoft.com/en-us/download/details.aspx?id=48729
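To verify whether WMF 5 is already present on such a server, a quick check (a minimal sketch) is to look at the PowerShell version table:

# WMF 5 ships PowerShell 5.0; the gateway on 2012-era servers needs 5.0 or later
$PSVersionTable.PSVersion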

So when we want to deploy: go into Azure –> New –> Server Management Tools –> Marketplace image.

Then we need to define the machine we want to connect to (internal addresses, IPv4, IPv6 and FQDN are all accepted).
For the first run we also need to create a gateway. If we want to add multiple servers to manage, we run this wizard again but choose an existing gateway instead.

After we have created the instance, we need to download the gateway binaries and install them in our environment.


Then run the downloaded installer from within the environment. It is also important that if we want to manage non-domain-joined machines, we need to configure trusted hosts and such. For example:

winrm set winrm/config/client @{TrustedHosts="10.0.0.5"}

REG ADD HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1

NETSH advfirewall firewall add rule name="WinRM 5985" protocol=TCP dir=in localport=5985 action=allow (if you want to specify firewall rules)
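If you prefer PowerShell over winrm.cmd for the first step, the same TrustedHosts change can be made like this (a sketch; 10.0.0.5 is the example address from above):

# Add the server to the WinRM trusted hosts list; use -Concatenate to append to an existing list
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "10.0.0.5" -Force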

After the firewall rules are in place, we need to specify credentials.


After that is done we can now manage the machine from within Azure.


A better explanation on Framehawk

I spoke with Stephen Vilke (one of the brains behind Framehawk) the other day, and he elaborated on what Framehawk actually is and what it actually isn't.

You see, I had been caught up thinking about Framehawk as a simple display protocol and ran a bunch of comparisons between it and ThinWire, and also against PCoIP and RDP… and my conclusion was simply:
it uses a lot more bandwidth, CPU and memory than the other display protocols.

But if we think about it: what is the reason people implement ThinWire Plus, for instance? To SAVE bandwidth, because they have limited bandwidth capacity and want to get more users onto their platform. Why do people implement CloudBridge to optimize traffic? Simple… to SAVE bandwidth.

When thinking about Framehawk now, I have this simple scenario.

ThinWire is, simply put, the Toyota Prius: it's cheap, moves you from A to B and gives an OK driver experience. And since it is cheap, we can get many of these.

Framehawk, on the other hand, is a fricking truck with jet engines! It's not cheap, but it plows through everything at ridiculous speed (translated: it works even when we have a lot of packet loss), and it gives one hell of a driver experience. So the end goal is actually to focus on increasing productivity, since the apps behave faster and every click gets a response, every time!

So Framehawk is not about saving anything; on the contrary, it uses all the bandwidth it can get. But it is also about giving the end user a much better experience in these mobile days, where we face much more packet loss than before, back when latency and bandwidth limits were the bigger problem. Another thing to consider is that giving a better user experience despite these network issues might allow our users to be even more productive, which in the end means more money for our business.

Another thing to remember is how other protocols focus on moving the 0s and 1s across the wire and adapting content along the way. If we start scrolling in a document over a connection with packet loss, such a protocol will spend a lot of time trying to repair every packet on the wire as it goes, even though the end user just scrolled down two pages and wants to read a particular page. Why spend bandwidth on sending the entire scrolling sequence down the wire?

All the end users want to see is the page after scrolling down; they don't care whether every packet gets pushed down the wire, they just want to see the end result. That is basically what Framehawk focuses on.

To quote a good post from Citrix blogs:

A "perfect bytestream" is a computer thing, not a human one. Humans want to know "are we there yet, and did I like the view out the window?", not "are we there yet, and did every packet arrive properly?" :)

Now, since the introduction of Framehawk, there are still a few features I would like to see so the product matures a bit:

  • Support for Netscaler Gateway HA
  • Recalibration during connections
  • Support for AppFlow

Other than that, the latest NetScaler build (11.64) introduced support for Unified Gateway, so there is more stuff the NetScaler team needs to fix.

Hopefully this gives a good explanation of what Framehawk is and what it isn’t.

VMware Horizon 7 announced

Earlier today I saw on a couple of blogposts that VMware was going to announce Horizon 7 later today. When I read the posts I was blown away by the features coming in this release.

So what’s included in the upcoming release?

  • Project Fargo (VMFork), which is in essence the ability to clone a running VM on the fly: just-in-time desktops. Doing master image updates is as simple as updating the parent virtual machine; a user will automatically get an updated desktop at next login. It is kind of like what we have with AppVolumes and the delivery of AppStacks, but taken to a whole new level. You can read more about it here –> http://www.yellow-bricks.com/2014/10/07/project-fargo-aka-vmfork-what-is-it/ It is important to remember this is not like linked clones; the virtual machines are all running and are updated on the fly, so no Composer! But of course this is going to put more strain on the backend storage providers.
    Also important: this does not support NFS as of now.


  • New Horizon Clients version 4 (with new clients for Windows, Mac, Linux, Android and iOS, with increased performance over WAN, etc.; also the required version if we want to use the new display protocol)
  • Updated Identity Manager (part of the stack; will provide the authentication mechanism across the entire infrastructure using SAML)
  • Smart Policies (customization of desktops and user identity of a running session): application blocking, PCoIP policies and such
  • URL Content Redirection (allows a URL from within a remote session to be redirected to a local browser running on the endpoint)
  • AMD graphics support for vSGA
  • Intel vDGA support with Intel Xeon E3
  • Improved printing experience (reducing bandwidth and increasing speed of printing)
  • Blast Extreme (a new remote display protocol optimized for mobile users), which apparently has much lower bandwidth requirements than PCoIP. It is also optimized for NVIDIA GRID. In terms of WAN performance, PCoIP has not been anywhere near what Citrix can deliver with ThinWire or Framehawk, so I believe it is a good call for VMware to move ahead with their own display protocol, which does more calibration on the fly.

It is going to be interesting to see how the new remote display protocol compares to PCoIP and the others on the market; my guess is that Blast is a lot more bandwidth friendly. It also looks like they are investing more into the different aspects of the protocol itself.

PCoIP & Blast Extreme: Feature Parity
Source: http://www.vladan.fr/vmware-horizon-7-details-announced

Some other new stuff which is part of the release is support for Horizon Air Hybrid Mode, which in essence moves the control plane into the cloud (similar to what Citrix is doing with their Workspace Cloud).

We can also look at the earlier announcement of AppVolumes 3.0, which fits perfectly into this mix in terms of flexible application delivery. Of course, this is not without compromising some features, but it looks like VMware is becoming a provider of a unified stack. I just hope they can integrate some of the management components a bit, so it feels more like an integrated stack.

But it seems like VMware has been quite busy with this release. This is also another complete story when combined with NSX and micro-segmentation, in terms of delivering a secure desktop to any device. I just hope that the display protocol is as good as they say; I'll believe it when I see it :)

Sources:

http://vthoughtsofit.blogspot.no/
http://www.vladan.fr/vmware-horizon-7-details-announced/

Application virtualization vs Application layering

This blogpost is based on the session I held at NIC this year, where I talked about different technologies from the app-virt and app-layering landscape and discussed the pros and cons of using these types of products. These days a lot of businesses are virtualizing their applications. In some cases it makes sense, but there is also a new technology appearing in this landscape, application layering, so this post is about showing the difference. Since this is a pretty long subject I'm not going to cover everything in the same post :) And no, it's not a VS battle…

So where are we today? We have our VM template, which is used to deploy virtual machines, either via PXE or via a built-in hypervisor deployment tool like vCenter or SCVMM.


We use that to deploy virtual machines, and if we need to update the VM template we have to start it and deploy patches to it; simple. However, we need other tools like System Center or WSUS to keep the other machines up to date, because there is no link between the VM template and the machines we have provisioned. Another thing is application installation, where we have for many years been using Group Policy/scripts/deployment tools/System Center to install applications on the virtual machines that we deploy. Or we could pre-install these applications in the VM template (golden image) and save ourselves some trouble. When installing multiple applications on a machine we also need ways to update those applications. This is typically done using an MSI update, or by using System Center to replace the existing software with a new version. Installing all these applications on a machine means writing registry entries, files on the drive, and maybe some extra components the software depends on.

Now we have been doing this for years, so what are the issues?

  • Big golden image (by pre-installing many applications in the golden image we get longer deployment times, applications we don't need, and a slower VM template)
  • Patch management (how are we going to manage patching applications across 200-500 virtual machines?)
  • Application compatibility (some applications might require different, incompatible versions of, for instance, the Visual C++ runtime)
  • Application security (some applications do weird shit; what can we do about those?)
  • Application lifecycle management (how can we easily add and replace existing applications? We might also need different versions of the same application)
  • Software rot, registry bloating (you know there is a reason why there are registry cleaners, right?)

So what about Application virtualization?


Using application virtualization, applications are isolated within their own "bubble", which includes a virtual filesystem and registry and other required components such as DLL libraries. Since each application is isolated, they are not allowed to communicate with each other. In some cases we can define that an application can read/write to the underlying machine. We also have flexible delivery methods: either cached mode, where the package is stored on the local machine, or streaming mechanisms. This gives us:

  • No bloated filesystem / registry
  • No application conflicts
  • Added application security
  • No applications installed on the underlying OS
  • Multiple runtime environments
  • Easier app customization
  • Easier update management

Now there are two products I focused on in the presentation: ThinApp and App-V 5.

App-V

Now, App-V requires an agent installed on each host (it has some limitations regarding supported OSes) but is flexible in terms of using caching or streaming (the latter called Shared Content Store mode).

We can manage it using a full App-V infrastructure, or standalone using PowerShell cmdlets. We can also integrate it with System Center and even Citrix.
However, in many cases we need to keep older versions of Internet Explorer running, for instance when upgrading to another operating system, and App-V does not support this. We have also seen that App-V puts extra I/O traffic on the host compared to other app-virt solutions.
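As a rough sketch of the standalone PowerShell workflow with the App-V 5 client cmdlets (the UNC path and package name are made-up examples):

# Stream packages from the share instead of caching them locally (Shared Content Store)
Set-AppvClientConfiguration -SharedContentStoreMode 1
# Add a package from a share and publish it to all users on this host
Add-AppvClientPackage -Path "\\server\appv\MyApp.appv" | Publish-AppvClientPackage -Global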


VMware ThinApp

VMware ThinApp is a bit different: it does not have any infrastructure. You basically have the capturing agent, which you use to create an app-virt package, and the resulting package is self-contained. When you run an application, the ThinApp runtime is actually running beneath the package that we captured. A ThinApp package can be created as an MSI or EXE file, which allows for easy deployment using existing tools like System Center or other deployment software. Most of the logic of a ThinApp package is stored within the package.ini file.

However, if we want some form of management we need a Horizon setup, and there is no PowerShell support; to get that we would need to develop cmdlets using the SDK. Since there is no infrastructure, we don't have any usage tracking feature either. We do, however, have a handy update feature called AppSync, which is configured in each package's package.ini file.
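As an illustration, the AppSync-related entries in package.ini look something along these lines (the URL and intervals are made-up example values):

[BuildOptions]
; URL the package polls for an updated version of itself
AppSyncURL=https://appsync.example.com/MyApp.exe
; How often to check for updates
AppSyncUpdateFrequency=1d
; How long the package keeps working if the update check cannot be reached
AppSyncExpirePeriod=30d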


Both of these solutions use a dedicated VM to do the packaging/capturing process. ThinApp, however, supports a larger number of operating systems.

Now, what about application layering? It is important to remember that app-virt runs in user space, and therefore there are restrictions on what it can run (antivirus, VPN, boot-time components, kernel stuff, drivers and so on).

Application layering is a bit different, it basically uses a filter driver to merge different virtual disks which will then make up a virtual machine.


So we have the Windows operating system, which might be its own layer (master image), and then different layers on top of that. These might be application layers, which are read-only (VHD disks for instance), and possibly a personalization layer, which might be a read/write layer.

Using application layering, applications behave exactly as they would when installed directly on the operating system, since this is pretty much a merge of different VHD disks.

Since the applications aren't isolated, the "capturing process" is much simpler, unlike app-virt, where the sequencing part might take a looong time. We can add different applications to different layers, and we can, for instance, distribute the read/write layers across different storage locations. This allows simpler application lifecycle management: if we have multiple virtual machines using the same application layer, we can just update the main application layer and the virtual machines will get the new application.

In the application layer space there are three products/vendors I focused on.

  • Unidesk
  • Vmware AppVolumes
  • Citrix AppDisks

NOTE: There are multiple vendors in this space as well.

UniDesk

Now, Unidesk is the clear leader in this space, since they support multiple hypervisors and even Azure! They can also do OS layering as part of the setup.


They can layer pretty much everything, since they are integrated into the hypervisor stack. So it's not entirely correct to call them an application layering vendor; they are a layering vendor, period :)
NOTE: The only thing I found that they cannot layer is cake. On the downside, they have a Silverlight-based console, and they don't have instant app access like some of the others do. But there is a new version around the corner.

Citrix AppDisks

Then we have Citrix AppDisks, which is going to be released in the upcoming version 7.8. The good thing about AppDisks is that it is integrated into the Citrix Studio console. AppDisks is applications only; Citrix has other solutions for the OS layer (MCS or PVS), both of which AppDisks will support. They also have PvD for the profile, which can be writeable, and they have Profile Management as well, which makes Citrix a good all-round solution.


AppDisks supports, as of now, XenServer and VMware, and, surprise!, you need an existing Citrix infrastructure. So no RDS/View. AppDisks also has no instant app delivery and is only for virtual machines.

VMware AppVolumes

The last piece of the puzzle is AppVolumes from VMware, which is agent based and runs on top of the operating system. The good thing about this is that it offers instant application delivery, and it can also work on physical devices, since it is agent based. However, you should be aware of the requirements for using AppVolumes on a physical device (the devices should be non-persistent and have constant network access; I smell Mirage).


It has a simple HTML5-based management console. It only does hypervisor integration with ESX and vCenter, but it can be used in RDS/Citrix environments: just install the agents, do some management, and you are good to go.

Now that we have seen some of the different technologies out there, to summarize I would state the following.

Application virtualization should be used for the following:

  • Application isolation
  • Multiple runtime versions
  • Streaming to non-persistent machines
  • Application compatibility

Application layering should be used for the following:

  • Application lifecycle management
  • Image management
  • Profile management (if supported by vendors)
  • Applications which require drivers / boot stuff

Now you can also view the slidedeck of the presentation here –>

Videos for the different vendors can be found here –>

And lastly, what is coming in the future? Most likely container-based applications, which have their own networking stack, more security requirements, and are contained within their own kernel space. Here we have providers such as Turbo.net, who were delivering container-based applications before Windows announced container support for its operating system.

Office365 on Terminal server done right

So this is a blogpost based on a session I had at the NIC conference, where I spoke about how to optimize the delivery of Office365 in a VDI/RDSH environment.

There are multiple things we need to think/worry about. This might seem a bit negative, but that is not the idea; I'm just being realistic :)

So this blogpost will cover the following subjects

  • Federation and sync
  • Installing and managing updates
  • Optimizing Office ProPlus for VDI/RDS
  • Office ProPlus optimal delivery
  • Shared Computer Support
  • Skype for Business
  • Outlook
  • OneDrive
  • Troubleshooting and general tips for tuning
  • Remote display protocols and when to use which

So what is the main issue with using Terminal Servers and Office365? The Distance….

This is the headline of a blogpost on the Citrix blogs about XenApp best practices:


So how do we fix this when we have our clients on one side, the infrastructure on another, and Office365 in a different region, separated by long miles, while still delivering the best experience for the end user? In some cases we need to compromise to be able to deliver the best user experience, because that should be our end goal: deliver the best user experience.


User Access

First off: do we need federation, or is plain password sync enough? Password sync is easy and simple to set up and does not require any extra infrastructure. We can configure password hash sync, which will let Azure AD handle the authentication process. The problem with doing this is that we lose a lot of things we might rely on in an on-premises solution:

  • Audit policies
  • Existing MFA (If we use Azure AD as authentication point we need to use Azure MFA)
  • Delegated Access via Intune
  • Lockdown and password changes (since changes need to be synced to Azure AD before they take effect)

NOTE: Since I am above average interested in NetScaler, I want to include an extra note here. For those that don't know: NetScaler with AAA can in essence replace ADFS, since NetScaler now supports acting as a SAML iDP. One important limitation is that NetScaler does not support the Single Logout profile or the Identity Provider Discovery profile from the SAML specification. We can also use NetScaler Unified Gateway with SSO to Office365 using SAML. The setup guide can be found here:

https://msandbu.wordpress.com/2015/04/01/netscaler-and-office365-saml-idp-setup/

NOTE: We can also use VMware Identity Manager as a replacement to deliver SSO.

Using ADFS gives a lot of advantages that password hash sync does not:

  • True SSO (While password hash gives Same Sign-on)
  • If we have Audit policies in place
  • Disabled users are locked out immediately, instead of waiting up to 3 hours for the Azure AD Connect sync engine to replicate (and 5 minutes for password changes)
  • If we have on-premises two-factor authentication, we can most likely integrate it with ADFS, but not if we only have password hash sync
  • Other security policies, like time of the day restrictions and so on.
  • Some licensing stuff requires federation

So to sum it up: please use federation.

Initial Office configuration setup

Secondly, the Office suite from Office365 uses something called Click-to-Run, which is kind of an App-V-wrapped Office package from Microsoft. It allows easy updates directly from Microsoft instead of dabbling with the MSI installer.

In order to customize this installer we need to use the Office Deployment Tool, which basically allows us to customize the deployment using an XML file.

The deployment tool has three switches that we can use:

setup.exe /download configuration.xml

setup.exe /configure configuration.xml

setup.exe /packager configuration.xml

NOTE: Using /packager creates an App-V package of Office365 Click-to-Run, and requires a clean VM like we use when sequencing App-V packages. The package can then be distributed using an existing App-V infrastructure or other tools. Remember to enable scripting on the App-V client, and do not alter the package using the sequencer; that is not supported.
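Enabling scripting on the App-V client is a one-liner (a sketch, run in an elevated PowerShell session):

# Allow scripts embedded in App-V packages to run on this client
Set-AppvClientConfiguration -EnablePackageScripts 1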

The download switch downloads Office based on the configuration file; here we can specify bit edition, version number, the Office applications to be included, update path and so on. The configuration XML file looks like this:

<Configuration>
  <Add OfficeClientEdition="64" Branch="Current">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us"/>
    </Product>
  </Add>
  <Updates Enabled="TRUE" Branch="Business" UpdatePath="\\server1\office365" TargetVersion="16.0.6366.2036"/>
  <Display Level="None" AcceptEULA="TRUE"/>
</Configuration>

Now if you are like me and don’t remember all the different XML parameters you can use this site to customize your own XML file –> http://officedev.github.io/Office-IT-Pro-Deployment-Scripts/XmlEditor.html

When you are done configuring the XML file you can choose the export button to have the XML file downloaded.

If we have specified a specific Office version in configuration.xml, it will be downloaded to a separate folder and stored locally when we run setup.exe /download configuration.xml.

NOTE: The different build numbers are available here –> http://support2.microsoft.com/gp/office-2013-365-update?

When we are done downloading the Click-to-Run installer, we change the configuration file to reflect the path of the Office download:

<Configuration> <Add SourcePath="\\share\office" OfficeClientEdition="32" Branch="Business">

Then we run setup.exe /configure configuration.xml to install from that source path.

Deployment of Office

The main deployment is done by running setup.exe /configure configuration.xml on the RDSH host. After the installation is complete, there is one more licensing detail to take care of.

Shared Computer Support

<Display Level="None" AcceptEULA="True" /> 
<Property Name="SharedComputerLicensing" Value="1" />

In the configuration file we need to remember to enable SharedComputerLicensing, or else we get this error message:


If you forgot, you can also enable it using this registry key (just store it as a .reg file):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\15.0\ClickToRun\Configuration]
"InstallationPath"="C:\\Program Files\\Microsoft Office 15"
"SharedComputerLicensing"="1"

Now we are actually done with the golden image setup. Don't start the applications yet if you want to use this as an image. Also make sure that there are no licenses installed on the host, which can be checked using this tool:

cd 'C:\Program Files (x86)\Microsoft Office\Office15'
cscript.exe .\OSPP.VBS /dstatus


This should be blank!
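If a key does show up here, it can be removed with the /unpkey switch before sealing the image (a sketch; the last five characters of the key are taken from the /dstatus output, so XXXXX is a placeholder):

REM Remove a stray product key; XXXXX = last 5 characters shown by /dstatus
cscript.exe .\OSPP.VBS /unpkey:XXXXX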

Another issue is that when a user starts an Office app for the first time, he/she needs to authenticate once; a token is then stored locally in the %localappdata%\Microsoft\Office\15.0\Licensing folder, and it expires within a couple of days if the user is not active on that terminal server. Think about it: if we have a large farm with many servers, and a user is redirected to another server, he/she will need to authenticate again. If the user keeps landing on the same server, the token will refresh automatically.
NOTE: This requires internet access to work.

It is also important to remember that the Shared Computer Support token is bound to the machine, so we cannot roam it between computers using any profile management tool.

A nice thing, though: if we have ADFS set up, Office can automatically activate against Office365; this is enabled by default. So no pesky logon screens.

We just need to add the ADFS domain to the trusted sites in Internet Explorer and define this setting as well:

Automatic logon only in Intranet Zone


This basically resolves the token issue with Shared Computer Support :)
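If you want to script the trusted-site part instead of using Group Policy, something like this should do it (a sketch; adfs.example.com is a placeholder for your ADFS domain, and a value of 1 maps the host to the Local intranet zone):

REM Map https://adfs.example.com into the Local intranet zone (zone 1) for the current user
REG ADD "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\adfs.example.com" /v https /t REG_DWORD /d 1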

Optimizing Skype for Business

So in regards to Skype for Business, what options do we have in order to deliver a good user experience with it? There are six options I want to explore:

  • VDI plugin
  • Native RDP with UDP
  • Native PCoIP
  • Native ICA (w or without audio over UDP)
  • Local app access
  • HDX Optimization Pack 2.0

Now, the issue with the first one (which is a Microsoft plugin) is that it does not support Office365; it requires on-premises Lync/Skype. Another issue is that you cannot use the VDI plugin and the optimization pack at the same time, so if users are on the VDI plugin and you want to switch to the optimization pack, you need to remove the VDI plugin first.

Native ICA uses TCP and works with most endpoints; since everything basically runs directly on the server/VDI, the issue here is that we get no server offloading. So if we have 100 users running a video conference, we might have an issue :) If the other options are not available, try to set up HDX RealTime using audio over UDP for better audio performance. Both RDP and PCoIP use UDP for audio/video and therefore do not require any other specific customization.

But the problem with all of these is that they create a tromboning effect, consume more bandwidth and eat up the resources on the session host.


Local App Access from Citrix might be a viable option; in essence, a local application is drawn into the Receiver session, but this requires that the end user has Lync/Skype installed locally. It also requires Platinum licenses, so not everyone has that, plus it only supports Windows endpoints…

The last and most important piece is the HDX Optimization Pack, which enables server offloading by using the HDX media engine on the end-user device.

The optimization pack supports Office365 with federated and cloud-only users. It also supports the latest clients (Skype for Business) and can work in conjunction with NetScaler Gateway and a Lync Edge server for on-premises deployments. This means we can get Mac/Linux/Windows users using server offloading, and with the latest release it also supports Office Click-to-Run and works with the native Skype UI.

So using this feature we can offload CPU/memory (and eventually GPU) from the RDSH/VDI instances back to the client, and audio/video traffic goes directly to the endpoint instead of through the remote session.


Here is a simple test showing the difference between running Skype for Business on a terminal server with and without HDX Optimization Pack 2.0:


Here is a complete blogpost on setting up HDX Optimization Pack 2.0 https://msandbu.wordpress.com/2016/01/02/citrix-hdx-optimization-pack-2-0/

Now for the next part of this post we have Outlook, which for many is quite the headache… mostly because of the OST files that are dropped in the %localappdata% folder for each user. Office ProPlus has a setting called fast access, which means that Outlook will in most cases try to contact Office365 directly, but if the latency becomes too high, the connection drops and Outlook searches through the OST files instead.

Optimizing Outlook

Now this is the big elephant in the room and it causes the most headaches. Outlook against Office365 can be set up in two modes: cached mode or online mode. Online mode uses direct access to Office365, but users lose features like instant search. In order to deliver a good user experience we need to compromise. The general guideline is to configure cached mode with 3 months of mail, and to store the OST file (which contains the email, calendar, etc., and is typically 60-80% of the mailbox size) on a network share, since these OST files are by default created in the local appdata profile, and streaming profile management solutions typically aren't a good fit for the OST file.

It is important to note that Microsoft supports having OST files on a network share IF there is adequate bandwidth and low latency, and only if there is one OST file per user and the users have Outlook 2010 SP1 or newer.

NOTE: We can use other alternatives such as FSLogix, Unidesk to fix the Profile management in a better way.

I'll come back to the configuration part later in the policy section. It is also important to use Outlook 2013 SP1 or higher, which gives you MAPI over HTTP instead of RPC over HTTP; it does not consume as much bandwidth.

OneDrive

In regards to OneDrive: try to exclude it from RDSH/VDI instances, since the sync engine basically doesn't work very well there, and now that each user has 1 TB of storage space it will flood the storage quicker than anything else if users are allowed to use it. Also, there are no central management capabilities, and network shares are not supported.

There are some changes in the upcoming unified client in terms of deployment and management, but it is still not a good solution.

You can remove it from the Office365 deployment by adding this to the configuration file:

<ExcludeApp ID="Groove"/>
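Note that the ExcludeApp element goes inside the Product element of the configuration, so in context the relevant part looks roughly like this:

<Product ID="O365ProPlusRetail">
  <Language ID="en-us"/>
  <ExcludeApp ID="Groove"/>
</Product>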

Optimization and group policy tuning

Now, something that should be noted: before installing Office365 Click-to-Run you should optimize the RDSH session hosts or the VDI instances. A blogpost published by Citrix noted a 20% performance improvement after some simple RDSH optimizations were done.

Both VMware and Citrix have free tools for RDSH/VDI optimization, which should be looked at before doing anything else.

Now the rest is mostly Group Policy tuning. First we need to download the ADMX templates from Microsoft (either 2013 or 2016) and add them to the central store.

We can then use Group Policy to manage the specific applications and how they behave. Another thing to consider is using the Target Version group policy to control which build we are on, so we don't get a new build each time Microsoft rolls out a new version, because from experience I can tell that some new builds include new bugs –> https://msandbu.wordpress.com/2015/03/09/trouble-with-office365-shared-computer-support-on-february-and-december-builds/


Now the most important policies are stored in the computer configuration.

Computer Configuration –> Policies –> Administrative Templates –> Microsoft Office 2013 –> Updates

Here there are a few settings we should change to manage updates:

  • Enable Automatic Updates
  • Enable Automatic Upgrades
  • Hide Option to enable or disable updates
  • Update Path
  • Update Deadline
  • Target Version

These control how we do updates. We can enable automatic updates without an update path and a target version, which will essentially make Office auto-update to the latest version from Microsoft. Or we can specify an update path (a network share where we have downloaded a specific version), specify a target version, enable automatic updates and define a deadline, for a specific OU for instance. This triggers an update using a scheduled task which is added with Office; when the deadline approaches, Office has built-in triggers to notify end users of the deployment. Using these policies we can run multiple deployments for specific users/computers, some on the latest version and some on a specific version.
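For reference, these policies end up as registry values on the host. A sketch of the equivalent registry configuration, based on the Office 2013 policy path (the value names are my reading of the ADMX files, so treat them as an assumption; the share and version are the examples from the XML above):

REM Enable automatic updates, point them at an internal share and pin a target version
REG ADD HKLM\SOFTWARE\Policies\Microsoft\Office\15.0\Common\OfficeUpdate /v EnableAutomaticUpdates /t REG_DWORD /d 1
REG ADD HKLM\SOFTWARE\Policies\Microsoft\Office\15.0\Common\OfficeUpdate /v UpdatePath /t REG_SZ /d \\server1\office365
REG ADD HKLM\SOFTWARE\Policies\Microsoft\Office\15.0\Common\OfficeUpdate /v UpdateTargetVersion /t REG_SZ /d 16.0.6366.2036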

Next up is for Remote Desktop Services only: if we are using pure RDS, make sure we have an optimized setup. NOTE: Do not touch this if everything is working as intended.

Computer Policies –> Administrative Templates –> Windows Components –> Remote Desktop Services –> Remote Desktop Session Host –> Remote Session Enviroment

  • Limit maximum color depth (set to 16 bits; less data across the wire)
  • Configure compression for RemoteFX data (set to bandwidth optimized)
  • Configure RemoteFX Adaptive Graphics ( set to bandwidth optimized)

Next there are more Office specific policies to make sure that we disable all the stuff we don’t need.

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> Miscellaneous

  • Do not use hardware graphics acceleration
  • Disable Office animations
  • Disable Office backgrounds
  • Disable the Office start screen
  • Suppress the recommended settings dialog

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> Global Options –> Customize

  • Menu animations (disabled!)

Next is under

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> First Run

  • Disable First Run Movie
  • Disable Office First Run Movie on application boot

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> Subscription Activation

  • Automatically activate Office with federated organization credentials

Last but not least, define Cached mode for Outlook

User Configuration –> Administrative Templates –> Microsoft Outlook 2013 –> Account Settings –> Exchange –> Cached Exchange Modes

  • Cached Exchange Mode (File | Cached Exchange Mode)
  • Cached Exchange Mode Sync Settings (3 months)

Then specify the location of the OST files, which of course should be somewhere other than the default:

User Configuration –> Administrative Templates –> Microsoft Outlook 2013 –> Miscellaneous –> PST Settings

  • Default location for OST files (change this to a network share)
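Once the GPO applies, these two Outlook settings land as registry values under the Outlook policy key. A minimal PowerShell sketch of the same configuration (the value names come from the Outlook 2013 ADMX as I read it, and \\server\ostshare is a placeholder share):

# Create the Cached Mode policy key and set the sync window to 3 months
New-Item -Path "HKCU:\Software\Policies\Microsoft\Office\15.0\Outlook\Cached Mode" -Force | Out-Null
New-ItemProperty -Path "HKCU:\Software\Policies\Microsoft\Office\15.0\Outlook\Cached Mode" -Name SyncWindowSetting -PropertyType DWord -Value 3 -Force | Out-Null
# Redirect new OST files to a per-user folder on a network share
New-ItemProperty -Path "HKCU:\Software\Policies\Microsoft\Office\15.0\Outlook" -Name ForceOSTPath -PropertyType ExpandString -Value "\\server\ostshare\%username%" -Force | Out-Null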

Network and bandwidth tips

Something you need to be aware of is the bandwidth usage of Office in a terminal server environment.

Average latency to Office365 is 50-70 ms.

  • 2000 "heavy" users using Online mode in Outlook: about 20 Mbps at peak
  • 2000 "heavy" users using Cached mode in Outlook: about 10 Mbps at peak
  • 2000 "heavy" users using audio calls in Lync: about 110 Mbps at peak
  • 2000 "heavy" users working in Office over RDP: about 180 Mbps at peak

This means that using, for instance, the HDX Optimization Pack for 2000 users might "remove" 110 Mbps of bandwidth usage.

Microsoft also has an application called the Office365 Client Analyzer, which can give us a baseline for how our network behaves against Office365 (DNS, latency to Office365 and such). DNS is quite important with Office365, because Microsoft uses proximity-based load balancing, and if your DNS server is located elsewhere than your clients, you might be sent in the wrong direction. The Client Analyzer can give you that information.


(We could, however, buy ExpressRoute from Microsoft, which would give us low-latency connections directly into their datacenters, but this is only suitable for LARGER enterprises, since it costs HIGH amounts of $$.)


ExpressRoute also lets larger enterprises overcome a basic limitation of the TCP stack: one external NAT address can support only about 4,000 concurrent connections, and Outlook alone consumes about 4 concurrent connections, with Lync adding some as well.

Microsoft recommends that in an online scenario the clients have no more than 110 ms latency to Office365; in my case I have about 60-70 ms. If we combine that with some packet loss or an adjusted MTU, well, you get the picture :)

Using Outlook online mode, we should have a MAX latency of 110 ms; above that, the user experience declines. Another thing: online mode disables instant search. We can use the Exchange bandwidth calculator (an Excel sheet) from Microsoft to calculate the bandwidth requirements.

Some rules of thumb: do some calculations! Use the bandwidth calculators for Lync/Exchange, which might point you in the right direction. We can also use WAN accelerators (with caching), which might lighten the burden on the bandwidth. You also need to think about the bandwidth usage if you have automatic updates enabled in your environment.

Troubleshooting tips

As the last part of this LOOONG post, I have some general tips on using Office in a virtual environment. This is just going to be a long list of different tips:

  • For Hyper-V deployments, check VMQ and the latest NIC drivers
  • 32-bit Office C2R typically works better than 64-bit
  • Antivirus? Make exceptions!
  • Remove Office products that you don't need from the configuration, since they add extra traffic when doing downloads and more stuff on the virtual machines
  • If you don't use Lync and the audio service, disable the audio service (see the sketch after this list)
  • If using RDSH, check the Group Policy settings I recommended above
  • If using Citrix or VMware, make sure to tune the policies for an optimal experience, and use the RDSH/VDI optimization tools from the different vendors
  • If Outlook is sluggish, check that you have adequate storage I/O to the network share (NO, HIGH BANDWIDTH IS NOT ENOUGH IF IT IS STORED ON A SIMPLE RAID WITH 10k DISKS)
  • If all else fails with Outlook, disable MAPI over HTTP; in some cases when getting new mail takes a long time, disabling this helps (it used to be a known error)
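For the audio service tip above, disabling it is a quick one (a sketch; Audiosrv is the Windows audio service name):

# Stop and disable the Windows audio service on hosts that do not need sound
Stop-Service -Name Audiosrv
Set-Service -Name Audiosrv -StartupType Disabled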

Remote display protocols

Last but not least, I want to mention this briefly: if you are setting up a new solution and thinking about choosing one vendor over another, the first things to consider are:

  • Endpoint requirements (Thin clients, Windows, Mac, Linux)
  • Requirements in terms of GPU, mobile workers, etc.

Now, we have done some tests which showed that Citrix has the best features across its different sub-protocols:

  • ThinWire (best across high-latency lines; using TCP it works at over 1800 ms latency)
  • Framehawk (works well on lines with 20% packet loss)

PCoIP, meanwhile, performs a bit better than RDP. I have another blogpost on the subject here –> https://msandbu.wordpress.com/2015/11/06/putting-thinwire-and-framehawk-to-the-test/

Getting started with Vmware AppVolumes

I have been working a lot with different layering technologies these days, and since I have an upcoming presentation on app-virt vs app-layering, I decided it was time to write a blogpost about VMware AppVolumes.

Now, VMware AppVolumes (formerly known as CloudVolumes) is a piece of layering technology. It uses VHD attach and a filter driver to handle application calls and file-system redirects to either an application layer or a writeable volume. In terms of architecture it is pretty simple.


It consists of an AppVolumes Manager and an AppVolumes agent, which communicate with each other. The cool thing about AppVolumes is that it can be used in a virtual environment or in a physical environment. NOTE: However, the physical environment is limited to non-persistent physical machines; I'm guessing PVS machines or VMware Mirage.

In a physical environment it uses regular SMB shares and in-guest VHD mounting.

For virtual environments running ESX it uses VMDK direct attach, which is the preferred method.

The AppVolumes agent runs as a service and looks at assigned resources either during login (instant access) or at reboot (machine- or user-based). The AppStacks (application layers) are read-only.

In terms of writeable volumes, we have 3 profiles to choose from

• User Installed Applications (UIA) Only

• User Profile Only

• UIA and User Profile

We define which profile we want when setting up the writeable volume. Multiple AppStacks can be added to a user, but only one writeable volume can be attached.

NOTE: You can download a trial from the VMware website.

After the installation is complete, which takes about 10 minutes, all you need to do is go into the AppVolumes management console, which is created as a shortcut on the desktop.


Now, after you log in it's pretty simple: you can either create an AppStack or create a writeable volume.


For instance, you should have a clean virtual machine with the AppVolumes agent installed if you want to create an application layer (kind of like what we do when capturing App-V packages).

After an AppStack is created and imported into AppVolumes, you can see which applications are added.

Here is a simple AppStack with Firefox and VLC which has not been assigned to anything yet.


After you have assigned an AppStack to a user, they can log out and log in again.


It takes some time in a physical environment. After a user has logged in, he/she will see the new applications appear on the desktop (if the shortcuts are published there). If we look in Disk Management we can see that the AppStack layer is attached.


Creating a writeable volume is pretty much the same: choose an entity in AD,


then choose a profile and a share path. Remember: either only user-installed applications, or user-installed applications plus the user profile.


So AppVolumes is a pretty cool piece of tech and allows for simple application lifecycle management: just create an AppStack, install the applications, and then deliver it either on a physical setup or virtually on VMware. Since it is agent based, it can also be used with Citrix MCS or PVS, for instance. However, watch out for the writeable volumes; it looks like VMware wants to move more over to User Environment Manager.