Office365 on Terminal Servers done right

So this is a blog post based upon a session I had at the NIC conference, where I spoke about how to optimize the delivery of Office365 in a VDI/RDSH environment.

There are multiple things we need to think and worry about. It might seem a bit negative, but that is not the intention, just being realistic :)

So this blogpost will cover the following subjects

  • Federation and sync
  • Installing and managing updates
  • Optimizing Office ProPlus for VDI/RDS
  • Office ProPlus optimal delivery
  • Shared Computer Support
  • Skype for Business
  • Outlook
  • OneDrive
  • Troubleshooting and general tips for tuning
  • Remote display protocols and when to use which

So what is the main issue with using Terminal Servers and Office365? The Distance….

This is the headline of a blog post on the Citrix blogs about XenApp best practices:

image

So how do we fix this when we have our clients on one side, the infrastructure on another and Office365 in a different region, separated by long distances, while still trying to deliver the best experience for the end user? In some cases we need to compromise to be able to deliver the best user experience, because that should be our end goal: deliver the best user experience.

image

User Access

First off: do we need federation, or is plain password sync good enough? Password sync is easy and simple to set up and does not require any extra infrastructure. We can configure it to use password hash sync, which lets Azure AD handle the authentication process. The problem with doing this is that we lose a lot of capabilities which we might rely on in an on-premises solution:

  • Audit policies
  • Existing MFA (If we use Azure AD as authentication point we need to use Azure MFA)
  • Delegated Access via Intune
  • Lockdown and password changes (since changes need to be synced to Azure AD before they take effect)

NOTE: Since I am above average interested in Netscaler I want to include another note here. For those that don’t know, Netscaler with AAA can in essence replace ADFS, since Netscaler now supports acting as a SAML iDP. Important to note is that Netscaler does not support the Single Logout profile or the Identity Provider Discovery profile from the SAML specification. We can also use Netscaler Unified Gateway with SSO to Office365 using SAML. The setup guide can be found here:

https://msandbu.wordpress.com/2015/04/01/netscaler-and-office365-saml-idp-setup/

NOTE: We can also use VMware Identity Manager as a replacement to deliver SSO.

Using ADFS gives a lot of advantages that password hash sync does not:

  • True SSO (While password hash gives Same Sign-on)
  • If we have Audit policies in place
  • Disabled users get locked out immediately, instead of waiting up to 3 hours for the Azure AD Connect sync engine to replicate the change (and up to 5 minutes for password changes)
  • If we have on-premises two-factor authentication we can most likely integrate it with ADFS, but not if we only have password hash sync
  • Other security policies, like time of the day restrictions and so on.
  • Some licensing stuff requires federation

So to sum it up: please use federation.

Initial Office configuration setup

Secondly, the Office suite from Office365 uses something called Click-to-Run, which is kind of an App-V wrapped Office package from Microsoft, and it allows for easy updates directly from Microsoft instead of dabbling with the MSI installer.

In order to customize this installer we need to use the Office Deployment Tool, which basically allows us to customize the deployment using an XML file.

The deployment tool has three switches that we can use.

setup.exe /download configuration.xml

setup.exe /configure configuration.xml

setup.exe /packager configuration.xml

NOTE: Using the /packager switch creates an App-V package of Office365 Click-to-Run and requires a clean VM, just like when sequencing App-V packages. The package can then be distributed using existing App-V infrastructure or other tools. But remember to enable scripting on the App-V client, and do not alter the package using the sequencing tool; that is not supported.

The download switch downloads Office based upon the configuration file. Here we can specify bit edition, version number, which Office applications to include, update path and so on. The configuration XML file looks like this:

<Configuration>
  <Add OfficeClientEdition="64" Branch="Current">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us"/>
    </Product>
  </Add>
  <Updates Enabled="TRUE" Branch="Business" UpdatePath="\\server1\office365" TargetVersion="16.0.6366.2036"/>
  <Display Level="None" AcceptEULA="TRUE"/>
</Configuration>

Now if you are like me and don’t remember all the different XML parameters you can use this site to customize your own XML file –> http://officedev.github.io/Office-IT-Pro-Deployment-Scripts/XmlEditor.html

When you are done configuring the XML file you can choose the export button to have the XML file downloaded.

If we have specified a specific Office version as part of the configuration.xml, it will be downloaded to a separate folder and stored locally when we run the command setup.exe /download configuration.xml

NOTE: The different build numbers are available here –> http://support2.microsoft.com/gp/office-2013-365-update?

When we are done with the download of the Click-to-Run installer, we can change the configuration file to reflect the path of the Office download:

<Configuration> <Add SourcePath="\\share\office" OfficeClientEdition="32" Branch="Business">

and then run setup.exe /configure configuration.xml on the host to install Office from that local source path.
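Put together, a minimal workflow could look like the sketch below (assuming \\share\office is a file share the session hosts can read; the switches themselves are the ones described above):

:: Download the Click-to-Run payload to the share defined as SourcePath in configuration.xml
setup.exe /download configuration.xml

:: Install Office on the RDSH host using the same configuration file
setup.exe /configure configuration.xml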

Deployment of Office

The main deployment is done by running setup.exe /configure configuration.xml on the RDSH host. After the installation is complete we can move on to the licensing setup.

Shared Computer Support

<Display Level="None" AcceptEULA="True" /> 
<Property Name="SharedComputerLicensing" Value="1" />

In the configuration file we need to remember to enable the SharedComputerLicensing property, or else we get this error message:

image

If you forgot, you can also enable it using this registry key (just save it as a .reg file):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\15.0\ClickToRun\Configuration]
"InstallationPath"="C:\\Program Files\\Microsoft Office 15"
"SharedComputerLicensing"="1"

Now we are actually done with the golden image setup. Don’t start the applications yet if you want to use the machine for an image. Also make sure that there are no licenses installed on the host, which can be checked using this tool:

cd "C:\Program Files (x86)\Microsoft Office\Office15"
cscript.exe .\OSPP.VBS /dstatus

image

This should be blank!
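If a product key does show up in the output, it can typically be removed before sealing the image with the /unpkey switch; a hedged example (XXXXX being the last five characters of the key that /dstatus listed):

cscript.exe .\OSPP.VBS /unpkey:XXXXX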

Another issue is that when a user starts an Office app for the first time he/she needs to authenticate once; a token is then stored locally in the %localappdata%\Microsoft\Office\15.0\Licensing folder, and it will expire within a couple of days if the user is not active on that terminal server. Think about it: if we have a large farm with many servers that might well be the case, and if a user is redirected to another server he/she will need to authenticate again. If the user keeps hitting the same server, the token will refresh automatically.
NOTE: This requires Internet access to work.

It is also important to remember that the Shared Computer Support token is bound to the machine, so we cannot roam that token between computers using any profile management tool.

A nice thing is that if we have ADFS set up, we can have Office automatically activate against Office365; this is enabled by default. So no pesky logon screens.

We just need to add the ADFS domain site to the trusted sites in Internet Explorer and define this setting as well:

Automatic logon only in Intranet Zone

image

which basically resolves the token issue with Shared Computer Support :)
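A minimal, hedged registry sketch of that trusted sites step, deployable per user (adfs.contoso.com is a hypothetical ADFS farm FQDN; zone 1 is the Local intranet zone, which performs automatic logon by default):

Windows Registry Editor Version 5.00

; Put the ADFS endpoint in the Local intranet zone so Internet Explorer does automatic logon
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\contoso.com\adfs]
"https"=dword:00000001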

Optimizing Skype for Business

So in regards to Skype for Business, what options do we have in order to deliver a good user experience? These are the options I want to explore:

  • VDI plugin
  • Native RDP with UDP
  • Native PCoIP
  • Native ICA (with or without audio over UDP)
  • Local app access
  • HDX Optimization Pack 2.0

Now the issue with the first one (which is a Microsoft plugin) is that it does not support Office365; it requires on-premises Lync/Skype. Another issue is that you cannot use the VDI plugin and the Optimization Pack at the same time, so if users are using the VDI plugin and you want to switch to the Optimization Pack, you need to remove the VDI plugin first.

Native ICA uses the TCP protocol and works with most endpoints, since everything basically runs directly on the server/VDI. The issue here is that we get no server offloading, so if we have 100 users running a video conference we might have an issue :) If the two other options are not available, try to set up HDX RealTime with audio over UDP for better audio performance. Both RDP and PCoIP use UDP for audio/video and therefore do not require any other specific customization.

But the problem with all of these is that they create a tromboning effect, consume more bandwidth and eat up resources on the session host.

image

Local App Access from Citrix might be a viable option, which in essence means that a local application is dragged into the Receiver session, but this requires that the end user has Lync/Skype installed locally. It also requires Platinum licenses, so not everyone has that, plus it only supports Windows endpoints…

The last and most important piece is the HDX Optimization Pack, which allows for server offloading using the HDX media engine on the end-user device.

The Optimization Pack supports Office365 with both federated and cloud-only users. It also supports the latest clients (Skype for Business) and can work in conjunction with Netscaler Gateway and a Lync Edge server for on-premises deployments. This means that we can get Mac/Linux/Windows users using server offloading, and with the latest release it also supports Office Click-to-Run and works with the native Skype UI.

So using this feature we can offload CPU/memory, and eventually GPU, from the RDSH/VDI instances back to the client, and audio/video traffic goes directly to the endpoint instead of to the remote session.

image

Here is a simple test showing the difference between running Skype for Business on a terminal server with and without HDX Optimization Pack 2.0:

image

Here is a complete blogpost on setting up HDX Optimization Pack 2.0 https://msandbu.wordpress.com/2016/01/02/citrix-hdx-optimization-pack-2-0/

Now for the next part we also have Outlook, which for many is quite the headache… and that is mostly because of the OST files that are dropped in the %localappdata% folder for each user. Office ProPlus has a setting called fast access, which means that Outlook will in most cases try to contact Office365 directly, but if the latency becomes too high the connection will drop and it will instead go and search through the OST files.

Optimizing Outlook

Now this is the big elephant in the room and causes the most headaches. Outlook against Office365 can be set up in two modes: Cached mode or Online mode. Online mode uses direct access to Office365, but users lose features like instant search. In order to deliver a good user experience we need to compromise. The general guideline here is to configure Cached mode with 3 months of mail, and to store the OST file (which contains the emails, calendar etc., and is typically 60-80% of the size of the mailbox) on a network share, since these OST files are by default created in the local appdata profile, and streaming profile management solutions typically aren’t a good fit for the OST file.

It is important to note that Microsoft supports having OST files on a network share IF there is adequate bandwidth and low latency, only one OST file per user, and the users are on Outlook 2010 SP1 or later.

NOTE: We can also use alternatives such as FSLogix or Unidesk to handle the profile management in a better way.

I’ll come back to the configuration part later in the policy section. It is also important to use Outlook 2013 SP1 or newer, which gives us MAPI over HTTP instead of RPC over HTTP; MAPI over HTTP does not consume as much bandwidth.

OneDrive

In regards to OneDrive, try to exclude it from RDSH/VDI instances, since the sync engine basically doesn’t work very well there, and now that each user has 1 TB of storage space it will flood the storage quicker than anything else if users are allowed to use it. Also, there are no central management capabilities, and network shares are not supported.

There are some changes in the upcoming unified client in terms of deployment and management, but it is still not a good solution.

You can remove it from the Office365 deployment by adding this to the configuration file:

<ExcludeApp ID="Groove" />
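For context, the ExcludeApp element goes inside the Product element of the configuration file; a hedged sketch based on the earlier example:

<Configuration>
  <Add SourcePath="\\share\office" OfficeClientEdition="32" Branch="Business">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us"/>
      <!-- Leave the OneDrive for Business sync client (Groove) out of the install -->
      <ExcludeApp ID="Groove"/>
    </Product>
  </Add>
</Configuration>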

Optimization and group policy tuning

Now, something that should be noted is that before installing Office365 Click-to-Run you should optimize the RDSH session hosts or the VDI instances. A blog post published by Citrix noted a 20% improvement in performance after some simple RDSH optimization was done.

Both VMware and Citrix have free tools for RDSH/VDI optimization, which should be looked at before doing anything else.

Now the rest is mostly Group Policy tuning. First we need to download the ADMX templates from Microsoft (either 2013 or 2016), then we need to add them to the central store.

We can then use Group Policy to manage the specific applications and how they behave. Another thing to think about is using the Target Version group policy to manage which specific build we want to be on, so we don’t get a new build each time Microsoft rolls out a new version, because from experience I can tell that some new builds include new bugs –> https://msandbu.wordpress.com/2015/03/09/trouble-with-office365-shared-computer-support-on-february-and-december-builds/

image

Now the most important policies are stored in the computer configuration.

Computer Configuration –> Policies –> Administrative Templates –> Microsoft Office 2013 –> Updates

Here there are a few settings we should change to manage updates.

  • Enable Automatic Updates
  • Enable Automatic Upgrades
  • Hide Option to enable or disable updates
  • Update Path
  • Update Deadline
  • Target Version

These control how we do updates. We can enable automatic updates without an update path and a target version, which will essentially make Office auto-update to the latest version from Microsoft. Or we can specify an update path (a network share where we have downloaded a specific version), specify a target version, enable automatic updates and define a deadline, for a specific OU for instance. This triggers an update using a scheduled task which is added with Office, and when the deadline is approaching Office has built-in triggers to notify end users of the deployment. Using these policies we can have multiple deployments for specific users/computers, some on the latest version and some on a specific version.
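For reference, these policies end up as registry values which you can verify on a session host. A hedged sketch of what that can look like with the Office 2013 (15.0) templates; the share path and build number below are hypothetical examples, not recommendations:

Windows Registry Editor Version 5.00

; Values written by the Office 2013 update group policies
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Office\15.0\Common\OfficeUpdate]
"EnableAutomaticUpdates"=dword:00000001
"HideEnableDisableUpdates"=dword:00000001
"UpdatePath"="\\\\server1\\office365"
"UpdateTargetVersion"="15.0.4737.1003"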

The next part is for Remote Desktop Services only, if we are using pure RDS, to make sure that we have an optimized setup. NOTE: Do not touch these if everything is working as intended.

Computer Policies –> Administrative Templates –> Windows Components –> Remote Desktop Services –> Remote Desktop Session Host –> Remote Session Environment

  • Limit maximum color depth (set to 16 bits for less data across the wire)
  • Configure compression for RemoteFX data (set to bandwidth optimized)
  • Configure RemoteFX Adaptive Graphics ( set to bandwidth optimized)

Next there are more Office specific policies to make sure that we disable all the stuff we don’t need.

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> Miscellaneous

  • Do not use hardware graphics acceleration
  • Disable Office animations
  • Disable Office backgrounds
  • Disable the Office start screen
  • Suppress the recommended settings dialog

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> Global Options –> Customize

  • Menu animations (disabled!)

Next is under

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> First Run

  • Disable First Run Movie
  • Disable Office First Run Movie on application boot

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> Subscription Activation

  • Automatically activate Office with federated organization credentials

Last but not least, define Cached mode for Outlook

User Configuration –> Administrative Templates –> Microsoft Outlook 2013 –> Account Settings –> Exchange –> Cached Exchange Modes

  • Cached Exchange Mode (File | Cached Exchange Mode)
  • Cached Exchange Mode Sync Settings (3 months)

Then specify the location of the OST files, which of course should be somewhere else (on the network share):

User Configuration –> Administrative Templates –> Microsoft Outlook 2013 –> Miscellaneous –> PST Settings

  • Default location for OST files (change this to a network share; see the registry sketch below)
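The policy above corresponds to a per-user registry value; a hedged sketch assuming the Office 2013 (15.0) templates and a hypothetical \\fileserver\ost$ share:

Windows Registry Editor Version 5.00

; Forces new OST files to be created under the given path
; (in practice you would set this via the GPO setting above, typically pointing at a per-user subfolder)
[HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\15.0\Outlook]
"ForceOSTPath"="\\\\fileserver\\ost$"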

Network and bandwidth tips

Something you need to be aware of is the bandwidth usage of Office in a terminal server environment.

In these measurements the average latency to Office365 was 50-70 ms:

  • 2000 "heavy" users using Online mode in Outlook: about 20 Mbps at peak
  • 2000 "heavy" users using Cached mode in Outlook: about 10 Mbps at peak
  • 2000 "heavy" users using audio calls in Lync: about 110 Mbps at peak
  • 2000 "heavy" users working in Office over RDP: about 180 Mbps at peak

Which means that using, for instance, the HDX Optimization Pack for 2000 users might "remove" 110 Mbps of bandwidth usage.

Microsoft also has an application called the Office365 Client Analyzer, which can give us a baseline of how our network behaves against Office365, such as DNS and latency to Office365. DNS is quite important with Office365, because Microsoft uses proximity-based load balancing, and if your DNS server is located somewhere other than your clients you might be sent in the wrong direction. The client analyzer can give you that information.
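As a quick and dirty sanity check (not a replacement for the analyzer), you can verify which endpoint DNS hands out and the rough round-trip time from a session host; a hedged sketch using plain Windows commands:

:: Which Office365 endpoint does our DNS server resolve us to?
nslookup outlook.office365.com

:: Rough round-trip latency from the session host (ICMP may be blocked in some environments)
ping outlook.office365.com -n 10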

image

(We could however buy ExpressRoute from Microsoft, which would give us low-latency connections directly into their datacenters, but this is only suitable for LARGER enterprises, since it costs HIGH amounts of $$.)

image

But this is for the larger enterprises, and allows them to overcome a basic limitation of the TCP stack: one external NAT address can support only about 4,000 concurrent connections, given that Outlook consumes about 4 concurrent connections and Lync some as well.

Microsoft recommends that in an online scenario the clients should not have more than 110 ms latency to Office365, and in my case I have about 60-70 ms latency. If we combine that with some packet loss or an adjusted MTU, well, you get the picture :)

Using Outlook Online mode we should have a maximum latency of 110 ms; above that the user experience will decline. Another thing is that Online mode disables instant search. We can use the Exchange traffic Excel calculator from Microsoft to calculate the bandwidth requirements.

Some rules of thumb: do some calculations! Use the bandwidth calculators for Lync/Exchange, which might point you in the right direction. We can also use WAN accelerators (with caching), which might lighten the burden on the bandwidth usage. You also need to think about the bandwidth usage if you have automatic updates enabled in your environment.

Troubleshooting tips

As the last part of this LOOONG post I have some general tips on using Office in a virtual environment. This is just going to be a long list of different tips:

  • For Hyper-V deployments, check VMQ and the latest NIC drivers
  • 32-bit Office C2R typically works better than 64-bit
  • Antivirus? Make exceptions!
  • Remove Office products that you don’t need from the configuration, since they add extra traffic when downloading and more stuff added to the virtual machines
  • If you don’t use Lync or the audio service, disable the audio service!
  • If using RDSH, check the group policy settings I recommended above
  • If using Citrix or VMware, make sure to tune the policies for an optimal experience, and use the RDSH/VDI optimization tools from the different vendors
  • If Outlook is sluggish, check that you have adequate storage I/O to the network share (no, high bandwidth is not enough if the OST is stored on a simple RAID with 10k disks)
  • If all else fails in Outlook, disable MAPI over HTTP. In some cases where receiving new mail takes a long time, disabling this has helped; it used to be a known error.

Remote display protocols

Last but not least I want to mention this briefly: if you are setting up a new solution and thinking about choosing one vendor over another, the first things to consider are:

  • Endpoint requirements (thin clients, Windows, Mac, Linux)
  • Requirements in terms of GPU, mobile workers etc.

Now, we have done some tests which show that Citrix has the best features across the different sub-protocols:

  • ThinWire (best across high-latency lines; using TCP it works at over 1800 ms latency)
  • Framehawk (works well on lines with 20% packet loss)

PCoIP performs a bit better than RDP; I have another blog post on the subject here –> https://msandbu.wordpress.com/2015/11/06/putting-thinwire-and-framehawk-to-the-test/

#citrix, #netscaler, #office365, #xendesktop

Deep dive Framehawk (From a networking perspective)

Well, Citrix has released Framehawk with support for both enterprise WLAN and remote access using Netscaler. In order to set up Framehawk for remote access you basically need to do one thing: enable DTLS on the Gateway vserver (and of course rebind the SSL certificate). DTLS is a TLS variant that runs on UDP, so Framehawk is basically a UDP protocol. This is unlike RemoteFX, where Microsoft uses both TCP and UDP in a remote session: UDP for graphics and TCP for keystrokes and such.

So what does a Framehawk connection look like?

image

Externally, a client uses a DTLS connection to the Netscaler, and the Netscaler then uses a UDP connection to the VDA in the backend. The DTLS connection has its own sequence numbers, which are used to keep track of the connection.

image

There are some issues that you need to be aware of before setting up Framehawk.

image

Another important note is that Framehawk will not work properly over a VPN connection, since most VPN solutions wrap packets inside a TCP layer or a GRE tunnel, which means that the UDP connection will not function as intended.

image

Now, Framehawk is not designed for low-bandwidth connections; it requires more bandwidth than ThinWire. So why is that?

“For optimal performance, our initial recommendation is a base of 4 or 5 Mbps plus about 150 Kbps per concurrent user. But having said that, you will likely find that Framehawk greatly outperforms Thinwire on a 2 Mbps VSAT satellite connection because of the combination of packet loss and high latency.”

The reason is that TCP will retransmit packets which are dropped, while UDP is a much simpler protocol without connection setup delays, flow control or retransmission. In order to ensure that all mouse clicks and keystrokes are successfully delivered, Framehawk requires more bandwidth, since UDP is stateless and there is no guarantee that packets are delivered. I believe that the Framehawk component of Citrix Receiver has its own "click" tracker which ensures that clicks are successfully delivered, and ensuring that requires more bandwidth.

Comments from Citrix: 

While SSL VPNs or any other TCP-based tunnelling like SSH re-direction will definitely cause performance problems for the Framehawk protocol, anything that works at the IP layer like GRE or IKE/IPSec will work well with it. We’ve designed the protocol to maintain headroom for two extra layers of encapsulation, so you can even multiple-wrap it without effect. Do keep in mind that right now it won’t do well with any more layers since it can cause fragmentation of the enclosed UDP packets which will effect performance on some networks.

2) While it’s based entirely on UDP the Framehawk protocol does have the ability to send some or all data in a fully reliable and sequenced format. That’s what we’re using the keyboard/mouse/touch input channels. Anything that has to pass from the client to the server in a reliable way (such as keystrokes, mouse clicks and touch up/down events) will always do so inside of the procotol. You should never see loss of these events on the server, even at extremely high loss.

And one last comment for anyone else reading this: The Framehawk protocol is specifically designed for improving the user experience on networks with a high BDP (bandwidth delay product) and random loss. In normal LAN/MAN/WAN networks with either no or predominantly congestive loss and low BDP, Framehawk will basically start acting like TCP and start throttling itself if it does run into congestion. At some point, however, the techniques it uses have a minimal amount of bandwidth (which is hard to describe since we composite multiple workloads on differnt parts of the screen). In those cases other techniques would be needed, like Thinwire Advanced. As we move down the road with our integration into existing Citrix products and start leveraging our network optimizations with bandwidth optimized protocols like Thinwire and Thinware Advanced expect that to just get better!

#citrix, #framehawk

Nutanix and Citrix–Better together

Now, Citrix has for a long time had support for most of the different hypervisors, meaning that customers get the flexibility to choose between a number of different hypervisors if they are planning to use XenApp/XenDesktop. This support is included for Netscaler as well.

So as of today, Citrix supports XenServer, Hyper-V, VMware, Amazon and CloudPlatform, and Azure support is on the way. A month back, Citrix announced a partnership with Nutanix, stating that the Acropolis Hypervisor is Citrix Ready for XenApp/XenDesktop, Netscaler and Sharefile as well. This means that customers will get a better integration with the hypervisor, as well as support for the products on the Nutanix hypervisor.

Kees Baggerman from Nutanix posted this teaser, of how the integration might look, on his website: http://blog.myvirtualvision.com/2015/08/31/citrix-launches-cwc-whats-in-it-for-you/

image

Now, this is mostly focused on the Citrix Workspace Cloud, but it is also stated that this is coming for traditional on-premises XenApp/XenDesktop as well.

image

Also looking forward to deeper integration, for instance Machine Creation Services with Shadow Clones on the Acropolis Hypervisor!

#citrix, #nutanix

Optimizing web content with Citrix Netscaler

This post is based upon a session I had for a partner in Norway: how can we use Netscaler to optimize web content?

Let’s face it, the trends are changing:

  • Users are becoming less patient, meaning that they demand that applications/services respond quicker. (More than 40% of users drop out if a website takes more than 5-10 seconds to load; think about how that can affect a web shop or eCommerce site.)
  • More and more mobile traffic (mobile phones, iPads, laptops communicating over 3G/4G or WLAN), and on top of that there is more data being sent across the network, since web applications become more and more complex, with more code and more components.
  • Higher demands on availability: users demand that services are available at almost every hour. If we think about it, 5-10 years ago, if something was down for 10 minutes we didn’t think much about it, but now?
  • Higher demands on secure communication. It wasn’t that long ago that Facebook and Google switched to SSL as the default for their services. With more and more hacking attempts happening online, a certain amount of security is required.

So what can Netscaler do in this equation?

  • Optimizing content with Front-end optimization, Caching and Compression

With the latest 10.5 release, Citrix has made a good jump into web content optimization, with features like lazy loading of images, HTML comment removal, minification of JavaScript and inlining of CSS. On top of that, after the content has been optimized it can be compressed using GZIP or DEFLATE and sent across the wire. (NOTE: most web servers like Apache and IIS support GZIP and DEFLATE, but it is much more efficient to do this on a dedicated ADC.)

And by using caching to store frequently accessed data, the Netscaler becomes a good web optimization platform.

  • Optimizing based upon endpoints.

With the current trend of more users connecting from mobile devices over wireless connections, we need a better way to communicate as well. A good example here is TCP congestion control: on a wireless connection you have a higher amount of packet loss, and this calls for using, for instance, the TCP Westwood congestion algorithm, which is much better suited to wireless connections. Also, using features like MPTCP (on supported devices) allows for higher throughput. And the fact that we can apply different TCP settings to different services makes it much more flexible.
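A hedged Netscaler CLI sketch of what that could look like (the profile and vserver names are hypothetical, and parameter availability depends on your firmware version):

add ns tcpProfile tcp_mobile -flavor Westwood -mptcp ENABLED -SACK ENABLED
set lb vserver lb_vsrv_web_mobile -tcpProfileName tcp_mobile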

  • High availability

Using features like load balancing and GSLB allows us to deliver a highly available and scalable solution. And using features like AppQoE to prioritize traffic in an eCommerce setting might be a valuable asset. Think of the scenario where we have a web shop, where most of our paying customers come from a regular PC while most of the mobile users connecting are just checking the latest offers. If we ever reach our peak in traffic, it is useful to prioritize traffic based upon the endpoint connecting.

  • Secure content

Netscaler allows us to create specific SSL profiles which we can attach to different services. For instance, older applications which are used by everyone might not have high security requirements, while on the other hand PCI-DSS requires a high level of security. Add to the mix that we can handle many common DDoS attacks at the TCP level and on HTTP. We can also use the Application Firewall, which handles many application-level attacks; with its learning feature it can block users who do not follow the common usage pattern on a website, and we can specify URLs which users are not allowed to access.

So to summarize, the Netscaler can be a good component for optimizing and securing traffic, with a lot of exciting stuff happening in the next year! :) Stay tuned.

#citrix, #netscaler

Setting up a secure XenApp environment – Storefront

So this is part two of my securing XenApp environment series; this time I’ve moved my focus to Storefront. Now, how does Storefront need to be secured?

In most cases, Storefront is the aggregator that allows clients to connect to a Citrix infrastructure. In most cases Storefront is located on the internal network and the Netscaler is placed in the DMZ. Even if Storefront is located on the internal network and the firewall and Netscaler do a lot of the security work, there are still things that need to be taken care of on the Storefront.

In many cases users also connect to Storefront directly if they are on the internal network, thereby bypassing the Netscaler. And since Storefront is a Windows server, there are a lot of things to think about.

So, where to begin?

1: Set up a base URL with an HTTPS certificate. If you are using an internally signed certificate, make sure that you have a properly set up root CA, which in most cases should be offline; or use a certificate from a public third-party CA, which in many cases is useful because if users connect directly to Storefront their computers might not recognize the internally signed CA.

image

2: Remove the HTTP binding on the IIS site, to avoid plain HTTP requests.

Use a tool like IIS Crypto to disable older SSL protocols and weak RC4 ciphers on the IIS server.

image
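Under the hood, tools like IIS Crypto just flip SCHANNEL registry values; a hedged example of disabling SSL 3.0 for incoming connections (a reboot is required for SCHANNEL changes to take effect):

Windows Registry Editor Version 5.00

; Disable SSL 3.0 for server-side (incoming) connections on the Storefront/IIS host
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server]
"Enabled"=dword:00000000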

You can also define ICA file signing. This allows Citrix Receiver clients which support signed ICA files to verify that the ICA files they get come from a trusted source.  http://support.citrix.com/proddocs/topic/dws-storefront-25/dws-configure-conf-ica.html

3: We can also set it up so that Citrix Receiver is unable to cache passwords. This can be done by changing authenticate.aspx under C:\inetpub\wwwroot\Citrix\Authentication\Views\ExplicitForms\

and changing the following parameter from:

<% Html.RenderPartial("SaveCredentialsRequirement",
              SaveCredentials); %>

to the commented-out form:

<%-- Html.RenderPartial("SaveCredentialsRequirement",
                SaveCredentials); --%>

4: Force ICA connections to go through the Netscaler using the Optimal Gateway feature of Storefront –> http://support.citrix.com/article/CTX200129 Using this option will also allow you to use Insight to monitor client connections to Citrix, and depending on the Netscaler version give you some historical data.

And by using Windows pass-through authentication you can have Kerberos authenticate users to Storefront and then have the ICA sessions go through the Netscaler –> http://support.citrix.com/article/CTX133982

5: Use SSL in communication with the delivery controllers –> http://support.citrix.com/proddocs/topic/xendesktop-7/cds-mng-cntrlr-ssl.html

6: Install Dynamic IP Restrictions on the IIS server; this helps stop denial-of-service attempts against Storefront from the same IP address.

 IIS fig4

7: Keep Windows updated and antivirus software running (with sensible exclusions and limited access to the server). Also keep the Windows Firewall running and only open the necessary ports to allow communication with AD, the Delivery Controllers and the Netscaler.

8: Define audit policies to log (Credential validation, Remote Desktop connections, terminal logons and so on) https://technet.microsoft.com/en-us/library/dn319056.aspx

9: Use the Storefront Web Config GUI from Citrix to define lockout and session timeout values

image

10: Use a tool like Operations Manager with, for instance, the ComTrade management pack to monitor the Storefront instances, or just the IIS management pack; this gives some good insight into how the IIS server is operating.

11: Make sure that full logging is enabled on the IIS server site.

IIS Logging Configuration for System Center Advisor Log Management

Stay tuned for more, next part is the delivery controllers and the VDA agents.

#citrix, #storefront, #xenapp

Setting up a secure XenApp environment – Netscaler

Now, I had the pleasure of working on a PCI-DSS compliant XenApp environment for a customer. After working with it for the last couple of days there is a lot of useful information that I thought I would share.

Now, PCI-DSS compliance is needed for any merchant who accepts credit cards, for instance an e-commerce site or some sort of payment application. This includes all sorts of requirements:

  • Different procedures for data shredding and logging
  • Access control
  • Logging and authorization

Now the current PCI-DSS standard is in version 3 –> https://www.pcisecuritystandards.org/documents/PCI_DSS_v3.pdf

The different requirements and assessment procedures can be found in that document. Citrix has also created a document on how to set up a compliant XenApp environment https://www.citrix.com/content/dam/citrix/en_us/documents/products-solutions/pci-dss-success-achieving-compliance-and-increasing-web-application-availability.pdf and you can find some more information here –> http://www.citrix.com/about/legal/security-compliance/security-standards.html

Now, instead of making this a pure PCI-DSS post, I decided to do a more general "how to secure your XenApp environment" post, covering what kind of options we have and where the weaknesses might be.

Now, a typical environment might look like this.

image

So let’s start by exploring the first part of the Citrix infrastructure, which is the Netscaler. In a typical environment it is located in the DMZ, where the front-end firewall does stateful packet inspection to see what traffic goes back and forth. The best way to do a secure setup of Netscaler is one-armed mode, using routing to the backend resources and another firewall in between to do deep packet inspection.

The first thing we need to do with Netscaler, when setting up Netscaler Gateway for instance, is to disable SSL 3.0, which is enabled by default. (We should preferably have an MPX to do TLS 1.1 and TLS 1.2; with VPX we are limited to TLS 1.0.)
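A hedged CLI sketch of that change on the Gateway virtual server (vpn_vsrv_gateway is a hypothetical vserver name; on an MPX you could also enable -tls11 and -tls12):

set ssl vserver vpn_vsrv_gateway -ssl3 DISABLED -tls1 ENABLED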

It is also important to remember to use TRUSTED third-party certificates from known vendors without any bad history. Try to avoid SHA-1 based certificates; Citrix now supports SHA-256.

It is important to set up secure access to the management interface (since it uses HTTP by default).

image

This can be done by using SSL profiles which can be attached to the Netscaler Gateway

image

Also set deny SSL renegotiation to NONSECURE, which blocks insecure renegotiation attempts. We also need to define some TCP parameters: first, make sure that TCP SYN cookies are enabled, which protects against SYN flood attacks, and that SYN spoof protection is enabled to protect against spoofed SYN packets.

image

Under HTTP profiles make sure that the Netscaler drops invalid HTTP requests

image

Make sure that ICA proxy session migration is enabled; this ensures that only one session at a time is established for a user via the Netscaler.

image

Double-hop can also be an option if we have multiple DMZ zones or a private and an internal zone.

Specify max login attempts and a timeout value, to make sure that your services aren’t being hammered by a dictionary attack.

image

Change the password for the nsroot user!!!

image

Use an authenticated NTP source (running NTP version 4 or above), which allows for reliable timestamps when logging, and also verify that the time zone is configured correctly.

image

Set up an SNMP-based monitoring solution or Command Center to get monitoring information from the Netscaler, or use syslog as well to get more detailed information. Note that you should use SNMPv3, which gives both authentication and encryption.

Use LDAPS-based authentication against the local Active Directory servers, since plain LDAP is clear-text. Use TLS rather than SSL, and make sure that the Netscaler validates the server certificate of the LDAP server.

image
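A hedged CLI equivalent of those LDAP settings (ldap_srv being a hypothetical authentication action name pointing at your domain controllers):

set authentication ldapAction ldap_srv -secType TLS -validateServerCert YES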

It also helps to set up two-factor authentication to provide better protection against stolen user credentials. If you are using a two-factor authentication vendor, make sure that it uses the CHAP authentication protocol instead of PAP, since CHAP is a much more secure authentication protocol than PAP.

Use net profiles to control traffic flow from a particular SNIP to the backend resources. This allows for easier management when setting up firewall rules for access.

image

Enable ARP spoof validation, so we don’t have any forged ARP requests in the zone where the Netscaler is placed (the DMZ).

image

Use a DNSSEC-capable DNS server, which allows for signed and validated responses. This makes it difficult to hijack DNS or do MITM attacks on DNS queries. Note that this requires you to add a name server with both TCP and UDP enabled. (Netscaler can function both as a DNSSEC-enabled authoritative DNS server and in proxy mode for DNSSEC.)

If you wish to use Netscaler for VPN access towards the first DMZ zone, the first things you need to do are:

1: Update the OPSWAT library (used for the endpoint analysis scans)

image

Create a preauthentication policy to check for updated antivirus software

image

Same goes for Patch updates

image

In most cases, try to use the latest firmware. Citrix releases new Netscaler firmware at least once every three months, containing bug fixes and security patches.

Do not activate enhanced authentication feedback; this lets attackers learn more about your lockout policies and whether a user is non-existent, locked out, disabled and so on.

image

Set up STA communication using HTTPS (which requires a valid certificate and that the Netscaler trusts the root CA). You also need to set up Storefront with a valid certificate from a trusted root CA. This should preferably not be an internal PKI root CA, since third-party vendors have a much higher level of physical security.

If you for some reason cannot use SSL/TLS-based communication with backend resources, you can use MACsec, which is a layer 2 feature that allows for encrypted traffic between nodes on Ethernet.

#citrix, #netscaler, #xenapp

Citrix XenMobile and Microsoft Cloud happily ever after ?

There is no denying that Microsoft is moving more and more focus to their cloud offerings, with solutions such as Office365, EMS (Enterprise Mobility Suite) and of course their Azure platform.

EMS, being the latest product bundle in the suite, gives customers Intune, Azure Rights Management and Azure Active Directory Premium. So if a customer already has Office365, their users are already in Azure AD and can easily be attached to EMS for more features.

We are also seeing that Microsoft is adding more and more management capabilities for Office365 into their Intune suite (which is one of the key points that no other vendor has yet). But is this type of management something we need, or is it just there to give it a "key" selling point?

Now, Microsoft has added a lot of MDM capabilities to Intune, but they are nowhere close to the competition yet. Of course they have other offerings in the EMS pack, like Azure Rights Management, which is quite unique in the way it functions and integrates with Azure AD and Office365. As of 2014 Microsoft isn't even listed in the Gartner Magic Quadrant for EMM (which they stated would be the goal for 2015).

But it will be interesting to see if Microsoft’s strategy is to compete head-to-head with the other vendors, or if they wish to provide the basic features and delve more into Azure AD and identity management across clouds and SaaS offerings.

Citrix, on the other hand, has its XenMobile offering, which is a more complete EMM product suite (MDM and MAM, follow-me data with Sharefile, and so on). Citrix has a lot of advantages here, for instance Sharefile over OneDrive: Sharefile encrypts data even when it is stored locally in a sandboxed application on a mobile device, while the only option OneDrive has is to use it as part of Rights Management Services (of course OneDrive has extensive data encryption in transit and at rest https://technet.microsoft.com/en-us/library/dn905447.aspx).

Citrix also has MicroVPN functionality and secure browser access using VPN through Netscaler, while Microsoft’s secure browser application is much more limited, restricting which URLs can be opened and what content can be viewed from that browser.

So from a customer side you need to ask yourself:

  • What kind of requirements does my business have?
  • Do I use Office365 or a regular on-premises setup?
  • Do I need the advanced capabilities?
  • How are my users actually working?

Is there a best of both worlds using both of these technologies?

Well, yes!

Now, of course there are some features that overlap when using Office365 and EMS + XenMobile, but there are also some combinations which are important to be aware of:

  • Citrix has Sharefile storage controller templates in Azure (meaning that if a customer has IaaS in Azure, they can set up a Sharefile storage controller in Azure and use that to publish files and content without using OneDrive)
  • Citrix has a Sharefile connector for Office365 (which allows users to use Sharefile almost as a file aggregator between Office365 and their regular file servers), allowing secure editing directly from Sharefile
  • Citrix XenMobile has a lot better MDM features for Windows Phone than Intune has at the moment
  • Azure AD has built-in SSO access to many of Citrix’s web-based applications (Sharefile, GTM, GTA and so on); since users are already in Azure AD Premium, it can be used to grant access to the different applications using SSO
  • Netscaler as SAML iDP (if we have an on-premises enterprise solution, we can use the Netscaler as a SAML identity provider against Office365, replacing ADFS, which is otherwise required for full SSO of on-premises AD users to Office365)
  • Office365 ProPlus with Lync is supported on XenApp/XenDesktop with the Lync Optimization Pack (note that this is not part of XenMobile but of the Workspace Suite)
  • Netscaler and Azure MFA (we can use Azure MFA with Netscaler to publish web-based applications with traffic optimization)
  • Netscaler will also soon be available in Azure, which allows for setting up a full Citrix infrastructure in Azure

But looking forward, my guess is that Microsoft will move further into the user collaboration part and become the heart of identity management with Azure AD and Rights Management, while Citrix on the other hand will focus more on enabling mobility using solutions like EMM (MAM), follow-me data aggregation and secure file and device access. Citrix will also play an important part in hybrid setups using Netscaler with CloudBridge and as an on-premises identity provider.

#citrix, #ems, #intune, #microsoft, #office365, #xenmobile

“New” Netscaler book project in the making

Over the last couple of months I’ve again been involved with a Netscaler book project with Packt. This is a more advanced book than my previous one, which was more of an introduction to Citrix Netscaler.

This new book is called Mastering Netscaler which has more in-depth information regarding load balancing, appfirewall and such.

But… I kind of feel that this book just covers a fragment of what users want to read about when they buy a book about Netscaler.

Therefore, in order to get things right, I am thinking about creating a third book about Netscaler which covers all the subjects: the stuff you want to read about. So this post is merely for you to give feedback to me :)

Could you please give me a few sentences about what YOU would want to see included in a Netscaler book? Please drop a comment below this post.

And if you are willing to help me shape the outline, and maybe contribute to or help me write the book as well, that would be great; just send me an email at msandbu@gmail.com

#citrix, #netscaler

Upcoming events and stuff

There’s a lot happening lately, and therefore it has been a bit quiet here on this blog. But here is a quick update on what’s happening!

I just recently got confirmation that I am presenting at the NIC conference in February (which is the largest IT event for IT pros in Scandinavia, nicconf.com). Here I will be presenting 2 (maybe 3) sessions:

  • Setting up and deploying Microsoft Azure RemoteApp
  • Delivering high-end graphics using Citrix, Microsoft and VMware

One session will be primarily focused on Microsoft Azure RemoteApp, where I will show how to set up RemoteApp in both cloud and hybrid mode and talk a little bit about what kind of use cases it has. The second session will focus on delivering high-end graphics and 3D applications using RemoteFX (using vNext Windows Server), HDX and PCoIP, and demo a bit of how it works, pros and cons, VDI or RDS and endpoints. So my main objective is to talk about how to deliver applications and desktops from cloud and on-premises…

On the other end, I have just signed a contract with Packt Publishing to write another book on Netscaler, "Mastering Netscaler VPX", which will be kind of a follow-up to my existing book http://www.amazon.co.uk/Implementing-Netscaler-Vpx-Marius-Sandbu/dp/178217267X/ref=sr_1_1?ie=UTF8&qid=1417546291&sr=8-1&keywords=netscaler

It will go more in depth on the different subjects and focus on 10.5 features as well.

I am also involved with a community project I started, which is a free eBook about Microsoft Azure IaaS, where I have some very skilled Norwegians with me writing on the subject. It takes some time, since Microsoft keeps adding new content which needs to be added to the eBook as well.

So a lot is happening! More blog posts coming around Azure and CloudBridge.

#azure, #citrix, #microsoft, #netscaler, #vmware

Using Netscaler with UPN and Storefront

I had a case earlier today where a customer wanted to configure Netscaler to authenticate with UPN instead of sAMAccountName. Using UPN instead of sAMAccountName makes sense in many cases, since it is easier for users to remember their email address than their username. So in this scenario my sAMAccountName is msandbu and my UPN is marius.sandbu@demo.no

Now, by default the Netscaler LDAP policy is set up with sAMAccountName as the Server Logon Name Attribute. This defines what kind of account name you are allowed to log on with through the Netscaler.

If you try to log on with UPN while sAMAccountName is defined, you will get this kind of error message on the StoreFront server:

image

So Storefront strips the domain info sent from the Netscaler and tries to validate the credentials against Active Directory.

So how do we fix this?

You have to set the SSO Name Attribute in the LDAP policy to sAMAccountName.

image
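A hedged CLI sketch of the same change (ldap_upn being a hypothetical LDAP action name):

set authentication ldapAction ldap_upn -ldapLoginName userPrincipalName -ssoNameAttribute samAccountName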

Then the Netscaler first validates the UPN, gets the sAMAccountName of the user, and forwards that to Storefront, which logs the user in.

It is important to remember that Storefront always tries to revalidate the credentials it gets from the Netscaler.

image

#citrix, #netscaler, #storefront