Putting ThinWire and Framehawk to the test!

Framehawk and Thinwire – It’s all about the numbers

Recently Mikael (@mikael_modin) and I attended a Citrix User Group Conference in Norway, where Mikael held a session on when and when not to use Framehawk. You can read his entire blog post here –> http://bit.ly/1PV3104, and I have already covered some Framehawk details from a networking perspective.

The main point in Mikael's presentation was that although Framehawk is tremendously better in situations with packet loss, Thinwire Advanced will often be "enough", or even more useful, when only latency is involved. This is because of Framehawk's use of CPU, RAM and, most of all, bandwidth.
Another thing he pointed out was that Framehawk needs "a lot" of bandwidth to be at its best.
The recommendation for Thinwire is a minimum of 1.5 Mbps + 150 Kbps per user, while the recommendation for Framehawk is a minimum of 4-5 Mbps + 150 Kbps per user.
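As a quick worked example: for 20 concurrent users, that means roughly 1.5 Mbps + 20 × 150 Kbps ≈ 4.5 Mbps with Thinwire, versus 5 Mbps + 20 × 150 Kbps ≈ 8 Mbps with Framehawk, close to double the WAN capacity for the same number of users.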

There are a lot of naming conventions when it comes to Thinwire. Although we can see Thinwire as one protocol, there are different versions of it.
Thinwire is all about compressing data before sending it. The methods for this are:

· Legacy Thinwire (pre-Windows 8 / Server 2012 R2)

· Thinwire Compatibility Mode (new with FP3, also known as Thinwire+; Windows 8 / Server 2012 R2 and later. This version takes advantage of how newer operating systems construct their graphics.)
For more info, read the following blog post by Muhammad Dawood: http://bit.ly/WEnSDN

· Thinwire Advanced (uses H.264 to compress the data)

For a more detailed overview of when to use each technology, you can refer to the following table:


When we came back home, we decided to take a closer look at what impact Thinwire and Framehawk have on CPU, RAM and bandwidth, and we found some very interesting data.

Our tests include the following user workload:

· Logging in and waiting 1 minute for uberAgent to gather data and for the session to get up and ready.

· Opening a PDF file and scrolling up and down for 1 minute. (The PDF is located locally on the VM to exclude network I/O.)

· Connecting to the webpage www.vg.no, a Norwegian newspaper site that contains a lot of different objects and heavy graphics, and scrolling up and down for 1 minute.

· We then open Microsoft Word and type randomly for 1 minute.

· Last but not least, our favorite: opening the Avengers trailer in fullscreen using Chrome for the full duration of 2 minutes.

This allows us to see which workloads generate how much bandwidth, CPU and RAM usage with each of the different protocols.

To collect and analyze the data, we used the following tools:

· Splunk – uberAgent (gets info we didn't even think was possible!)

· Netbalancer (shows bandwidth; lets us set packet loss, bandwidth limits and latency)

· Citrix Director

– DisplayStatus (to verify the protocol status)

NOTE: During the testing there might be slight variations from test to test, since this is not an automated test but is run as a typical end-user session; these variations were so minor that we can conclude the numbers are within +/-5%.

We had two Windows 10 VDIs running the latest release of XenDesktop 7.6 FP3 during the testing phase.

· MCS1002 is for the test02 user, who is not using Framehawk

· MCS1003 is for the test01 user, who has Framehawk enabled using policies

· Use of the video codec was deactivated through policy to ensure that Thinwire was used (a sketch of scripting both settings follows below)
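For reference, both policy settings can also be scripted with the Citrix Group Policy PowerShell provider. This is only a minimal sketch: the controller name and, in particular, the policy names and setting paths are assumptions to verify against your own site (in Studio the settings are called "Framehawk display channel" and "Use video codec for compression").

# Ships with Citrix Studio / Group Policy Management.
Import-Module Citrix.GroupPolicy.Commands

# Map a drive to the site's policies; "ddc01" is a placeholder Delivery Controller.
New-PSDrive -Name Site -PSProvider CitrixGroupPolicy -Root \ -Controller "ddc01" | Out-Null

# Hypothetical policy names and setting paths -- adjust to your environment.
# Allow the Framehawk display channel for the Framehawk test user:
Set-ItemProperty -Path "Site:\User\Framehawk-Policy\Settings\ICA\Graphics\Framehawk\EnableFramehawkChannel" -Name State -Value Allowed

# Force Thinwire on the reference machine by prohibiting the H.264 codec:
Set-ItemProperty -Path "Site:\User\Thinwire-Policy\Settings\ICA\Graphics\UseVideoCodecForCompression" -Name Value -Value DoNotUseVideoCodec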

The internet connection is a solid 100 Mbps, and the average latency to the Citrix environment is about 10-20 ms.

The sample video at https://www.youtube.com/watch?v=F89eQPd7shs shows how the tests are being run. This also allows us to analyze the sample data from Netbalancer more closely.

Some notes so far: some Framehawk sessions get stuck on the Netscaler; we can see existing connections not being dropped correctly. This shows up in the Netscaler GUI under Gateway –> DTLS sessions.

After we changed the TCP profiles on the Netscaler, we were unable to use Framehawk.
We then needed to reconfigure the DTLS and certificate settings on the vServer and set up a new connection, after which Framehawk worked again as expected.

So after the initial run, we can note the following from the Netbalancer data:

We begin by looking at how Framehawk handles bandwidth.

We can see that over the total session, which was about 7 minutes, Framehawk used about 240 MB of bandwidth to deliver the graphics.
However, it was the PDF and webpage parts of the test that really pushed it in terms of bandwidth, not the YouTube trailer.


Thinwire, on the other hand, used only 47 MB of bandwidth, and as we would expect, more data was used when showing the trailer than in the PDF and webpage sections.


Using Splunk we were able to get a closer look at the Framehawk numbers.
CPU usage for the VDA was close to 16% on average.


While using ThinWire the CPU usage was only 6% on average.


But the maximum CPU usage came from Framehawk, which was close to 50% at one point.


Thinwire, on the other hand, peaked at only 18%.


We can conclude that Framehawk uses many more CPU cycles in order to process the bandwidth, but from our testing we could see that the PDF part, which generated a lot more traffic, gave a much smoother experience, not just when scrolling the document but also when zooming in.

On the other side, we can also see that Framehawk uses a bit more RAM than Thinwire does; about 400 MB was the maximum number.


While Thinwire was about 300 MB


So this was the initial test, and it shows that Thinwire uses less bandwidth, less memory and less CPU, but also that Framehawk delivers a better user experience in applications like the PDF reader. So now, let us see how they fare when we take latency and packet loss into account.

2% Packet loss

We started by testing Framehawk at 2% packet loss.
Looking at the bandwidth test, we could see that it uses about 16 MB less bandwidth with the packet loss. It is still the PDF and webpage parts that consume the most resources, now down to 224 MB of bandwidth usage.

The maximum CPU usage peaked at 45%.

And the average CPU usage was 19%.

The amount of RAM used showed a slight increase of 4 MB.






Now here comes the interesting part: using Thinwire at 2% packet loss (up and down) will trigger a lot of TCP retransmissions because of the packet drops.


(Remember that this is using an optimized Netscaler.) We can see that Thinwire uses only 12 MB of bandwidth! This is because of the TCP retransmissions; it never gets to send large enough packets before the packet loss occurs.

So with Thinwire and 2% packet loss, bandwidth usage dropped by about 35 MB compared with the reference test; the total bandwidth used in this session was 12 MB.

The maximum CPU usage was also far lower than in the reference test, showing only 3%.

The average CPU usage was now only 3% (that is, 50% of the reference test).

The RAM usage was about 30MB more than earlier





5% Packet loss

At 5% packet loss, we can see that Framehawk uses about 50 MB of extra bandwidth. It is still the PDF and webpage parts that consume the most resources, but now it is up to 300 MB of bandwidth.

We can also see, from a resource perspective, that it still uses almost the same maximum CPU percentage; this might vary from test to test, but it stays close to 50%.

On average CPU usage, we can see that it went up 4% from the initial testing, which makes sense since it needs to send more network packets, which uses CPU cycles.

The RAM usage is the same as with 2% packet loss





5% Packet loss

Looking at the bandwidth usage with 5% packet loss and Thinwire, the number is slightly lower; it now uses 11 MB.

This can also be seen in the CPU usage of the protocol: because of the packet loss, the VDA does not get to send as many packets, and hence the CPU usage is lower, stopping at 7%.

Average CPU usage is now just under 3%

RAM usage, however, is a bit higher at 330 MB.





End-user perspective
From an end-user perspective we can safely say that Framehawk delivered a much better experience. When we tried to follow the test script minute by minute, the Thinwire test took about 40 seconds longer, simply because of the delay between a mouse click and the response; operations like zooming into a PDF file took so much time that the whole test took longer to complete.

Winner: Framehawk!

10% Packet loss


With 10% packet loss, we could see that the bandwidth usage went down a bit. Again, the packet loss might have been so high that Framehawk was unable to push through all the data, and hence the total bandwidth usage was lower than at 5%; with the decrease in bandwidth, we also see the CPU usage go down.

The max CPU usage was about the same at 47%.

The average CPU usage was 19%

The RAM usage is the same at 404 MB




10% Packet loss

With 10% packet loss, Thinwire was down to 6 MB, and the CPU usage also reflected this, using only 4% at peak and 1.6% on average.
RAM usage was still about the same as earlier and peaked at 326 MB.





End-user perspective
What we noticed here is that most of the graphics-intensive tests became unresponsive and the ICA connection froze. The only thing that was really workable was using Word. Opening the PDF, the webpage and YouTube all became so unresponsive that it was not really workable.

Winner: Framehawk!

CPU Stats on Framehawk and Thinwire
NOTE: We have taken multiple samples of the CPU statistics on the Netscaler, so these screenshots represent the average numbers we saw.
What we can see is that Framehawk, which uses more bandwidth, will also increase the CPU usage on the packet engines. The Netscaler in an idle state uses about 0-1.5% CPU, which can be seen here →


NOTE: This is a VPX 1000 with 2 vCPUs (where we have only 1 packet engine). Starting an ICA proxy session with the defaults over Thinwire and running the process that generates the most bandwidth (PDF scrolling and zooming), the packet engine CPU rises to just under 1%.


So it is a minor increase, which is expected since Thinwire uses a small amount of bandwidth. Framehawk, on the other hand, will use about 4% of the packet engine CPU; note again that this was while we kept working with the PDF document.
We can conclude that using Framehawk will put a lot more strain on the Netscaler packet engines, and therefore we cannot have as many users on the Netscaler.


RDP usage:
We also wanted to give RDP a test under the different scenarios. We had some issues fetching CPU and memory usage, since RDP uses DWM and MSTSC, which can appear as sub-processes of svchost.
We therefore skipped that part and focused only on the bandwidth usage and the end-user experience.

First we started out with a test where we had no limitations in the form of latency or packet loss. (This was using regular RDP against Windows 10 with TCP/UDP.)

The initial test shows, as we expected, that RDP uses 53 MB of bandwidth.


We also noticed during the YouTube part that the progressive rendering engine kicked in to ensure optimal delivery, but the graphics were OK.

RDP, 2% Packet loss

With 2% packet loss, the bandwidth usage was basically halved, at 26 MB.


Keystrokes and some operations were a bit delayed, but still workable. On the other hand, the progressive rendering engine on the YouTube part made it nearly impossible to see what was actually happening in the graphics, even though audio worked fine.

RDP 5% Packet loss

RDP used about 17 MB of bandwidth. PDF scrolling and zooming caused a huge delay in how the end-user could work, and surfing on the webpage, which has a huge amount of graphics, froze for a couple of seconds. YouTube itself, well, it did not work very well.


We can conclude that RDP uses more bandwidth than Thinwire under normal circumstances, and that it does not deal with packet loss very well.

So what does all this data tell us?
We can clearly see that Framehawk and Thinwire have their own use cases.
While Thinwire is the preferred method of delivering graphics, even with high latency, as soon as we experience packet loss of 3% or higher, Framehawk will definitely give a better user experience. Just remember to keep an eye on the resource usage on the VDI.
This is especially true when using it with XenApp, since a spike in CPU usage will have a great impact on the users who are logged on and will decrease the number of users you can have on each server.

Office365 together with Citrix

So this is a blog post based upon a session I had at the Citrix User Group here in Norway this week, which is essentially about whether Office365 can work in conjunction with Citrix, and what we need to think about.

There is a lot of stuff we need to think and worry about. This might seem a bit negative, but that is not the idea; I am just being realistic :)

So this blogpost will cover the following subjects

  • Federation and sync
  • Optimizing Office ProPlus for VDI/RDS
  • Office ProPlus optimal delivery
    • Performance
    • Shared Computer Support
  • Skype for Business
  • Outlook
  • OneDrive

So what is the main issue with using Citrix and Office365? The Distance….

This is the headline for a blogpost on Citrix blogs


So how do we fix this when we have our clients on one side, the infrastructure somewhere else, and Office365 in a different region, separated by long miles, while still trying to deliver the best experience for the end-user?


First off: do we need to have federation in place, or is plain password sync enough? Using password sync is easy and simple to set up and does not require any extra infrastructure.

NOTE: Since I am above average interested in Netscaler, I wanted to include another sentence here. For those that don't know, Netscaler with AAA can in essence replace ADFS, since Netscaler now supports acting as a SAML iDP. One important limitation to note: of the SAML profiles, Netscaler supports neither the Single Logout profile nor the Identity Provider Discovery profile. We can also use Netscaler Unified Gateway with SSO to Office365 using SAML. The setup guide can be found here


Using ADFS gives a lot of advantages that password hash sync does not:

  • True SSO (while password hash sync gives Same Sign-on)
  • Support for audit policies, if we have them in place
  • Disabled users get locked out immediately, instead of waiting up to 3 hours for the Azure AD Connect sync engine to replicate (5 minutes for password changes)
  • If we have on-premises two-factor authentication, we can most likely integrate it with ADFS, but not if we only have password hash sync
  • Other security policies, like time-of-day restrictions and so on
  • Some licensing stuff requires federation

So to sum it up: please use federation.

Secondly, the Office suite from Office365 uses something called Click-to-Run, which is kind of an App-V-wrapped Office package from Microsoft. It allows for easy updates directly from Microsoft, instead of dabbling with the MSI installer.

In order to customize this installer we need to use the Office Deployment Toolkit, which basically allows us to customize the deployment using an XML file. We can then use Group Policy to manage the specific applications and how they behave. Another thing to think about is using the Target Version group policy to control which specific build we stay on, so we don't get a new build each time Microsoft rolls one out, because from experience I can tell that some new builds include new bugs –> https://msandbu.wordpress.com/2015/03/09/trouble-with-office365-shared-computer-support-on-february-and-december-builds/


Office365 versions found here: http://support2.microsoft.com/gp/office-2013-365-update?

Another thing: if we want to use Office365 in conjunction with RDS/XenApp, we need at least the E3/E4 plans, which include that support. This is done using something called Shared Computer support, which allows us to install and run Office Click-to-Run on a terminal server. It is enabled with these two lines in the deployment XML (a fuller sketch follows the snippet):

<Display Level="None" AcceptEULA="True" /> 
<Property Name="SharedComputerLicensing" Value="1" />
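For context, those two lines live inside the Office Deployment Tool's configuration XML. Below is a minimal, hedged sketch of writing a complete file and applying it with PowerShell; the folder, build number and language are placeholders to adjust (the ExcludeApp line for OneDrive is explained later in this post).

$odt = 'C:\ODT'  # placeholder folder containing the ODT setup.exe

$config = @"
<Configuration>
  <Add SourcePath="$odt" OfficeClientEdition="32">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
      <ExcludeApp ID="Groove" />
    </Product>
  </Add>
  <!-- Pin the build so a new Microsoft release does not surprise you. -->
  <Updates Enabled="TRUE" TargetVersion="15.0.4727.1003" />
  <Display Level="None" AcceptEULA="True" />
  <Property Name="SharedComputerLicensing" Value="1" />
</Configuration>
"@

Set-Content -Path "$odt\config.xml" -Value $config

# Download the bits once, then install on the XenApp/RDS host.
& "$odt\setup.exe" /download "$odt\config.xml"
& "$odt\setup.exe" /configure "$odt\config.xml"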

Another issue with this is that when a user starts an Office app for the first time, he/she needs to authenticate once; a token is then stored locally in the %localappdata%\Microsoft\Office\15.0\Licensing folder and will expire within a couple of days if the user is not active on that terminal server. Think about it: if we have a XenApp farm with many servers, that might well be the case, and if a user is redirected to another server, he/she will need to authenticate again. If the user keeps landing on the same server, the token will refresh automatically (a quick way to inspect the token is sketched below).
NOTE: This requires Internet access to work.
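A quick way to check whether a session host already has a licensing token, and how fresh it is, is to list that folder; a minimal sketch, assuming the 15.0 path from above:

# List the Shared Computer licensing token(s) and when they were last written.
Get-ChildItem "$env:LOCALAPPDATA\Microsoft\Office\15.0\Licensing" |
    Select-Object Name, LastWriteTime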

It is also important to remember that the Shared Computer support token is bound to the machine, so we cannot roam that token between computers.

But a nice thing is that if we have ADFS set up, we can configure Office to activate automatically against Office365. This just requires that we configure some Office365 Group Policies to make it happen.

This is part of the ADMX template from Office2013


Add the ADFS domain site to the trusted sites in Internet Explorer and define these settings as well


This basically allows us to resolve the token issue with Shared Computer Support :)


We also need to add the ADFS site to Trusted Sites in Internet Explorer and specify, within the security settings of the Trusted Sites zone, that usernames and passwords are sent automatically.

Since we can't use MCS or PVS for use with Office365 ProPlus, we need to use Shared Computer support there. On VDI instances, users can use their regular user-based activation.

We can also use the Office Deployment Toolkit to generate an App-V package which we can deploy instead (https://support.microsoft.com/en-us/kb/2915745), and we can also use this online resource to create the deployment files for us.

The use of an App-V package allows for easier deployment and lets our IT guys customize which applications should be available to the end users. It also lets us deploy the package using SCCM or the Configuration Manager Connector from Citrix, gives us the possibility to centrally manage updates, and allows us to control which applications are visible to the end users.

Another important thing to remember is that Office is quite fond of a GPU; if hardware acceleration is enabled and there is no GPU present, it will fall back to software rendering, which means that the CPU has more to do.


So by all means, with no GPU, disable hardware acceleration (note that it is disabled by default if no GPU is present), though some features might not function properly.
More info here –> https://shawnbass.com/psa-software-gpu-can-reduce-your-virtual-desktop-scalability/

And another thing: if we want to deploy Office within a VDI environment, we should do some tuning on our Windows 10 machines. Did you know that, by default, a Windows client OS behaves as if it is communicating with Internet-based devices all the time, and tunes its TCP stack accordingly? The PowerShell cmdlet Get-NetTCPSetting shows the TCP setting templates on a Windows machine: Windows Servers run the profile called Datacenter, while clients, even when inside the datacenter, run with the Internet profile. So in a VDI environment we can map our client computers' traffic to the Datacenter template instead, as sketched after the next paragraph.

This also changes the TCP congestion algorithm to DCTCP instead of CTCP.
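As far as I can tell, the way to apply the template is with a transport filter from the NetTCPIP module. A minimal sketch, assuming an elevated prompt (some client builds may refuse the Datacenter template):

# Show the templates and which congestion provider each one uses.
Get-NetTCPSetting | Select-Object SettingName, CongestionProvider, AutoTuningLevelLocal

# Map all TCP traffic to the Datacenter template (DCTCP congestion control).
New-NetTransportFilter -SettingName Datacenter `
    -LocalPortStart 0 -LocalPortEnd 65535 `
    -RemotePortStart 0 -RemotePortEnd 65535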

Microsoft also has an application called the Office365 Client Analyzer, which can give us a baseline for how our network behaves against Office365, measuring things such as DNS and latency to Office365. And DNS is quite important with Office365, because Microsoft uses proximity-based load balancing; if your DNS server is located somewhere other than your clients, you might be sent in the wrong direction. The client analyzer can give you that information.
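If you just want a quick look from a session host without installing the analyzer, a couple of built-in cmdlets give a rough idea (hedged: a TCP connect probe is not a full latency picture):

# Rough reachability/latency probe against Office365 from the VDA.
Test-NetConnection -ComputerName outlook.office365.com -Port 443

# See which entry point your DNS resolves to (proximity check).
Resolve-DnsName outlook.office365.com | Select-Object Name, IPAddress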



Now, if for some reason (which will also become apparent later) we need to use the traditional Office package (which uses volume licensing and is not based upon a user license), we need to set up activation using either KMS or MAK.

It is important to remember that Citrix supports the use of KMS with PVS and MCS (while MAK is not supported).

So in regards to Skype for Business, what options do we have to deliver a good user experience with it? There are four options that I want to explore:

  • VDI plugin
  • HDX realtime
  • Local app access
  • HDX Optimization Pack

Now the issue with the first one (which is a Microsoft plugin) is that it does not support Office365; it requires on-premises Lync/Skype. Another issue is that you cannot use the VDI plugin and the Optimization Pack at the same time, so if users are on the VDI plugin and you want to switch to the Optimization Pack, you need to remove the VDI plugin first.

HDX RealTime works with most endpoints, since it basically runs everything directly on the server/VDI; the issue here is that we get no server offloading. So if we have 100 users running a video conference, we might have an issue :) If the two other options are not available, try to set up HDX RealTime using audio over UDP for better audio performance.

Local App Access might be a viable option; in essence, a locally installed application is dragged into the Receiver session, but this requires that the end-user has Lync/Skype installed. It also requires Platinum licensing, so not everyone has that, plus it only supports Windows endpoints…

The last and most important piece is the HDX Optimization Pack, which enables server offloading by using the HDX media engine on the end-user device.


And the Optimization Pack supports Office365 with both federated users and cloud-only users. It also supports the latest clients (Skype for Business) and can work in conjunction with Netscaler Gateway and a Lync Edge server for on-premises deployments. This means we can get Mac/Linux/Windows users using server offloading, great…

The only issues are that it does not support Office Click-to-Run and that it requires Enterprise licensing.

Another important piece to remember is that it requires the Lync UI (not the Skype UI), because it uses the Lync SDK.

Now for the next part of this, we also have Outlook, which for many is quite the headache… mostly because of the OST files that are dropped in the %localappdata% folder for each user. Office ProPlus has a setting called Fast Access, which means that Outlook will in most cases try to contact Office365 directly, but if the latency becomes too high, the connection will drop and it will instead search through the OST files.

(We could, however, buy ExpressRoute from Microsoft, which would give us low-latency connections directly into their datacenters, but this is only suitable for LARGER enterprises, since it costs HIGH amounts of $$.)


But this is for the larger enterprises, as it allows them to overcome the basic limitations of the TCP stack, which caps external connections at about 4000 concurrent connections.

Microsoft recommends that, in an online scenario, the clients have no more than 110 ms latency to Office365; in my case I have about 60-70 ms latency. If we combine that with some packet loss or an adjusted MTU, well, you get the picture :)

Using Outlook online mode, we should have a maximum latency of 110 ms; anything above that will degrade the user experience. Another thing is that using online mode disables instant search. We can use the Exchange traffic Excel calculator from Microsoft to calculate the bandwidth requirements.

In order to adjust this we can use cached mode, meaning that Outlook will store the last months of email (this is customizable) in the OST file, and the rest will be fetched online from Office365. We can also define that all users should always go online and have nothing cached locally, but this might not give a good user experience. (A sketch of the sync-window setting follows below.)
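The sync window is normally pushed with the Outlook Group Policy template, which writes a registry value under the Office policies key. A minimal sketch of the equivalent change for Outlook 2013, where the value 3 means a three-month window (hedged: verify against your ADMX version):

# SyncWindowSetting: months of mail to keep in the OST (0 = the whole mailbox).
$key = 'HKCU:\Software\Policies\Microsoft\Office\15.0\Outlook\Cached Mode'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name SyncWindowSetting -Value 3 -Type DWord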

This allows us to have a smaller OST file but still a good user experience. Now, the last part is that we can't have these OST files stored locally on each terminal server, so we need a good profile management solution in place to handle this properly. It is important to note that Microsoft supports having OST files on a network share IF there is adequate bandwidth and low latency… and only if there is one OST file.

NOTE: We can use alternatives such as FSLogix or Unidesk to handle the profile management in a better way.

It is important to remember that Microsoft will not help troubleshoot if you are having performance-related issues with this.

Some rules of thumb: do some calculations!

Heavy online users generate about 20 MBps of network traffic (using online mode only).

Heavy online users with 3 months of cached data will generate about 10 MBps of network traffic. (This is only the bandwidth going directly to Office365 and does not account for the traffic going against the OST file locally.)

It is also important to have Outlook at SP1 or later, which gives MAPI over HTTP instead of RPC over HTTP; it does not consume as much bandwidth.

We can use profile management to serve our OST files from a network share, but remember that OST files are in most cases 50-80% larger than the mailbox itself because of the way they store content, that they are very sensitive to latency, and that file locking is an issue. So for instance, if we are using Lync on one XenApp server, which uses Outlook to save conversations, and we then open another session and start Outlook there, we might get errors because of OST file locking, yay!

In regards to OneDrive, try to exclude it for XA/XD users, since the sync engine basically doesn't work very well there, and now that each user has 1 TB of storage space, it will flood the storage quicker than anything else if users are allowed to use it.

You can remove it from the Office365 configuration by adding this to the XML file:

<ExcludeApp ID="Groove" />

So anyhow, I had a great time at Citrix User Group this year! And yet again I was part of the team that won the challenge!

Deep dive Framehawk (From a networking perspective)

Well, Citrix has released Framehawk with support for both enterprise WLAN and remote access using Netscaler. In order to set up Framehawk for remote access, you basically need one thing (enable DTLS) and, of course, to rebind the SSL certificate. DTLS is a TLS extension on UDP, so basically Framehawk is a UDP protocol. This is unlike RemoteFX, where Microsoft uses both TCP and UDP in a remote session, with UDP for graphics and TCP for keystrokes and such.

So what does a Framehawk connection look like?


Externally, a client uses a DTLS connection to the Netscaler, and the Netscaler then uses a UDP connection to the VDA in the backend. The DTLS connection has its own sequence numbers, which are used to keep track of connections.


There are some issues that you need to be aware of before setting up Framehawk.

Some other notes which are important to take note of: Framehawk will not work properly over a VPN connection, since most VPN solutions wrap packets inside a TCP layer or a GRE tunnel, which means that the UDP connection will not function as intended.


Now, Framehawk is not designed for low-bandwidth connections; it requires more bandwidth than Thinwire. So why is that?

“For optimal performance, our initial recommendation is a base of 4 or 5 Mbps plus about 150 Kbps per concurrent user. But having said that, you will likely find that Framehawk greatly outperforms Thinwire on a 2 Mbps VSAT satellite connection because of the combination of packet loss and high latency.”

The reason is that TCP will retransmit packets which are dropped, while UDP is a much simpler protocol without connection setup delays, flow control or retransmission. In order to ensure that all mouse clicks and keystrokes are successfully delivered, Framehawk requires more bandwidth, since UDP is stateless and there is no guarantee that packets are successfully delivered. I believe the Framehawk component of Citrix Receiver has its own "click" tracker which ensures that clicks get delivered, and ensuring that requires more bandwidth.

Comments from Citrix: 

1) While SSL VPNs or any other TCP-based tunnelling like SSH re-direction will definitely cause performance problems for the Framehawk protocol, anything that works at the IP layer like GRE or IKE/IPSec will work well with it. We've designed the protocol to maintain headroom for two extra layers of encapsulation, so you can even multiple-wrap it without effect. Do keep in mind that right now it won't do well with any more layers, since that can cause fragmentation of the enclosed UDP packets, which will affect performance on some networks.

2) While it's based entirely on UDP, the Framehawk protocol does have the ability to send some or all data in a fully reliable and sequenced format. That's what we're using for the keyboard/mouse/touch input channels. Anything that has to pass from the client to the server in a reliable way (such as keystrokes, mouse clicks and touch up/down events) will always do so inside of the protocol. You should never see loss of these events on the server, even at extremely high loss.

And one last comment for anyone else reading this: The Framehawk protocol is specifically designed for improving the user experience on networks with a high BDP (bandwidth delay product) and random loss. In normal LAN/MAN/WAN networks with either no loss or predominantly congestive loss and low BDP, Framehawk will basically start acting like TCP and start throttling itself if it does run into congestion. At some point, however, the techniques it uses have a minimal amount of bandwidth (which is hard to describe, since we composite multiple workloads on different parts of the screen). In those cases other techniques would be needed, like Thinwire Advanced. As we move down the road with our integration into existing Citrix products and start leveraging our network optimizations with bandwidth-optimized protocols like Thinwire and Thinwire Advanced, expect that to just get better!

Nutanix and Citrix–Better together

Now, Citrix has long supported most of the different hypervisors, meaning that customers get the flexibility to choose among a number of hypervisors if they are planning to use XenApp/XenDesktop. This support is included for Netscaler as well.

So as of today, Citrix supports XenServer, Hyper-V, VMware, Amazon and CloudPlatform, and Azure support is on the way. A month back, Citrix announced a partnership with Nutanix, stating that the Acropolis Hypervisor is Citrix Ready for XenApp/XenDesktop, Netscaler and ShareFile as well. This means that customers will get a better integration with the hypervisor as well as support for these products on the Nutanix hypervisor.

Kees Baggerman from Nutanix posted a teaser on his website of how the integration might look: http://blog.myvirtualvision.com/2015/08/31/citrix-launches-cwc-whats-in-it-for-you/


Now, this is mostly focused on Citrix Workspace Cloud, but it was also stated that this is coming for traditional on-premises XenApp/XenDesktop as well.


I am also looking forward to deeper integration of, for instance, Machine Creation Services with Shadow Clones on the Acropolis Hypervisor!

Optimizing web content with Citrix Netscaler

This post is based upon a session I held for a partner in Norway: how can we use Netscaler to optimize web content?

Let's face it, the trends are changing:

* Users are becoming less patient, meaning that they demand that applications/services respond quicker (more than 40% of users drop out if a website takes more than 5-10 seconds to load). Think about how that can affect a webshop or eCommerce site.

* More and more mobile traffic (mobile phones, iPads and laptops communicating over 3G/4G or WLAN for that matter), and to that we can add that there is more data being sent across the network as well. Web applications become more and more complex, with more code and more components.

* More demands on availability. Users demand that services are available at almost every hour. If we think about it, 5-10 years ago, if something was down for about 10 minutes we didn't think much about it, but now?

* More demands for secure communication. It wasn't that long ago that Facebook and Google switched to SSL as the default for their services. With more and more hacking attempts happening online, a certain level of security is required.

So what can Netscaler do in this equation?

* Optimizing content with Front-end optimization, Caching and Compression

With the latest 10.5 release, Citrix has made a good jump into web content optimization, with features like lazy loading of images, HTML comment removal, minifying JS and inlining CSS. And after the content has been optimized, it can be compressed using GZIP or DEFLATE and sent across the wire. (NOTE: most web servers like Apache and IIS support GZIP and DEFLATE, but it is much more efficient to do this on a dedicated ADC.)

And by using caching to store frequently accessed data, the Netscaler makes a good web optimization platform.

* Optimizing based upon endpoints.

With the current trend, more users are connecting using mobile devices on wireless internet connections, and this needs a better way to communicate as well. A good example here is TCP congestion control: on wireless you have a higher amount of packet loss, which calls for using, for instance, the TCP Westwood congestion algorithm, which is much better suited to wireless connections. Features like MPTCP (on supported devices) allow for higher throughput as well. And the fact that we can apply different TCP settings to different services makes it much more agile.

* High availability

Using features like load balancing and GSLB allows us to deliver a highly available and scalable solution. And using features like AppQoE to prioritize traffic in an eCommerce setting might be a valuable asset. Think of the scenario where we have a webshop: most of our buying customers come from a regular PC, while most mobile users connecting are mostly checking the latest offers. If we were ever to reach our traffic peak, it would be useful to prioritize traffic based upon the connecting endpoint.

* Secure content

Netscaler allows us to create specific SSL profiles which we can attach to different services. For instance, older applications which are used by everyone might not have high security requirements, while on the other hand PCI-DSS requires a high level of security. Add to the mix that we can handle many common DDoS attacks at the TCP level and on HTTP. We can also use the Application Firewall, which handles many application-level attacks; with its learning feature it can block users who do not follow the common user pattern on a website, and we can specify URLs which users are not allowed to access.

So to summarize, the Netscaler can be a good component for optimizing and securing traffic, with a lot of exciting stuff happening in the next year! Stay tuned :)

Setting up a secure XenApp environment – Storefront

So this is part two of my securing XenApp environment series; this time I have moved my focus to Storefront. Now, how does Storefront need to be secured?

In most cases, Storefront is the aggregator that allows clients to connect to a Citrix infrastructure. Storefront is usually located on the internal network while the Netscaler is placed in the DMZ. Even if Storefront sits on the internal network and the firewall and Netscaler do a lot of the security work, there are still things that need to be taken care of on the Storefront itself.

In many cases users also connect to Storefront directly when they are on the internal network, bypassing the Netscaler entirely. And since Storefront is a Windows Server, there are a lot of things to think about.

So where to begin.

1: Set up a base URL with an HTTPS certificate. If you are using an internally signed certificate, make sure that you have a properly set up root CA, which in most cases should be offline; alternatively use a certificate from a public third-party CA, which is also useful in many cases because, if users connect directly to Storefront, their computers might not recognize the internally signed CA.


2: Remove the HTTP binding on the IIS site, to avoid plain HTTP requests.

Use a tool like IIS Crypto to disable the older SSL protocols and the old RC4 ciphers on the IIS server. (A hedged PowerShell sketch of both steps follows below.)
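For reference, both of these steps can be scripted; a minimal sketch using the WebAdministration module and the well-known SCHANNEL registry keys (the site name is a placeholder, and the SCHANNEL change needs a reboot):

Import-Module WebAdministration

# 2: Drop the HTTP binding so only the HTTPS binding remains.
Remove-WebBinding -Name 'Default Web Site' -Protocol http -Port 80

# Disable SSL 3.0 server-side (this is what IIS Crypto toggles under the hood).
$ssl3 = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server'
New-Item -Path $ssl3 -Force | Out-Null
New-ItemProperty -Path $ssl3 -Name Enabled -Value 0 -PropertyType DWord -Force | Out-Null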


You can also define ICA file signing. This allows Citrix Receiver clients which support signed ICA files to verify that the ICA files they receive come from a verified source. http://support.citrix.com/proddocs/topic/dws-storefront-25/dws-configure-conf-ica.html

3: We can also make sure that Citrix Receiver is unable to cache passwords. This can be done by changing authenticate.aspx under C:\inetpub\wwwroot\Citrix\Authentication\Views\ExplicitForms\

where you change the following parameter

<% Html.RenderPartial("SaveCredentialsRequirement",
              SaveCredentials); %>

so that it is commented out:

<%-- Html.RenderPartial("SaveCredentialsRequirement",
                SaveCredentials); --%>

4: Force ICA connections to go through Netscaler using the Optimal Gateway feature of Storefront –> http://support.citrix.com/article/CTX200129. Using this option will also allow you to use Insight to monitor client connections to Citrix and, depending on the Netscaler version, give you some historical data.

And by using Windows pass-through you can have Kerberos authentication to the Storefront and then have the ICA sessions go through the Netscaler –> http://support.citrix.com/article/CTX133982

5: Use SSL for communication with the Delivery Controllers –> http://support.citrix.com/proddocs/topic/xendesktop-7/cds-mng-cntrlr-ssl.html

6: Install Dynamic IP Restrictions on the IIS server; this stops denial-of-service attempts against Storefront from a single IP address. (A hedged sketch follows below.)
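A hedged sketch of switching this on from PowerShell, assuming the feature is installed; the thresholds are examples to tune for your load:

Import-Module WebAdministration

# Feature install on Server 2012 R2, if missing:
# Install-WindowsFeature Web-IP-Security

# Block clients that send more than 20 requests within 200 ms.
$site = 'Default Web Site'  # placeholder site name
$filter = 'system.webServer/security/dynamicIpSecurity/denyByRequestRate'
Set-WebConfigurationProperty -PSPath 'IIS:\' -Location $site -Filter $filter -Name enabled -Value $true
Set-WebConfigurationProperty -PSPath 'IIS:\' -Location $site -Filter $filter -Name maxRequests -Value 20
Set-WebConfigurationProperty -PSPath 'IIS:\' -Location $site -Filter $filter -Name requestIntervalInMilliseconds -Value 200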

7: Keep Windows updated and have antivirus software running (with limited access to the server). Also keep the Windows Firewall running, and only open the ports necessary for communication with AD, the Delivery Controllers and the Netscaler.

8: Define audit policies to log credential validation, Remote Desktop connections, terminal logons and so on: https://technet.microsoft.com/en-us/library/dn319056.aspx (example below)
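The subcategories from that article can be switched on with the built-in auditpol tool, for example:

# Enable success/failure auditing for the categories most relevant to Storefront.
auditpol /set /subcategory:"Credential Validation" /success:enable /failure:enable
auditpol /set /subcategory:"Logon" /success:enable /failure:enable
auditpol /set /subcategory:"Logoff" /success:enable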

9: Use the Storefront Web Config GUI from Citrix to define lockout and session timeout values


10: Use a tool like Operations Manager with, for instance, the ComTrade management pack to monitor the Storefront instances, or just the IIS management pack; this gives good insight into how the IIS server is operating.

11: Make sure that full logging is enabled on the IIS server site.

IIS Logging Configuration for System Center Advisor Log Management

Stay tuned for more; the next part covers the Delivery Controllers and the VDA agents.

Setting up a secure XenApp environment – Netscaler

Now, I had the pleasure of discussing a PCI-DSS compliant XenApp environment with a customer. After working with it for the last couple of days, there is a lot of useful information that I thought I would share.

Now, PCI-DSS compliance is needed for any merchant who accepts credit cards, for instance an e-commerce site, or one using some sort of application. So this includes all sorts of requirements:

* Different procedures for data shredding and logging

* Access control

* Logging and authorization

Now the current PCI-DSS standard is version 3 –> https://www.pcisecuritystandards.org/documents/PCI_DSS_v3.pdf

The different requirements and assessment procedures can be found in that document. Citrix has also created a document on how to set up a compliant XenApp environment: https://www.citrix.com/content/dam/citrix/en_us/documents/products-solutions/pci-dss-success-achieving-compliance-and-increasing-web-application-availability.pdf and you can find some more information here –> http://www.citrix.com/about/legal/security-compliance/security-standards.html

Now, instead of making this a pure PCI-DSS post, I decided to do a more general "how to secure your XenApp environment" post, covering what kind of options we have and where the weaknesses might be.

Now, a typical environment might look like this.


So let's start by exploring the first part of the Citrix infrastructure, which is the Netscaler. In a typical environment it is located in the DMZ, where the front-end firewall does stateful packet inspection of the traffic going back and forth. The most secure way to set up a Netscaler is in one-armed mode, using routing to reach the backend resources, with another firewall in between doing deep packet inspection.

The first thing we need to do on the Netscaler, when setting up Netscaler Gateway for instance, is to disable SSL 3.0, which is part of the defaults. (An MPX can do TLS 1.1 and TLS 1.2, but with a VPX we are limited to TLS 1.0.)

It is also important to use TRUSTED third-party certificates from known vendors without any bad history. Try to avoid SHA-1 based certificates; Citrix now supports SHA-256.

It is important to set up secure access to the management interface (since it uses HTTP by default).


This can be done by using SSL profiles, which can be attached to the Netscaler Gateway.


Also set Deny SSL Renegotiation to NONSECURE. We also need to define some TCP parameters: firstly, make sure that TCP SYN Cookie is enabled, which gives protection against SYN flood attacks, and that SYN Spoof Protection is enabled to protect against spoofed SYN packets.


Under HTTP profiles make sure that the Netscaler drops invalid HTTP requests


Make sure that ICA proxy session migration is enabled; this ensures that only one session at a time is established per user via the Netscaler.


Double-hop can also be an option if we have multiple DMZ zones or a private and an internal zone.

Specify max login attempts and a timeout value, to make sure that your services aren't being hammered by a dictionary attack.


Change the password for the nsroot user!!!


Use an encrypted NTP source, which allows for trustworthy timestamps when logging (run NTP version 4 or above), and also verify that the time zones are set correctly.


Set up an SNMP-based monitoring solution or Command Center to get monitoring information from the Netscaler, or use Syslog as well to get more detailed information. Note that you should use SNMPv3, which gives both authentication and encryption.

Use LDAPS-based authentication against the local Active Directory servers, since plain LDAP is clear text; use TLS, not SSL, and make sure that the Netscaler validates the server certificate of the LDAP server.


It also helps to set up two-factor authentication to provide better protection against stolen credentials. If you are using a two-factor authentication vendor, make sure that it uses the CHAP authentication protocol instead of PAP, since CHAP is a much more secure authentication protocol than PAP.

Use Net Profiles to control the traffic flow from a particular SNIP to the backend resources. (This allows for easier management when setting up firewall rules for access.)


Enable ARP spoof validation, so nobody can forge ARP requests in the zone where the Netscaler is placed (the DMZ).


Use a DNSSEC-enabled DNS server; this allows for signed and validated responses, which makes it difficult to hijack DNS records or do a MITM attack on DNS queries. Note that this requires that you add a nameserver with both TCP and UDP enabled. (Netscaler can function both as a DNSSEC-enabled authoritative DNS server and in proxy mode for DNSSEC.)

If you wish to use Netscaler for VPN access into the first DMZ zone, the first things you need to do are:

1: Update the OPSWAT library


2: Create a preauthentication policy to check for updated antivirus software.


3: The same goes for patch updates.


In most cases, try to use the latest firmware; Citrix releases new Netscaler firmware at least once every three months, containing bug fixes as well as security patches.

Do not activate enhanced authentication feedback; it lets attackers learn more about your lockout policies and whether a user is nonexistent, locked out, disabled and so on.


Set up STA communication using HTTPS (which requires a valid certificate and that the Netscaler trusts the root CA). You also need to set up Storefront with a valid certificate from a trusted root CA. This should not be an internal PKI root CA, since third-party vendors have a much higher level of physical security.

If you for some reason cannot use SSL/TLS-based communication with backend resources, you can use MACsec, which is a layer 2 feature that allows for encrypted traffic between nodes on Ethernet.