Security settings–NetScaler Gateway

NOTE: This content is from my eBook, but based upon the number of queries I get, I decided to publish it on my blog to make it easier to search.

Security settings

When setting up a NetScaler Gateway, in most cases it will be exposed externally for remote access, to deliver Citrix to remote workers. By exposing a service externally you also open yourself up to attacks. There are many possible attack vectors:

· Brute-force attacks

· DDoS

· Protocol weakness

· Security exploits

Therefore, it is important to think about this when setting up a NetScaler Gateway virtual server. When setting up a SmartAccess server and allowing full VPN access for your endpoints, you need to take extra care when setting up your policies. This section is therefore separated into different groups, listing the different settings we can configure to get a higher level of security on our virtual server.

General settings:
Under NetScaler Gateway à Global Settings à Change authentication AAA settings à define Max Login Attempts and then Failed Login Timeout. This helps avoid dictionary attacks by locking out authentication attempts after a certain number of failed attempts.

Here we also have the enhanced authentication feedback option, which helps end users by notifying them what went wrong when they try to log in, but it can also expose critical information to malicious attackers.


This setting can be defined either globally or per virtual server, but if we are using multiple virtual servers it is best to configure it globally so it affects all virtual servers.
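These settings can also be configured from the CLI. A minimal sketch (the threshold values here are examples, not recommendations; adjust them to your own lockout policy):

set aaa parameter -maxLoginAttempts 5 -failedLoginTimeout 15
set aaa parameter -enableEnhancedAuthFeedback NO

The first command locks out authentication attempts for 15 minutes after 5 failed logins; the second disables the enhanced authentication feedback described above.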

Session Policies:

If we are implementing a full VPN solution, we can also specify multiple settings depending on what we want. Best practice is not to give full access, but to use split tunneling and specify intranet applications for those applications the end users need access to. This way, only traffic destined for those applications will be processed by the NetScaler Gateway plug-in.

In most cases an end user might not require access for a long period of time and might forget to disconnect the session. In that case we can set up a timeout which decides when a session should be forcefully disconnected. This is done under Session Policies à Network Configuration à Advanced Settings.
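As a sketch, the split tunneling, intranet application and timeout settings above can also be configured from the CLI (the object names, the timeout values, and the 10.0.0.0/24 range are placeholders for your own environment):

add vpn sessionAction prof_splittunnel -splitTunnel ON -sessTimeout 480
add vpn intranetApplication intra_app ANY 10.0.0.0 -netmask 255.255.255.0 -interception TRANSPARENT
bind vpn vserver gw_vserver -intranetApplication intra_app

With this in place, only traffic towards 10.0.0.0/24 is sent through the tunnel, and sessions are disconnected after 480 minutes.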


It is also useful to have more specific session policies depending on what type of resource is trying to connect. For instance, we can have a session policy using OPSWAT expressions to prevent non-healthy endpoints from connecting to our environment.

For instance, a session policy with OPSWAT rules to determine whether the endpoint is running an authentic antivirus solution:
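The actual OPSWAT expression string is normally generated by the EPA expression editor in the GUI, so as a hedged sketch only, a classic client security expression checking for a running antivirus process could look like this (the process name ccSvcHst.exe and both policy names are just examples):

add vpn sessionPolicy pol_av_check "CLIENT.APPLICATION.PROCESS(ccSvcHst.exe) EXISTS" prof_fullvpn
bind vpn vserver gw_vserver -policy pol_av_check -priority 100

Endpoints that fail the scan simply do not match the policy and never get the session profile applied.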


If the endpoint does not match the requirements, it will not get any access to the Citrix environment. The problem with this is that the check happens after authentication has occurred. We can also use preauthentication policies to do health checks before authentication, but then we cannot filter based upon AAA groups and users, for instance.
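A preauthentication check can be sketched from the CLI like this (again, the scanned process name and object names are only examples):

add aaa preauthenticationaction preauth_allow ALLOW
add aaa preauthenticationpolicy preauth_av "CLIENT.APPLICATION.PROCESS(ccSvcHst.exe) EXISTS" preauth_allow
bind vpn vserver gw_vserver -policy preauth_av -priority 100

Since this runs before logon, no AAA user or group context is available to filter on, as noted above.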

In addition, we can use these settings in conjunction with SmartAccess to control access to the Citrix environment and which group policies should be processed.

We can also specify idle timeout values in the session profile, together with split tunneling and session timeout.


Again, an issue arises if an attacker has access to an end user's username and password, and even has access to the end user's device; then the attacker will be able to access the environment. When possible, try to add two-factor authentication to minimize these types of attacks.

That way, even if an attacker has access to the end user's username and password, they will not be able to log in to the environment.

In addition, if we are not using split tunneling, we should configure authorization rules, which we can bind to the NetScaler Gateway to define ALLOW/DENY rules to internal resources using client expressions; these are then bound to AAA users or groups.

If this is not possible, define ACL rules based upon the intranet IP range that is defined as part of the NetScaler Gateway.
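Both options can be sketched from the CLI (the group name, subnets and IP ranges are placeholders for your own environment):

add authorization policy authz_intranet "CLIENT.IP.DST.IN_SUBNET(10.10.10.0/24)" ALLOW
bind aaa group RemoteWorkers -policy authz_intranet -priority 100
add ns acl acl_iip ALLOW -srcIP 10.20.0.1-10.20.0.254 -destIP 10.10.10.0-10.10.10.255
apply ns acls

The authorization policy allows the RemoteWorkers group to reach one internal subnet; the ACL variant restricts what the intranet IP range handed out by the Gateway can reach.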

Now, a lot of people focus on the SSL/TLS configuration of the virtual server. While that in itself is important, it should be part of the bigger picture, since it only addresses protocol exploits of SSL/TLS which might allow a malicious attacker to decrypt the secure connection and then do a man-in-the-middle attack; while theoretically possible, this is not easily achieved.

By default, when configuring SSL/TLS settings on NetScaler, we can either use SSL profiles or use SSL parameters for each virtual server. If we use profiles, we cannot configure SSL parameters, and vice versa.

NOTE: We also have the option to enable a global default SSL profile, which will be attached to all SSL-based virtual servers. This will use the ns_default_ssl_profile_frontend policy for front-end facing virtual servers. It can be enabled under Traffic Management à SSL à Change advanced SSL settings à enable default SSL profile; note that after you enable it, you cannot disable it.

The different SSL profiles can be viewed under System à Profiles à SSL Profile. By default there are two profiles: one for front-end connections (for instance, virtual servers) and one for back-end connections (services, service groups).

Now, there are a few main factors that affect security when using the SSL/TLS protocol:

· Certificate (private key size; what does the certificate support?)

· Protocol use (SSL or TLS?)

· Ciphers (define how strong an algorithm should be used for encryption, and which algorithms should be used for authenticity and authentication). Ciphers are attached to an SSL profile as well.

NOTE: There is a well-known website commonly used for testing the SSL/TLS security level of web services, where the score goes from F to A+, A+ being the best possible score. This can only be achieved on the Gateway virtual server if it uses only the more secure protocols and ciphers, which give a high level of encryption, and if we have a valid certificate. Again, I have to emphasize that this only addresses protocol weaknesses.

For our virtual server to score A+ on the test, there are some modifications that need to be done against the SSL profile or using SSL parameters.

· Bind the entire certificate chain to the virtual server, which means the server certificate, any intermediate certificates, and the root certificate

· Deny SSL renegotiation (this is used by a client to renegotiate which protocol to use, which attackers might exploit to downgrade a session from TLS 1.2 to an SSL version with lower security). Setting it to FRONTEND_CLIENTSERVER will disallow renegotiation.


· Make sure that SSL 3 is disabled (this is disabled by default in the default profiles and should be reflected in the front-end profile)


· Specify a supported cipher group, which ensures a high level of encryption; this is added under the SSL profile as well. A cipher group specifies which SSL/TLS protocols should be used and which type of encryption.

Another thing to be aware of is that some options are available only for front-end connections, not back-end connections. Also, not all ciphers are available on VPX editions. If you try to create a cipher group with ciphers that are not supported on the VPX, you will get an error message.
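The modifications above can also be expressed as a front-end SSL profile sketch from the CLI (the profile and virtual server names are placeholders; verify which protocol options your firmware version exposes):

add ssl profile fe_secure -sslProfileType FrontEnd -ssl3 DISABLED -tls1 ENABLED -tls11 ENABLED -tls12 ENABLED -denySSLReneg FRONTEND_CLIENTSERVER
set ssl vserver gw_vserver -sslProfile fe_secure

This disables SSL 3, enables the TLS versions, denies client/server renegotiation, and attaches the profile to the Gateway virtual server; the cipher group from the examples below is then bound to the same profile.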

· The simplest way is to create a cipher group using the CLI.
VPX example:
add ssl cipher vpx-ciphers
bind ssl cipher vpx-ciphers -cipherName TLS1.2-ECDHE-RSA-AES-128-SHA256
bind ssl cipher vpx-ciphers -cipherName TLS1-ECDHE-RSA-AES256-SHA
bind ssl cipher vpx-ciphers -cipherName TLS1-ECDHE-RSA-AES128-SHA
bind ssl cipher vpx-ciphers -cipherName TLS1-AES-256-CBC-SHA
bind ssl cipher vpx-ciphers -cipherName TLS1-AES-128-CBC-SHA

· MPX Example:
add ssl cipher mpx-ciphers
bind ssl cipher mpx-ciphers -cipherName TLS1.2-ECDHE-RSA-AES256-GCM-SHA384
bind ssl cipher mpx-ciphers -cipherName TLS1.2-ECDHE-RSA-AES128-GCM-SHA256
bind ssl cipher mpx-ciphers -cipherName TLS1.2-ECDHE-RSA-AES-256-SHA384
bind ssl cipher mpx-ciphers -cipherName TLS1.2-ECDHE-RSA-AES-128-SHA256
bind ssl cipher mpx-ciphers -cipherName TLS1-ECDHE-RSA-AES256-SHA
bind ssl cipher mpx-ciphers -cipherName TLS1-ECDHE-RSA-AES128-SHA
bind ssl cipher mpx-ciphers -cipherName TLS1.2-DHE-RSA-AES256-GCM-SHA384
bind ssl cipher mpx-ciphers -cipherName TLS1.2-DHE-RSA-AES128-GCM-SHA256
bind ssl cipher mpx-ciphers -cipherName TLS1-DHE-RSA-AES-256-CBC-SHA
bind ssl cipher mpx-ciphers -cipherName TLS1-DHE-RSA-AES-128-CBC-SHA

· Implement HSTS and HTTP -> HTTPS redirection

One of the last things we need to configure is HSTS (HTTP Strict Transport Security), a security mechanism that protects websites against protocol downgrade attacks and cookie hijacking. It allows the NetScaler to notify web browsers that they should only interact with its services using HTTPS. This is a feature Google first implemented in Chrome, but other browsers such as Firefox and Internet Explorer now support it. In order to configure it there are multiple steps.

· Have a valid certificate on the web service (root, any intermediate certificates, and the server certificate)

· Redirect all traffic from HTTP to HTTPS

· Serve an HSTS header on the base domain for HTTPS requests with header
Strict-Transport-Security: max-age=10886400; includeSubDomains; preload

· After this is done we can submit the domain to the Google Chrome preload list here à

Now, to implement the HTTP to HTTPS redirect, the simplest way is to set up a simple load-balancing virtual server on HTTP port 80 using the same IP as the NetScaler Gateway virtual server, and then set up a redirect.

NOTE: If you use the NetScaler Gateway wizard to configure NetScaler Gateway, it uses this setup to configure the HTTP to HTTPS redirect.

Go into Traffic Management à Load Balancing à Virtual Servers. Click Add, give it a descriptive name, enter the same IP address as the NetScaler Gateway virtual server, and use HTTP as the protocol and 80 as the port.


Click OK; when asked to bind a service to the virtual server, click Continue. Click on the Protection pane on the right side, and there, under Redirect URL, enter the FQDN of the NetScaler Gateway virtual server using HTTPS.


After that, click OK and we are done.
Then we need to implement an HTTP rewrite policy that inserts the HSTS header. Go into AppExpert à Rewrite à go into Actions first and click Add.

Give it a name like INSERT_HSTS_HEADER; under Type choose INSERT_HTTP_HEADER, under Header Name enter Strict-Transport-Security, under Expression enter "max-age=157680000", and then click Create.


Then go back to the Rewrite menu. Go into Policies and click Add. Give it a name, IMPLEMENT_HSTS_HEADER for instance; under Action choose the rewrite action we created, and under Expression use the expression true.


Then click Add. After we are done with this, we need to add the rewrite policy to the NetScaler Gateway virtual server. Go to NetScaler Gateway à Virtual Servers à choose the existing virtual server, click Edit à Policies, choose Rewrite, and choose Response.


Then bind the existing rewrite rule we created and click OK, and we are done with the HSTS configuration.
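The whole redirect and HSTS setup above can also be sketched from the CLI (the IP address, FQDN and object names are placeholders for your own environment):

add lb vserver lb_http_redirect HTTP 192.0.2.10 80 -redirectURL "https://gateway.example.com"
add rewrite action INSERT_HSTS_HEADER insert_http_header Strict-Transport-Security "\"max-age=157680000\""
add rewrite policy IMPLEMENT_HSTS_HEADER true INSERT_HSTS_HEADER
bind vpn vserver gw_vserver -policy IMPLEMENT_HSTS_HEADER -priority 100 -gotoPriorityExpression END -type RESPONSE

The load-balancing virtual server has no services bound, so the redirect URL kicks in for all port 80 traffic, and the rewrite policy inserts the HSTS header into every HTTPS response.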

The simplest way to confirm that the HSTS settings and ciphers are properly set up is either to run an external SSL test or to use the developer tools in Internet Explorer. These can be accessed by pressing F12 within Internet Explorer; look at the HTTP headers when connecting to the NetScaler Gateway virtual server.




NOTE: The simplest way to test cipher groups when configuring NetScaler Gateway is using OpenSSL, which can be used for this purpose; more info in this blog post here à
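For instance, to check whether the Gateway accepts a specific cipher over TLS 1.2, something like this can be run from any machine with OpenSSL installed (the hostname and cipher name are examples):

openssl s_client -connect gateway.example.com:443 -tls1_2 -cipher ECDHE-RSA-AES128-SHA256

If the handshake succeeds, the negotiated protocol and cipher are printed in the session output; a handshake failure means that combination is not accepted by the virtual server.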

Splunk and NetScaler together

I was contacted a while back and asked whether Splunk and NetScaler work together. To be honest, I hadn't tried that combination yet. So last night we decided to give it a try, using our regular Splunk setup.

Now, in order to set up Splunk with NetScaler, we need an IPFIX collector set up on the Splunk server. This is possible using the Splunk add-on for IPFIX, which can be found here –>

This allows us to gather data using IPFIX into Splunk. For those who are not aware, Citrix AppFlow is basically just the IPFIX protocol carrying raw binary-encoded data. In order for the IPFIX collector to be able to interpret this data, the IPFIX sender needs to send its templates across to the collector. So when we first set up Splunk and NetScaler, we will notice that data is not immediately interpreted, because the collector does not have the templates available yet, and data will be listed as:

TimeStamp="2014-07-16T21:00:04"; Template="264"; Observer="1"; Address=""; Port="2203"; ParseError="Template not known (yet).";

We can specify in the AppFlow settings on the NetScaler how often it should send the templates across, and we can also specify which fields we want in the AppFlow data that is exported to the collector.


The template interval is by default set to 10 minutes (I've changed it down to 1 minute), but no worry, you do not have to change the default value, since Splunk will buffer the templates; after this we are good to go. Now, while Citrix Insight for instance will only get information related to ICA sessions or web sessions, the IPFIX flow from NetScaler actually delivers a lot more useful information.
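The template refresh interval can also be set from the CLI; note that the CLI value is in seconds, so 1 minute is 60:

set appflow param -templateRefresh 60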

First add Splunk as an AppFlow Collector


Configure an AppFlow Action which is bound to the collector


Lastly, define a policy which decides which action to trigger when the NetScaler should generate an IPFIX flow for a session.


We can use the general expression true, which will in essence generate IPFIX traffic for everything: load balancing, AppFirewall, syslog, ICA sessions, and so on. If we want to filter what NetScaler sends to the collector, we can use general HTTP expressions like URL or User-Agent, as we typically do for session policies, to filter based upon Citrix ICA sessions for instance.

NOTE: After we have created the policy we have to bind it to a gateway virtual server or globally.
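The three steps above (collector, action, policy plus binding) can be sketched from the CLI like this (the collector IP and object names are placeholders; 4739 is the common IPFIX port):

add appflow collector splunk_collector -IPAddress 192.0.2.50 -port 4739
add appflow action splunk_action -collectors splunk_collector
add appflow policy splunk_policy true splunk_action
bind vpn vserver gw_vserver -policy splunk_policy -priority 100 -type REQUEST

The true expression exports flows for everything hitting the virtual server, as described above; replace it with an HTTP expression to narrow the export down.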

After that is done, we have to do the Splunk part. Log into the Splunk console, go into the Apps menu, and choose Install app from file. From there, point to the IPFIX file which can be found in the link I listed earlier.

When that is configured, you should notice that there is an IPFIX data input by going into Settings –> Data Inputs.


If there isn't a number there, just click Add New, enter the default settings, and give it a name. Now you can see that AppFlow records are being generated on the NetScaler by using the command

stat appflow


They should also be appearing in Splunk. Go back to the main menu and choose the Search & Reporting option.


In the search option there we can just use the search prefix source="NSIPOFNETSCALER:*" to see which data has come from the NetScaler.


So notice there is a lot of data here, since I chose the true expression in the AppFlow settings, but I can easily sort between the different fields. So let's say I want to get all users who have accessed Citrix NetScaler Gateway:

source="*" | stats count by netscalerAaaUsername


Citrix Receiver versions connecting?


There are endless possibilities with this module, being able to instantly search the data; it can do syslog checks as well, for instance if we made some changes to the NetScaler.


VMware Horizon 7 with Blast Extreme and Nvidia vGPU

So when reading the upcoming release notes for Horizon 7, I was really interested in the new GPU and protocol features in Horizon View 7. As I have blogged about earlier, Blast Extreme is essentially a TCP-based remote display protocol –> (it can also be configured to use UDP, but by default it is set to TCP).

We also noticed that Blast Extreme wasn't really bandwidth-friendly compared to PCoIP, while it has much better screen quality and did a much better job when it came to increased latency and packet loss. Now, as part of Blast Extreme we also have the option to do H.264 decoding if we have a GPU that supports it.

NOTE: This requires View client version 4 and above. H.264 decoding has the following restrictions:

  • Multiple monitors are not supported. 
  • The maximum resolution that is supported depends on the capability of the graphical processing unit (GPU) on the client. A GPU that can support 4K resolution for JPEG/PNG might not be able to support 4K resolution for H.264. If a resolution for H.264 is not supported, Horizon Client uses JPEG/PNG instead.

Defining use of H.264 can be done from within the Horizon View client


Now, I was fortunate enough to be able to borrow access to a demo environment, which was running a Dell R720 server with K2 cards. This environment was running all the default settings, with no proper tuning whatsoever, and had two VDI instances running Windows 10, which had the same resources available to them. On top of that, this environment was also running NSX; will this affect the performance? Let's see.

The VDI instance running with vGPU was assigned a K280 template.


The other VDI instance was just running plain SVGA 3D.


Since I had simply borrowed the lab, I didn't want to do too much harm to it. So I did some simple load testing, which was to run something that pushed the bandwidth usage to the roof and pushed the CPU on the VDI instances. Simple enough, I played a YouTube clip in full 1080p to do a simple comparison.

vGPU VDI instance


Software VGA VDI instance


This was a sample YouTube clip; in 3D-based tests I noticed that the bandwidth usage with vGPU was up to about 50% lower than with software VGA.

Now, the most impressive part isn't just the bandwidth usage: the screen quality difference was quite huge! What didn't show in the network testing is the higher number of FPS.

Another metric is the CPU % usage on the VDI instances.

vGPU VDI instance


This is largely due to the fact that it can offload the video processing to the GPU.

Software VGA


Hopefully soon I will be able to test a Dell R730 server with M60 cards and do some more in-depth analysis as well. You can also read more on the VMware blog here –>

Think you’ve seen all the features coming in Windows Server 2016? Think again!

There is a lot of buzz happening around Windows Server 2016 these days, and no wonder! There is so much development happening in the 2016 release that most people can't wrap their heads around it. Most people are concerned about the price increase for 2016, but after reading through this post you will understand WHY they are increasing the price.

Now, if you look at the buzzwords flowing around IT these days, there is a lot going on:

  • Containers
  • HybridIT
  • Software-defined
  • GPU
  • DevOps

Now, Microsoft looked at this, and at what the competitors were doing, and thought: how can we compete in this space? Where are we lacking? A lot happened in the 2012 release, both in terms of management and in terms of features. Much was happening in Hyper-V, networking (NIC teaming, NVGRE), and storage with Storage Spaces support.

So what is happening in 2016? Well, most people are caught up in containers, Nano Server, and Hyper-V. Even though those are important updates and I welcome them, Microsoft has its eye on the bigger picture, which I will draw at the end…

So let's explore the upcoming features, starting with storage, which is always interesting to take a closer look at.


  • Storage Replica (not to be confused with DFS-R) allows us to do volume-based replication between server and server, disk and disk on the same server, or cluster and cluster, regardless of storage vendor. This type of feature also opens up for stretched-cluster scenarios between two datacenter sites, for instance.
  • Storage Spaces Direct allows us to set up Storage Spaces across multiple nodes using local disks. This was the natural next step for Storage Spaces, allowing customers to set up simple, cost-effective, highly available storage solutions. This feature can also be combined with Hyper-V, which allows for the most cost-effective hyperconverged solution on the market.
  • Deduplication, which before was limited to a set amount of data and to one CPU core (single thread), has now been updated to support up to 64 TB of data and can run on multiple threads! (Still no support for Hyper-V-based workloads.)
  • ReFS becomes more of a de facto standard for Hyper-V workloads running on SMB shares, with ReFS-accelerated VHDX operations which speed up the process of creating fixed disks and doing checkpoint operations.

So if we look at the feature set, it is mostly aimed at enabling hyperconvergence, and adding a vendor-neutral storage replication option also opens up for other cluster scenarios. Now, even though it is still in tech preview, I'm missing options to do dedup for Hyper-V workloads, and to deliver Storage Spaces Direct and Hyper-V with data locality options.


Now, there is also a lot happening around Hyper-V! One of the problems it has been facing is that… well, there is still a lot of Windows Server in there. Even though we have Server Core, the promised "less patching" scenario wasn't fulfilled as intended. So with the introduction of Nano Server, Hyper-V is becoming… well, more like ESXi in terms of small footprint, CLI only, and remote management only, which should be the core focus when setting up Hyper-V: having a rock-solid foundation for the virtualization platform to stand on.

  • Discrete Device Assignment (allows us to do passthrough from a physical PCIe device to a virtual machine; this also opens up for GPU passthrough, for instance, which is the same feature that is coming in the N-series in Azure)
  • Shielded Virtual Machines (allows a complete lockdown of the virtual machine, which might be an important requirement in a service provider environment, so that the virtualization administrator does not have any access to the customer's virtual machine whatsoever)
  • PowerShell Direct (allows us to open PowerShell connections directly to virtual machines without the need for network access; opens up for easy automation before the network is connected)
  • SET, switch-embedded teaming (a new concept which combines NIC teaming with a Hyper-V switch; in 2012 R2 these were two separate logical objects, but they have been combined in 2016. It also allows for RDMA-based NIC teaming with Hyper-V switches, which is also a new feature)
  • Production checkpoints (I'm guessing most people were confused when Microsoft renamed snapshots to checkpoints? Well, time to get even more confused. Microsoft now has two types of checkpoints: production checkpoints, which are the default and the preferred way to do backups, and standard checkpoints, the old-fashioned snapshots which also include the running memory)
  • Cluster resiliency, both in terms of node quarantine and pausing virtual machines if storage goes down
  • CBT (Hyper-V will now include changed block tracking; no more vendor-specific CBT filter drivers, yay!)
  • Additional Hyper-V switch extensions (we can now add additional extensions into the Hyper-V switch, one of which is an Azure switch extension to do traffic forwarding)
  • Memory and vNIC extensibility at runtime (allows us to increase memory and hot-add NICs at runtime for virtual machines)
  • Rolling cluster upgrade from 2012 R2 to 2016 (allows node-based upgrades from 2012 R2 to 2016 without taking down the cluster)
  • Nested virtualization support as well!

So in 2012 and R2 we could see Microsoft focusing on doing a lot of catch-up with VMware in terms of scalability. With this release, the focus is on taking the features they use in Azure, like the VFP switch extension, making the cluster even more resilient, and adding features which should ALREADY be present in the product, like CBT.

Remote Desktop Services

Anyone working in EUC land knows that there are a lot of big fights happening between Citrix and VMware these days; well, Microsoft wants in. With the 2012 release, Microsoft did a lot in terms of management to make things a lot simpler, which they did. They also did a bit in the GPU space, but Citrix and VMware took that train even further. Now with 2016 we can see that Microsoft is doing a lot to catch up:

  • RemoteFX vGPU for Generation 2 virtual machines
  • RemoteFX vGPU for server virtual machines running RDSH
  • RemoteFX vGPU support with OpenGL 4.4 and OpenCL 1.1 API
  • Personal session desktops, which allow us to provision an RDSH session host VM for each user (yes! Microsoft is actually bypassing its own VDI licensing rules by using a server OS to provision this). This feature is as of now PowerShell-only.
  • AVC444 mode

Now it makes sense that Microsoft is adding GPU capabilities to their operating system, since it is also the base for Azure, which allows them to port these features easily to Azure once they have updated the base to 2016 as well.

Now I've gone through a lot of the different features in RDS, Hyper-V, and storage. While the new features coming there are most welcome, the one area where Microsoft did the most work in the 2016 release is without a doubt the networking stack, which interestingly is also the part with the least amount of documentation. So what's new?

  • SET (switch-embedded teaming), talked about earlier
  • PacketDirect (now this is where things get interesting. Microsoft has been using the default NDIS stack for many years, but that is a general-purpose networking stack covering Bluetooth, WLAN, LAN and so on, and it is not really made for pure datacenter connections. With more and more networking features becoming NFV (network function virtualization), Microsoft needed to change its stack to be able to process more packets with less overhead, and this is where PacketDirect comes into the picture. Want to know more about this feature? There is an excellent YouTube video here –>
  • VXLAN tunneling support (boy, this is a bit embarrassing. Microsoft pushed the NVGRE tunneling protocol in 2012 R2, which allowed stretching L2 over an L3 network, while VMware was pushing VXLAN, which is also part of their NSX portfolio. NVGRE is based upon GRE while VXLAN uses UDP; the purpose of each tunneling protocol is pretty much the same, but VMware and the other vendors focused on VXLAN, and therefore Microsoft changed direction. NVGRE is still supported, but for instance when you set up Azure Stack, the default protocol is VXLAN)
  • Azure flow engine inside the Hyper-V switch
  • Distributed datacenter firewall (this is one of the first NFV features in 2016: a 5-tuple stateful multitenant firewall solution which can be set per virtual machine and which also allows for microsegmentation. Think about the possibility of not actually needing a physical firewall to protect east-west traffic inside the datacenter)
  • Software load balancer (another NFV feature which allows us to set up load balancing for virtual machines running on Hyper-V; this uses Direct Server Return and is pretty much the same load-balancing capability used in Azure)
  • Network Controller (this is pretty much the brains of the network in 2016, allowing us to manage and automate the entire network using this Windows Server role. The Network Controller has the ability to manage all of the virtual networks (VXLAN, NVGRE, software load balancer, distributed datacenter firewall, service chaining) and can be managed using REST APIs or using Virtual Machine Manager and Operations Manager. This role will also be used to "map" the network, extending both the virtual and the physical network, and it will be able to "talk" to physical network devices to allow for monitoring and automation, which I'm guessing is the natural extension of the OMI support that was available in 2012. The Network Controller also supports OVSDB!)

Now, if you think about it, being able to squeeze more packets through the network is crucial for all the services running "above" it to work even better. So PacketDirect is a nice addition to the mix, and moving other features like load balancing and firewalling, which are already included in Azure, to the on-premises stack allows us to do more with less.

Now, back to some of the core features which fall outside of any main category: we have Nano Server, which is a headless server deployment option. As mentioned earlier, the promise of Server Core didn't actually do that much in terms of the number of patches needed.


Based upon 2015 patch numbers, this is where Nano Server is going to ramp up, giving an even better foundation to stand on, where we actually have an operating system which only supports the features we need. It has a footprint of 400 MB, and since it is not a default OS deployment option, we have to build our own custom Nano Server image to actually deploy it.

Likewise, the introduction of containers allows us to use Windows for a totally different purpose: configuring containers, which basically do core operating system runtime isolation. Instead of having virtual machines with delegated resources from the underlying operating system, containers can essentially slice the OS into multiple parts and allow for multiple runtimes, each with its own IP address and runtime environment. Now, this is essentially aimed at microservices and web services, but we will see A LOT more happening here over the years, since a lot of development is being done in Azure in terms of Mesosphere support and so on.

So what can we expect from Windows Server 2016? Not much GUI polishing; Microsoft is dedicated to improving their core features and extending those, where the core focus is on:

  • A solid robust infrastructure using Nano Server
  • Expanding into hyperconvergence and stretched-cluster scenarios
  • Expanding network capabilities, taking what they learned from Azure and implementing many of the existing features from there, like load balancing and firewall options
  • Moving more into GPU options, which also allows them to integrate with Azure as well

So as we can see, Microsoft is pushing both ways: things are implemented into Windows Server because they want them in Azure, and on the other hand, things developed in Azure are also moved to the on-premises server product. So now that we have all these products, there are QUITE DIFFERENT scenarios we can set up, which opens up a lot of possibilities.


So is 2016 worth the increase in price? You bet! Should it affect those customers who don't need all these fancy features? Hell no.

Troubleshooting ICA-proxy and authentication sessions NetScaler

This is a section of my latest eBook, but I figured it could be more useful as a blog section which people can reference if needed; it also makes it easier for me to update when new stuff appears, to give simple resolutions for known errors.

Cannot complete your request

After logging in to the NetScaler Gateway, the end user is redirected to the StoreFront page and gets the error message "Cannot complete your request".


You may also notice an error in Event Viewer on the StoreFront server under Application and Services Logs -> Citrix Delivery Services, with the error message "None of the AG call back services responded".


This is often the case if StoreFront cannot talk back to the callback URL, which is listed under Manage NetScaler Gateways à Edit NetScaler Gateway à Authentication Settings à Callback URL. Make sure this URL is accessible from the StoreFront server. If this is not possible because of network segmentation, you can deploy a dummy NetScaler Gateway VIP in the internal network.
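If you need such a dummy internal NetScaler Gateway VIP for the callback URL, a minimal sketch from the CLI (the IP address and certificate key name are placeholders):

add vpn vserver gw_callback SSL 10.0.0.100 443
bind ssl vserver gw_callback -certkeyName gateway_cert

The callback URL in StoreFront can then be pointed at this internal VIP instead of the external one.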

If you notice an error in Event Viewer stating "Citrix AGBasic Login request has failed", it might be that different domains are specified in the NetScaler session policy and in StoreFront. If you have specified a domain name in StoreFront under Manage Authentication -> Pass-through from NetScaler Gateway -> Configure trusted domains, the same domain name must be used in the session policy as well.

If you notice an error in Event Viewer stating "Failed to run discovery", this is most likely because a proper SSL certificate has not been configured in the IIS administration console on the StoreFront server.

Your logon has expired

You are prompted for another authentication after logging into the NetScaler Gateway portal, when redirected to the StoreFront portal, and then this error message appears.


You can also notice an error in Event Viewer on the StoreFront server under Application and Services Logs -> Citrix Delivery Services, stating "A request was sent to a service that was detected as passing through a gateway, but none of these matched the request."


This is typically the case if the NetScaler Gateway URL is configured wrongly. This URL needs to be the same as what the end users are using; otherwise StoreFront will not trust the incoming request and will ignore the authentication attempt.


Unknown Client error 1110

This is a generic error which might occur in many different scenarios, but here are some key things to check to find the root cause of the issue.


· Is the STA available on the NetScaler and marked as UP? (This can be checked under NetScaler Gateway -> Virtual Server -> Published Applications -> STA Server.)
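The same check can be done from the CLI; a quick sketch (the vserver name is a placeholder) would be:

```
# The output lists each bound STA server and whether it is UP or DOWN
show vpn vserver vs_gateway
```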

Cannot Start Desktop “COMPUTERNAME”

You try launching an application or desktop after authenticating and getting the resources up, and you get the error message cannot start Desktop/Application name.


This might just mean that the resource we are trying to launch is currently unavailable, or that something is wrong with the VDA agent on that resource.

We can also go into Event Viewer on the StoreFront server to take a closer look at what kind of error is actually happening, under Application and Services Logs -> Citrix Delivery Services. We may get an error message here stating "All the configured Secure Ticket Authorities failed to respond".


This might mean that an STA server which StoreFront tries to communicate with is down, or that we have configured the wrong STA server under NetScaler Gateway appliances in StoreFront. This can be checked under Manage NetScaler Gateways -> Edit NetScaler Gateway -> Secure Ticket Authority.

Error: Login exceeds maximum allowed users

When logging in, you get an error message stating that login exceeds maximum allowed users. This is typically the case if we did not place the virtual server in ICA-only mode. By default, the global AAA settings of NetScaler Gateway allow a maximum of 5 concurrent VPN users. If we change the virtual server settings to ICA-only mode, this error will go away.
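From the CLI this is a one-liner (the vserver name is a placeholder):

```
# Put the gateway virtual server in ICA-only (Basic) mode
set vpn vserver vs_gateway -icaOnly ON
```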

Http/1.1 Internal Server Error 43531

After authenticating to the NetScaler Gateway portal, you get a blank page with an error message stating Http/1.1 Internal Server Error 43531. This is typically the case if the Gateway cannot communicate with the StoreFront web site, which might just be a wrong URL in the session policy, for instance.

It can also happen when a client is not matched by any session policy, for instance if we have session policies in place based upon different criteria; anyone falling outside those criteria would get this error message. The easiest way to give them access is to bind a session policy with the highest priority number and an expression of ns_true.
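A hedged sketch of such a catch-all binding, assuming classic policy syntax (policy, profile and vserver names are placeholders; remember that on NetScaler a lower priority number is evaluated first, so the highest number acts as the fallback):

```
# Catch-all session policy: ns_true matches every request
add vpn sessionPolicy pol_catchall ns_true prof_default
bind vpn vserver vs_gateway -policy pol_catchall -priority 65535
```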

403 – Forbidden: Access is denied

After authenticating to the NetScaler Gateway portal, you get a default IIS error message stating "Access is denied". This is typically the case if the session policy does not point directly to the Receiver for Web site on StoreFront. After changing the session policy to point to the direct URL, this error message will go away.
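On the CLI this is the home page setting on the session profile; a sketch (the profile name and URL are placeholders, your store web path will differ):

```
# Point the session profile straight at the Receiver for Web site
set vpn sessionAction prof_default -wihome "https://storefront.domain.local/Citrix/StoreWeb"
```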


In case of a failed authentication attempt, a user will be given a generic error message of:


There are many ways to troubleshoot authentication failures; the simplest one is using the authentication dashboard in the NetScaler UI.


This basically lists the syslog events directly in the UI. Another way is using the CLI: log into the NetScaler appliance using an SSH client, type shell, and then type cat /tmp/aaad.debug.
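The CLI steps above can be sketched as:

```
# From an SSH session on the NetScaler appliance
shell
# Stream authentication debug output as login attempts happen
cat /tmp/aaad.debug
```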

This will list, in real time, all AAA attempts happening against the NetScaler. By default, the NetScaler does not return detailed information when a user has an expired password or a disabled account. However, there is a feature we can enable which gives more detailed feedback to the end user. This feature is called Enhanced Authentication Feedback.

It is enabled under NetScaler Gateway -> Global Settings -> Change authentication AAA settings.

NOTE: This setting is disabled by default, because it might reveal too much information to malicious attackers running a brute-force attack, letting them learn which accounts are enabled and which are not.

It is also worth noting that aaad.debug lists different error codes when there is a failed authentication attempt.

For instance, if a user with a disabled account tries to authenticate.

Send reject with code Rejecting with error code 4011

Citrix has published a list which describes all these error codes and their meaning.

4001 Invalid credentials. Catch-all error from previous versions.

4002 Login not permitted. Catch-all error from previous versions.

4003 Server timeout

4004 System error

4005 Socket error talking to authentication server

4006 Bad (format) user passed to nsaaad

4007 Bad (format) password passed to nsaaad

4008 Password mismatch (when entering new password)

4009 User not found

4010 Restricted login hours

4011 Account disabled

4012 Password expired

4013 No dial-in permission (RADIUS specific)

4014 Error changing password

4015 Account locked
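To make the list above easier to use in practice, here is a small sketch of a POSIX shell helper (the function name and the example log line are my own, not part of NetScaler) that pulls the code out of an aaad.debug reject line and translates it using the table above:

```shell
#!/bin/sh
# Map an nsaaad error code (seen in /tmp/aaad.debug) to its meaning,
# using the Citrix error-code list above.
decode_aaa_error() {
  case "$1" in
    4001) echo "Invalid credentials" ;;
    4002) echo "Login not permitted" ;;
    4003) echo "Server timeout" ;;
    4004) echo "System error" ;;
    4005) echo "Socket error talking to authentication server" ;;
    4006) echo "Bad (format) user passed to nsaaad" ;;
    4007) echo "Bad (format) password passed to nsaaad" ;;
    4008) echo "Password mismatch" ;;
    4009) echo "User not found" ;;
    4010) echo "Restricted login hours" ;;
    4011) echo "Account disabled" ;;
    4012) echo "Password expired" ;;
    4013) echo "No dial-in permission" ;;
    4014) echo "Error changing password" ;;
    4015) echo "Account locked" ;;
    *)    echo "Unknown code $1" ;;
  esac
}

# Example: decode the reject line shown earlier
line='Send reject with code Rejecting with error code 4011'
code=$(printf '%s\n' "$line" | grep -o '[0-9]\{4\}')
decode_aaa_error "$code"   # prints "Account disabled"
```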

Now if a user tries to authenticate but is not matched by any authentication policy (for instance, if we have multiple authentication policies for different groups or network segments, and someone outside those policies tries to authenticate), they are presented with this error message.

The simplest way to fix this is to define a catch-all authentication policy with an ns_true expression, which handles all other authentication attempts.
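A hedged sketch of such a catch-all, assuming classic policy syntax and LDAP authentication (the policy, action and vserver names are placeholders):

```
# ns_true matches every request; bind it with the highest priority number
# so it only fires when no other authentication policy matches first
add authentication ldapPolicy pol_ldap_catchall ns_true act_ldap
bind vpn vserver vs_gateway -policy pol_ldap_catchall -priority 65535
```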

Now if an end user tries to authenticate to start a Citrix Receiver session and is presented with this error message:

This is typically the case if a session policy bound to the user has a default authorization policy of DENY. This might be intended, but if not, we should change it to ALLOW.
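On the CLI this is a single parameter on the session profile (the profile name is a placeholder):

```
# Switch the default authorization action from DENY to ALLOW
set vpn sessionAction prof_default -defaultAuthorizationAction ALLOW
```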

Next-generation Application Delivery Controllers?

So I have been involved with some rather exciting projects as of late. I got a bit caught up in how vendors and consultants think about the ADC market, and seeing a lot of new trends that are emerging, I think it's time the ADC vendors start looking in another direction.

Gartner uses this term to describe ADC:
(ADC) are deployed in data centers to optimize application performance, security and resource efficiency by offloading servers, providing deep payload inspection and making the best use of complex protocols. Originally deployed for externally-facing Web applications, they are now used to deliver services for many types of business applications and protocols. Recent developments in software-based and virtual ADC platforms provide more deployment flexibility, especially in cloud services and virtual environments.

So is this accurate anymore? Most people think that an ADC is basically load balancing plus some extra shiny features, and to an extent I agree.

But their main purpose isn’t load balancing, it is Application Delivery!

If you think about it, this is what we have been using Citrix/VMware/Microsoft to do for Windows-based application delivery for many years, but with the rise of web applications in the enterprise, and with more and more enterprises moving to cloud/hybrid solutions, an ADC solution will become more and more important over the next couple of years!

Now, what would I like to see in a next-generation Application Delivery Controller solution?

  • Native virtualization support (not just support for running an appliance on a hypervisor, but being able to interact with it! Looking at the services that are running, doing automatic setup and load balancing of services; looking at external services and setting up an application firewall, for instance! The use of NFV should also allow customers to virtualize more of the workloads and no longer need a physical device)
  • Cloud integration (hybrid IT/cloud is coming; many are already there and more are on the way. The ADC should be a central point aggregating applications across different solutions, not just the on-premises applications)
  • Identity (again, with the growing list of SaaS applications using identity solutions, we have the SAML, OAuth and WS-Federation protocols, mixed with different on-premises applications that use NTLM/Kerberos. The ADC should be able to deliver SSO across different applications on behalf of the user, so users do not need to be bothered with different authentication mechanisms. Now, many would argue that identity solutions should take care of this, but I disagree: they should focus on lifecycle management and let the ADC focus on the SSO mechanism, since it is a network device anyway)
  • Microservices and Web 2.0! Looking at the landscape, Microsoft is pushing hard with Mesosphere, containers and microservices, which are essentially small web services. It should be essential that an ADC supports and integrates directly with these types of services, so that developers can easily provision load balancing features for their services
  • Automation, automation, automation! REST API, CLI, PowerShell
  • Insight! This is the crown jewel: giving proper insight into how an application is performing. Since an ADC is in most cases the heart between the users and the services running internally on the different servers, it has unique insight into how the different applications are running.
  • Security! With the growing list of web applications, we also see a growing list of web exploits. An ADC can look at web traffic and detect attacks at layer 7! Many are already delivering this on their ADC, but few have the ability to tie this together with insight as well. What about giving the admins some insight (how secure is my service, actually)?
  • Optimization! There is a lot of badly written code out there as well; with the ADC at the heart of the traffic, it should be able to rewrite code where it makes sense, to ensure an optimized connection to the end user. We should never waste bandwidth going out to the end user, and comments, whitespace and unoptimized images, for instance, are WASTED bandwidth, so having optimization features in place matters.

Maybe I'm hoping for too much, but I already see a trend in where some of the vendors are moving: some are aiming for cloud support, some for identity, and some for the security aspect, so it is going to be interesting to see where the larger players end up.

But anyway, this is my wishlist! What do you think should be a feature of an ADC?

Boom! New free eBook on Citrix NetScaler–NetScaler Gateway Deep Dive!

For those noticing that it has been somewhat quiet on this blog for the last couple of weeks, there is a reason for it! I've been quite busy. As mentioned earlier, I've been in the process of writing an eBook on NetScaler Gateway, and that is for a couple of reasons. Most people use NetScaler for just a plain gateway setup, and I did a quick Twitter poll and got a lot of feedback by email stating that people would love to get more information on those topics.

Now if you want to get this free eBook, which is about 130 pages, it's the same procedure as last time: register with your email address in the sign-up box at the bottom of the post and I will send you a link!


You are of course free to distribute this eBook if you wish; I only have one wish in return, and that is to get some feedback.

  • What’s missing?
  • What’s wrong?
  • What else could I include?

I would also love some feedback if you liked the book; it always makes the writing easier and makes for an even better product eventually. Note that this book is aimed at people implementing these types of features, and at those using it as a reference when setting up Gateway.

Also thanks to my reviewers and those that came with feedback!

Dave Brett
Daniel Wedel
Carl Stalhood
Carl Beherent
Morten Kallesoe

Now the bigger project is to create multiple eBooks on NetScaler, which in the end will make up a whole book. The first project was more of a test to see how it would be received, and I have been overwhelmed by the response. So, as mentioned, this time it's NetScaler Gateway; the next project is the use of NetScaler and AAA, where I intend to focus on:

  • ADFS and NetScaler
  • SAML iDP and SP
  • AAA in general
  • Multifactor authentication
  • nFactor
  • Multilevel authentication
  • Integration with Azure and Office365
  • Multi-level Active Directory authentication, and so on.

And note, before I start on my third project I intend to update my first eBook on optimization, going more in depth.