Setting up a secure XenApp environment – Storefront

So this is part two of my securing XenApp environment series; this time I’ve moved my focus to Storefront. Now, how does Storefront need to be secured?

In most cases, Storefront is the aggregator that allows clients to connect to a Citrix infrastructure. In most cases Storefront is located on the internal network and the Netscaler is placed in the DMZ. Even if Storefront is located on the internal network and the firewall and Netscaler do a lot of the security work, there are still things that need to be taken care of on the Storefront.

In many cases users also connect to Storefront directly when they are on the internal network, bypassing the Netscaler entirely. And since Storefront runs on a Windows Server, there are a lot of things to think about.

So where to begin.

1: Set up a base URL with an HTTPS certificate. If you are using an internally signed certificate, make sure that you have a properly set up root CA (which in most cases should be offline), or use a certificate from a public third-party CA. The latter is often useful because if users connect directly to Storefront, their computers might not recognize the internally signed CA.

image

2: Remove the HTTP binding on the IIS site to avoid plain HTTP requests.
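If you prefer to script this, a minimal sketch using the WebAdministration module could look like the following (the site name “Default Web Site” is only an example, use the IIS site that hosts your Storefront, and the HTTPS binding still needs a certificate assigned in IIS Manager):

# Sketch: drop the plain HTTP binding and make sure an HTTPS binding exists
Import-Module WebAdministration

# Remove the HTTP binding on port 80
Remove-WebBinding -Name "Default Web Site" -Protocol http -Port 80

# Add an HTTPS binding on 443 if it is not already there
if (-not (Get-WebBinding -Name "Default Web Site" -Protocol https)) {
    New-WebBinding -Name "Default Web Site" -Protocol https -Port 443
}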

Use a tool like IIS Crypto to disable older SSL protocols and weak RC4 ciphers on the IIS server.

image
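IIS Crypto just flips Schannel registry values under the hood; if you would rather script part of it, a hedged sketch that disables SSL 3.0 for incoming connections looks roughly like this (which protocols and ciphers you keep should follow your own policy, and the change needs a reboot):

# Sketch: disable SSL 3.0 server-side via the Schannel registry keys (run as administrator)
$path = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server'
New-Item -Path $path -Force | Out-Null
New-ItemProperty -Path $path -Name 'Enabled' -Value 0 -PropertyType DWord -Force | Out-Null
New-ItemProperty -Path $path -Name 'DisabledByDefault' -Value 1 -PropertyType DWord -Force | Out-Null
# The RC4 cipher keys live under ...\SCHANNEL\Ciphers; IIS Crypto is the easier way to manage those.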

You can also define ICA file signing. This allows Citrix Receiver clients that support signed ICA files to verify that the ICA files they receive come from a trusted source: http://support.citrix.com/proddocs/topic/dws-storefront-25/dws-configure-conf-ica.html

3: We can also prevent Citrix Receiver from caching passwords. This can be done by editing authenticate.aspx under C:\inetpub\wwwroot\Citrix\Authentication\Views\ExplicitForms\

and commenting out the following section, changing this:

<% Html.RenderPartial("SaveCredentialsRequirement",
              SaveCredentials); %>

to this:

<%-- Html.RenderPartial("SaveCredentialsRequirement",
                SaveCredentials); --%>

4: Force ICA connections to go through the Netscaler using the Optimal Gateway feature of Storefront –> http://support.citrix.com/article/CTX200129. Using this option will also allow you to use Insight to monitor client connections to Citrix and, depending on the Netscaler version, give you some historical data.

And by using Windows pass-through authentication you can have Kerberos authenticate to Storefront and then have the ICA sessions go through the Netscaler –> http://support.citrix.com/article/CTX133982

5: Use SSL for communication with the Delivery Controllers –> http://support.citrix.com/proddocs/topic/xendesktop-7/cds-mng-cntrlr-ssl.html

6: Install Dynamic IP Restrictions on the IIS server; this helps stop denial-of-service attempts against Storefront coming from the same IP address.

 IIS fig4
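On Server 2012 / IIS 8 the dynamic IP restriction functionality is part of the IP and Domain Restrictions role service, and the settings live under system.webServer/security/dynamicIpSecurity. A rough sketch (the thresholds are examples only, tune them to your own traffic):

# Sketch: install the feature and enable concurrent-request and rate-based blocking at server level
Install-WindowsFeature Web-IP-Security

Import-Module WebAdministration
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.webServer/security/dynamicIpSecurity/denyByConcurrentRequests' `
    -Name 'enabled' -Value $true
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.webServer/security/dynamicIpSecurity/denyByRequestRate' `
    -Name 'enabled' -Value $true
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.webServer/security/dynamicIpSecurity/denyByRequestRate' `
    -Name 'maxRequests' -Value 50   # example threshold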

7: Keep Windows updated and have antivirus software running, with limited access to the server. Also keep the Windows Firewall running and only open the necessary ports to allow communication with AD, the Delivery Controllers and the Netscaler.
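As a sketch, the inbound side could be restricted to HTTPS from the Netscaler SNIP and your internal client subnet with rules like these (the addresses below are placeholders, replace them with your own):

# Sketch: only allow inbound HTTPS to Storefront from the Netscaler SNIP and the internal client subnet
New-NetFirewallRule -DisplayName 'Storefront HTTPS from Netscaler' -Direction Inbound `
    -Protocol TCP -LocalPort 443 -RemoteAddress 10.0.1.10 -Action Allow
New-NetFirewallRule -DisplayName 'Storefront HTTPS from internal clients' -Direction Inbound `
    -Protocol TCP -LocalPort 443 -RemoteAddress 10.0.2.0/24 -Action Allow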

8: Define audit policies to log credential validation, Remote Desktop connections, terminal logons and so on: https://technet.microsoft.com/en-us/library/dn319056.aspx
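The audit subcategories can be switched on with auditpol.exe; a small example for a couple of the categories mentioned above (you can list the exact subcategory names with auditpol /list /subcategory:*):

# Sketch: enable success/failure auditing for credential validation and logon events
auditpol /set /subcategory:"Credential Validation" /success:enable /failure:enable
auditpol /set /subcategory:"Logon" /success:enable /failure:enable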

9: Use the Storefront Web Config GUI from Citrix to define lockout and session timeout values

image

10: Use a tool like Operations Manager with, for instance, the ComTrade management pack to monitor the Storefront instances, or just the IIS management pack; this gives some good insight into how the IIS server is operating.

11: Make sure that full logging is enabled on the IIS server site.

IIS Logging Configuration for System Center Advisor Log Management
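If you want to script the logging settings as well, they sit under the site’s logFile element; a sketch that sets W3C logging with a broad field selection for the site hosting Storefront (the site name is an example):

# Sketch: switch the Storefront site to W3C logging with extended fields
Import-Module WebAdministration
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.applicationHost/sites/site[@name="Default Web Site"]/logFile' `
    -Name 'logFormat' -Value 'W3C'
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.applicationHost/sites/site[@name="Default Web Site"]/logFile' `
    -Name 'logExtFileFlags' -Value 'Date,Time,ClientIP,UserName,ServerIP,Method,UriStem,UriQuery,HttpStatus,Win32Status,TimeTaken,UserAgent,Referer'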

Stay tuned for more; the next part covers the Delivery Controllers and the VDA agents.

Netscaler and Office365 SAML iDP setup

With Netscaler 10.5, Citrix announced support for the SAML Identity Provider feature on the Netscaler. That basically meant that we could, in theory, use the Netscaler as an identity provider for Office365 / Azure AD. I have been trying to reverse engineer the setup, since Citrix hadn’t created any documentation for it.

But now Citrix has published the Netscaler iDP setup for Office365: http://support.citrix.com/article/CTX200818

Yay!

On another note, Citrix also released a new build of Netscaler VPX (build 56.12) which fixes the CPU utilization bug on Vmware; you can see more in the release notes here –> http://support.citrix.com/article/CTX200818

And there is also a new PCI DSS report which shows compliance for version 3.

Azure Site Recovery Preview setup for Vmware

So a couple of days ago, Microsoft announced the preview of Site Recovery for physical and Vmware servers. Luckily I was able to get access to the preview pretty early. For those who don’t know, the Site Recovery feature is built upon the InMage Scout suite that Microsoft purchased a while back. About six months ago Microsoft announced the Migration Accelerator, which was the first Microsoft branding of InMage, but now they have built it into the Microsoft Azure portal; the architecture is still the same. So this blog post will explain how the different components operate, how it all works and how to set it up.

Now there are three different components in an on-premises to Azure replication of virtual machines:

* Configuration Server (which in this case is an Azure VM used for centralized management)

* Master Target (used as a repository and for retention; receives the replicated data)

* Process Server (this is the on-premises server which actually does the data moving; it caches data, compresses and encrypts it using a passphrase we create, and moves it to the Master Target, which in turn moves it to Azure)

Now, when connecting this to an on-premises site, the Process Server will push-install the InMage agent on every virtual machine that we want to protect. The InMage agent will then do a VSS snapshot and move the data to the Process Server, which will in turn replicate the data to the Master Target.

So when you get access to the preview, create a new site recovery vault

image

In the dashboard you now have the option to choose between On-premise site with Vmware and Physical computer to Azure

image

First we have to deploy the Configuration Server, which is the management plane in Azure. If we click Deploy Configuration Server, a wizard will appear which uses a custom image to deploy a Configuration Server.

image

This will automatically create an A3 instance running a custom image (note that it might take some time before it appears in the virtual machine pane in Azure). You can check the status in the Jobs pane of the recovery vault.

image

When it is done, you can go into the virtual machine pane and connect to the Configuration Server using RDP. Once inside the virtual machine, run the setup which is located on the desktop.

image

Setting up the Configuration Server component requires the vault registration key (which is downloadable from the Site Recovery dashboard).

image

Note that when the Configuration Server component has finished installing, it will present you with a passphrase. COPY IT!! You will use it to connect the other components.

image

Now, when this is done, the server should appear in Site Recovery under Servers as a Configuration Server.

image

Next we need to deploy a Master Target server. This will also be deployed in Azure (and will be an A4 machine with a lot of disk capacity).

image

(The virtual machine will have an R: drive, about 1 TB in size, where it stores retention data.)

The same goes here: it will generate a virtual machine which will eventually appear in the virtual machine pane in Azure. When it is done, connect to it using RDP; it will start a pre-setup which generates a certificate that allows the Process Server to connect to it using HTTPS.

image

When running the wizard, it will ask for the IP address (internal, on the same vNet) of the Configuration Server and the passphrase. In my case I had the Configuration Server on 10.0.0.10 and the Master Target server on 10.0.0.15. After the Master Target server has finished deploying, take note of the VIP and the endpoints that are attached to it.

image

Now that we are done with the Azure parts, we need to install a Process Server. Download the bits from the Azure dashboard and install it on a Windows Server (which has access to vCenter).

image

Enter the VIP of the cloud service and don’t change the port. We also need to enter the passphrase which was generated on the Configuration Server.

After the installation is complete, it will ask you to download the Vmware vSphere CLI binaries from Vmware.

image

Now, this is for 5.1 (but I tested it against a vSphere 5.5 vCenter and it worked fine). The only thing it uses the CLI binaries for is to discover virtual machines on vCenter; the rest of the job is done using agents on the virtual machines.

Now that we are done with the separate components, they should appear in the Azure portal. Go into the recovery vault, Servers –> Configuration Servers, click on the server and open its properties.

image

Now we should see that the different servers are working. image

Next we need to add a vCenter server from the server dashboard.

image

Add the credentials and IP address and choose which Process Server is to be used to connect to the on-premises vCenter server.

After that is done and the vCenter appears under Servers as connected, you can create a protection group (and then add virtual machines to it).

image

image

Specify the thresholds and retention time for the virtual machines that are going to be in the protection group.

image

Next we need to add virtual machines to the group.

image

Then choose from vCenter which virtual machines you want to protect.

image

Then you need to specify which resources are going to be used to replicate the target VM to Azure.

image

And of course administrator credentials to remotely push the InMage mobility agent to the VM.

image

After that the replication will begin

image

image

And you can see on the virtual machine that the InMage agent is being installed.

image

And note that the replication might take some time depending on the type of bandwidth available.

Setting up Microsoft Azure IaaS Backup

Earlier today Microsoft announced the long-awaited feature which allows us to take backups of virtual machines directly in Azure. Before today, Microsoft didn’t have any solution to back up a VM other than doing a blob snapshot or using some third-party solution. You can read more about it here –> http://azure.microsoft.com/blog/2015/03/26/azure-backup-announcing-support-for-backup-of-azure-iaas-vms/

The IaaS backup feature is part of the Azure backup vault, and is pretty easy to set up. It is important to note that enabling the backup feature requires Azure to install a guest agent in the VM (so the VMs need to be online during the registration process), and note that this is per region.

So now, when we create a backup vault, we get the new preview features (firstly, we can also configure storage replication policies).

1

Now, in order to set up a backup routine, we first need to set up a policy which defines when to take backups.

2

Next, head on over to the dashboard. First the backup vault needs to detect which virtual machines it can protect (so click Discover).

3

So it finds the two virtual machines which are part of the same subscription and in the same region.

4

NOTE: If one of your virtual machines is offline during the process, the registration job fails (so don’t select VMs that are offline, or just turn them on). After the item has been registered to the vault, I can see it under protected items in the backup vault.

 

6

Now after this is set up I can see under Jobs which VMs are covered by the policy.

7

So when I force start a backup job I can see the progress under the jobs pane

7

I can also click on the JOB and see what is happening.

9

So for this virtual machine, which is a plain vanilla OS image, the backup took about 22 minutes, and doing a new backup one hour later took about the same amount of time; it looks like the backups are not incremental.

image

So when doing a restore I can choose from the different recovery points

image

And I can define where to restore the virtual machine: to a new cloud service or back to its original VM.

image

Setting up a secure XenApp environment – Netscaler

Now, I recently had the pleasure of working on a PCI-DSS compliant XenApp environment for a customer. After working with it for the last couple of days, there is a lot of useful information that I thought I would share.

PCI-DSS compliance is needed for any merchant who accepts credit cards, for instance an e-commerce site or some sort of application. It covers all sorts of requirements, such as:

* Different procedures for data shredding and logging

* Access control

* Logging and authorization

Now the current PCI-DSS standard is in version 3 –> https://www.pcisecuritystandards.org/documents/PCI_DSS_v3.pdf

The different requirements and assessment procedures can be found in that document. Citrix has also created a document on how to set up a compliant XenApp environment https://www.citrix.com/content/dam/citrix/en_us/documents/products-solutions/pci-dss-success-achieving-compliance-and-increasing-web-application-availability.pdf and you can also find some more information here –> http://www.citrix.com/about/legal/security-compliance/security-standards.html

Now, instead of making this a pure PCI-DSS post, I decided to do more of a “how to secure your XenApp environment” post, covering what kinds of options we have and where the weaknesses might be.

A typical environment might look like this.

image

So let’s start by exploring the first part of the Citrix infrastructure, which is the Netscaler. In a typical environment it is located in the DMZ, where the front-end firewall does stateful packet inspection of the traffic that goes back and forth. The most secure way to set up the Netscaler is in one-armed mode, using routing to reach backend resources, with another firewall in between doing deep packet inspection.

The first thing we need to do on the Netscaler, when setting up Netscaler Gateway for instance, is to disable SSL 3.0, which is enabled by default (an MPX can do TLS 1.1 and TLS 1.2, but with a VPX we are limited to TLS 1.0).
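A quick way to verify from a client that the gateway really stopped accepting SSL 3.0 is a small PowerShell probe like this (gateway.example.com is a placeholder; it only tests the handshake and deliberately skips certificate validation):

# Sketch: test which SSL/TLS protocols a gateway vserver accepts
function Test-SslProtocol {
    param(
        [string]$ComputerName,
        [System.Security.Authentication.SslProtocols]$Protocol,
        [int]$Port = 443
    )
    $client = New-Object System.Net.Sockets.TcpClient($ComputerName, $Port)
    try {
        # accept any certificate - we are only interested in the protocol handshake
        $ssl = New-Object System.Net.Security.SslStream($client.GetStream(), $false, { $true })
        $ssl.AuthenticateAsClient($ComputerName, $null, $Protocol, $false)
        Write-Output "$Protocol accepted by $ComputerName"
    } catch {
        Write-Output "$Protocol rejected by $ComputerName"
    } finally {
        $client.Close()
    }
}

Test-SslProtocol -ComputerName gateway.example.com -Protocol Ssl3
Test-SslProtocol -ComputerName gateway.example.com -Protocol Tls12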

It is also important to use TRUSTED third-party certificates from known vendors without any bad history. Try to avoid SHA-1 based certificates; Citrix now supports SHA-256.

It is important to set up secure (HTTPS-only) access to the management interface, since it uses HTTP by default.

image

This can be done by using SSL profiles which can be attached to the Netscaler Gateway

image

Also set Deny SSL Renegotiation to NONSECURE, so that insecure renegotiation is refused. We also need to define some TCP parameters: first, make sure that TCP SYN Cookie is enabled, which protects against SYN flood attacks, and that SYN Spoof Protection is enabled to protect against spoofed SYN packets.

image

Under HTTP profiles make sure that the Netscaler drops invalid HTTP requests

image

Make sure that ICA proxy session migration is enabled; this makes sure that only one session at a time is established per user via the Netscaler.

image

Double-hop can also be an option if we have multiple DMZ zones or a private and an internal zone.

Specify max login attempts and a timeout value to make sure that your services aren’t being hammered by a dictionary attack.

image

Change the password for the default nsroot user!!!

image

Use an authenticated NTP source (running version 4 and above), which allows for reliable timestamping when logging, and also verify that the time zones are configured correctly.

image

Set up an SNMP-based monitoring solution or Command Center to get monitoring information from the Netscaler, or use Syslog as well to get more detailed information. Note that you should use SNMPv3, which gives both authentication and encryption.

Use LDAPS-based authentication against the local Active Directory servers, since plain LDAP is clear-text. Use TLS rather than SSL, and make sure that the Netscaler validates the server certificate of the LDAP server.

image
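Before pointing the LDAP action at port 636 it can be worth confirming that the domain controller actually answers on LDAPS; a small client-side sketch (dc01.contoso.local is a placeholder for your own DC):

# Sketch: check that a domain controller accepts LDAPS before the Netscaler uses it
$dc = 'dc01.contoso.local'

# Is port 636 reachable at all?
Test-NetConnection -ComputerName $dc -Port 636

# Try an actual SSL-protected bind
Add-Type -AssemblyName System.DirectoryServices.Protocols
$id   = New-Object System.DirectoryServices.Protocols.LdapDirectoryIdentifier($dc, 636)
$ldap = New-Object System.DirectoryServices.Protocols.LdapConnection($id)
$ldap.SessionOptions.SecureSocketLayer = $true
$ldap.Bind((Get-Credential).GetNetworkCredential())   # prompts for a test account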

It also helps to set up two-factor authentication to provide better protection against credential theft. If you are using a two-factor authentication vendor, make sure that it uses the CHAP authentication protocol instead of PAP, since CHAP is a much more secure authentication protocol than PAP.

Use Net Profiles to control traffic flow from a particular SNIP to backend resources (this allows for easier management when setting up firewall rules for access).

image

Enable ARP spoof validation, so we don’t have any forged ARP requests in the network segment where the Netscaler is placed (the DMZ zone).

image

Use a DNSSEC-based DNS server; this allows for signed and validated responses, which makes it difficult to hijack DNS or do man-in-the-middle attacks on DNS queries. Note that this requires that you add a name server with both TCP and UDP enabled. (The Netscaler can function both as a DNSSEC-enabled authoritative DNS server and in proxy mode for DNSSEC.)

If you wish to use the Netscaler for VPN access towards the first DMZ zone, the first things you need to do are:

1: Update the OPSWAT (endpoint analysis) library

image

Create a preauthentication policy to check for updated antivirus software

image

Same goes for Patch updates

image

In most cases, try to use the latest firmware; Citrix releases new Netscaler firmware at least once every three months, containing bug fixes as well as security patches.

Do not activate enhanced authentication feedback; this enables attackers to learn more about lockout policies and whether a user is non-existent, locked out, disabled and so on.

image

Set up STA communication using HTTPS (which requires a valid certificate and that the Netscaler trusts the root CA). You also need to set up Storefront with a valid certificate from a trusted root CA. This should preferably not be an internal PKI root CA, since third-party vendors have a much higher degree of physical security.

If you for some reason cannot use SSL/TLS-based communication with backend resources, you can use MACsec, which is a layer 2 feature that allows for encrypted traffic between nodes on Ethernet.

Azure AD Connect Preview 2 is available

As I’ve mentioned previously, it looks like the Azure AD team is running on speed or Red Bull; anyway, they are active! Today they announced a new preview of their universal tool Azure AD Connect (which is going to replace DirSync and AAD Sync).

There are a lot of new features in preview in this new Azure AD Connect, like:

* User writeback

* Group writeback

* Device writeback

* Device Sync

* Directory extension attribute sync

So this means that there are more ways to deploy two-way sync. It also makes it easier for hosting providers to onboard existing cloud customers into their on-premises Active Directory.

Now, in order to use these features, we need to make some changes to our on-premises Active Directory.

image

You can see that the device and group writeback options are disabled until we run the PowerShell preparation cmdlets.

First we need to locate the AdSyncAdPrep module, which is located under C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep

Then import the module: Import-Module "C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep\AdSyncAdPrep.psm1"
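The cmdlets below expect credential objects; a small sketch of how the $psCreds and $azureAdCreds variables used in the examples could be created (the account names in the prompts are placeholders):

# Sketch: build the credential objects used by the AdSyncAdPrep cmdlets below
$psCreds      = Get-Credential -Message 'On-premises AD connector account (CONTOSO\admin)'
$azureAdCreds = Get-Credential -Message 'Azure AD global administrator (admin@contoso.onmicrosoft.com)'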

First, to allow sync of Windows 10 devices which are joined to the local Active Directory:

Initialize-ADSyncDomainJoinedComputerSync -ForestName contoso.com -AdConnectorAccount $psCreds -AzureADCredentials $azureAdCreds

AdConnectorAccount (Local active directory username and password)

AzureADcredentials (Azure AD username and password)

Then we need to define the writeback rule for devices that are registered in Azure AD and enable device writeback:

Initialize-ADSyncDeviceWriteBack -DomainName region.contoso.com -AdConnectorAccount $psCreds

Then, for user writeback to the local Active Directory:

Initialize-ADSyncUserWriteBack -AdConnectorAccount $psCreds -UserWriteBackContainerDN "OU=CloudUsers,DC=contoso,DC=com"

The OU defines where the Azure AD users are going to be created in the local Active Directory. We can also enable writeback in the wizard.

image

General purpose Windows Storage Spaces server

So after someone’s request I decided to write a blog post about this. :) We needed a new storage server in our lab environment. Now, we could have bought an all-purpose SAN or NAS, but we decided to use regular Windows Server features with Storage Spaces. Why? Because we needed something that supported our protocol needs (iSCSI, SMB 3 and NFS 4), Microsoft is putting a lot of effort into Storage Spaces, and with the features that are coming in vNext it becomes even more awesome!

So we specced a Dell R730 with a lot of SAS disks and set up Storage Spaces with mirroring/striping, so we had four disks for each pool and a 10 GbE NIC for each resource.

So after we set up each storage pool, we set up virtual disks: one intended for iSCSI (Vmware) and another intended for NFS (XenServer); lastly we had one two-disk mirror which was set up for SMB 3.0. Since this is a lab environment, it was mainly for hosting virtual machines.
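For reference, a pool and a mirrored virtual disk like the ones described here can be carved out with the built-in storage cmdlets; this is only a sketch with made-up names, but it matches the two-column, 64 KB interleave layout mentioned below:

# Sketch: create a four-disk pool and a two-column mirrored virtual disk (64 KB interleave)
$disks = Get-PhysicalDisk -CanPool $true | Select-Object -First 4
$subsystem = (Get-StorageSubSystem -FriendlyName '*Storage Spaces*').FriendlyName
New-StoragePool -FriendlyName 'Pool-iSCSI' -StorageSubSystemFriendlyName $subsystem -PhysicalDisks $disks

New-VirtualDisk -StoragePoolFriendlyName 'Pool-iSCSI' -FriendlyName 'vDisk-iSCSI' `
    -ResiliencySettingName Mirror -NumberOfColumns 2 -Interleave 64KB -UseMaximumSize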

Everything works like a charm; the one part that was a bit cumbersome was the NFS setup for XenServer, since it requires access by UID/GID.

image

The performance is what you would expect from a two-way mirror/stripe set on 10k SAS drives (column size set to 2 and interleave at 64 KB).

image

Since we don’t have any SSDs in our setup we don’t get the benefit of tiering, and we also have higher latency since we don’t have a storage controller cache and so on.

Now, for Vmware we just set up PernixData FVP in front of our virtual machines running on ESXi; that gives us the performance benefit while still giving us the capacity that the SAS drives provide.

Now that’s a hybrid approach :)

Regular VM vs Pernix FVP write back vs FVP write through

So since I’ve been working with FVP the last couple of days, I decided to do a simple test. First I have one VM which is not accelerated in any way; then I move it to write-through mode (meaning that writes are not accelerated, since Pernix needs to commit writes to the datastore). Last, I decided to set up write-back (meaning that writes are stored in the cache and then destaged back to the datastore).

So just a regular file benchmarking test….

image

Now, using write-through, we can see that IOPS on writes are close to the same as before, but the reads are accelerated by the cache.

image

This was the test using write-back; when creating a 2 GB file I got close to 6,000 IOPS at 4K.

image

The problem with write-back is that writes are stored in the RAM cache (in this case), but Pernix has a feature called fault-tolerant write back, meaning that all writes are replicated to another host in the cluster.

And note you can use Add-PrnxVirtualMachineToFVPCluster -FVPCluster pernix -Name felles-sf -NumWBPeers 1 -WriteBack

to move a virtual machine to a writeback cluster.

Playing around with PernixData and PowerShell

Most vendors these days include some sort of PowerShell module for their software. Why? Because it opens up for automation! :)

As I mentioned in one of my previous posts, I’ve started playing around with PernixData in our lab environment, and I was reading through the different KB articles when I saw that, hey, Pernix has a PowerShell module installed on the management server!

So, to get up and running, the easiest way to take a look at the different cmdlets is by using ISE on the management server.

Or, if you want to do a remote call to it using SMA or Orchestrator, you can import the module by adding the path to C:\Windows\system32\WindowsPowerShell\v1.0\Modules\PrnxCli\PrnxCli.dll

Otherwise it’s just Import-Module PrnxCli

So what can we do? Pretty much anything (I haven’t completely figured out how to create a cluster yet, since the cmdlet seems to want some sort of GUID which I haven’t been able to work out), but I can still define resources, add a VM to be accelerated and so on.

So what kind of cmdlets can I use? First I need to connect to the management server, using an account which has rights on it.

Connect-PrnxServer -Password password -UserName username -NameOrIPAddress localhost

Next I can assign resources to my cluster called pernix (from a particular host); I can also specify multiple hosts here, -Host 1,2,3 and so on.

Add-PrnxRAMToFVPCluster -FVPCluster pernix -Host esxi30 -SizeGB 12

Next I assign a virtual machine to the cluster (-Name indicates the name of the virtual machine), then the number of write-back peers (if I choose write-back).

Add-PrnxVirtualMachineToFVPCluster -FVPCluster pernix -Name felles-sf -NumWBPeers 1 -WriteBack

Now I can see the acceleration policy being assigned to the virtual machine. Doing a Format-List gives a bit more information about the policy.

Get-PrnxAccelerationPolicy -Name felles-sf | format-list

I can also show stats directly from PowerShell; this will list the stats for the virtual machine for the last 5 samples in a grid view.

Get-PrnxObjectStats -ObjectIdentifier felles-sf -NumberOfSamples 5 | Out-GridView

image

Works like a charm!

Citrix Netscaler and AAA authentication across different profiles

So I was helping a partner the other day with an AAA setup. The issue was that a customer had two different AAA authentication profiles, where one profile was using username + password while the other was using two-factor authentication with RADIUS. What they wanted was that users were first given access to the first intranet site and then needed to reauthenticate with two-factor authentication if they needed to get to the second zone.

The problem was that this didn’t work as they intended: if a user first logged in with their username and password, they were also given access to the other content that required two-factor authentication.

This is because within the Netscaler there is a concept called Authentication Level.

image

So if you have two different authentication profiles which both have an authentication level of 1, it wouldn’t matter, because all users would then be able to access all content. What we needed to do was assign an authentication level of 1 to the authentication policy using just username and password, and then assign authentication level 2 to the one using two-factor. A user that is authenticated and granted level 1 cannot access content that requires a higher authentication level, so when the users tried to enter the content that needed two-factor authentication they had to reauthenticate.
