
Veeam Endpoint Backup FREE generally available!

Veeam Endpoint Backup Free, which allows for backup of physical computers and physical servers, and which I have blogged about earlier:

https://msandbu.wordpress.com/2015/03/06/veeam-endpoint-backup-and-integration-with-veeam-br/

https://msandbu.wordpress.com/2014/12/01/veeam-endpoint-backup-a-new-free-backup-solution-for-computers-and-physical-servers/

has now become generally available from Veeam!

http://www.veeam.com/endpoint-backup-free.html

It can also be integrated with existing Veeam repositories, which enables us to back up physical machines with Endpoint Backup to a Veeam infrastructure!

Optimizing web content with Citrix Netscaler

This post is based on a session I held for a partner in Norway: how can we use Netscaler to optimize web content?

Let's face it, the trends are changing:

* Users are becoming less patient, meaning that they demand that applications and services respond quicker (more than 40% of users drop out if a website takes more than 5–10 seconds to load). Think about how that can affect a web shop or eCommerce site.

* More and more mobile traffic (mobile phones, iPads and laptops, communicating over 3G/4G or WLAN for that matter), and on top of that there is more data being sent across the network as well. Web applications become more and more complex, with more code and more components.

* More demands on availability. Users are demanding that services are available at almost any hour. If something was down for 10 minutes 5–10 years ago, we didn't think much about it, but now?

* More demands for secure communication. It wasn't that long ago that Facebook and Google switched to SSL as the default for their services. With more and more hacking attempts happening online, a certain level of security is required.

So what can Netscaler do in this equation?

* Optimizing content with Front-end optimization, Caching and Compression

With the latest 10.5 release, Citrix has made a good jump into web content optimization, with features like lazy loading of images, HTML comment removal, and minification of JavaScript and inline CSS. Add to that, after the content has been optimized it can be compressed using GZIP or DEFLATE and sent across the wire. (NOTE: most web servers like Apache and IIS support GZIP and DEFLATE, but it is much more efficient to offload this to a dedicated ADC.)
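A quick way to see whether compression actually kicks in is to request a page with an Accept-Encoding header and inspect the response; a minimal PowerShell sketch (the URL is a placeholder):

$wc = New-Object System.Net.WebClient
$wc.Headers["Accept-Encoding"] = "gzip, deflate"        # advertise compression support
$bytes = $wc.DownloadData("https://shop.example.com/")  # placeholder URL
$wc.ResponseHeaders["Content-Encoding"]                 # "gzip" or "deflate" when compression is applied
"{0:N0} bytes on the wire" -f $bytes.Length             # compare with and without the header

Running it with and without the header gives a rough idea of how many bytes compression saves for a given page.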

And by using caching to store frequently accessed data, the Netscaler makes for a good web optimization platform.

* Optimizing based upon endpoints.

With the current trend, more and more users connect from mobile devices that access the internet over a wireless connection, and this calls for a better way to communicate as well. A good example here is TCP congestion control: on a wireless connection you have a higher amount of packet loss, which calls for a congestion algorithm like TCP Westwood, which is much better suited for wireless connections. Features like Multipath TCP (on supported devices) allow for higher throughput as well. And since we can apply different TCP settings to different services, it all becomes much more flexible.
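As a rough sketch of how such a profile could be created over the Nitro REST API (the NSIP, credentials and profile name are placeholders, and the payload is based on the 10.5-era Nitro objects, so treat it as an illustration rather than a verified recipe):

# Log in to the Nitro REST API (placeholder NSIP and credentials)
$ns = "http://192.168.0.10/nitro/v1/config"
$body = @{ login = @{ username = "nsroot"; password = "nsroot" } } | ConvertTo-Json
Invoke-RestMethod -Uri "$ns/login" -Method Post -Body $body -ContentType "application/json" -SessionVariable nitro

# TCP profile tuned for lossy wireless links: Westwood congestion control,
# SACK, window scaling and Multipath TCP
$tcp = @{ nstcpprofile = @{ name = "tcp_wireless"; flavor = "Westwood"; sack = "ENABLED"; ws = "ENABLED"; mptcp = "ENABLED" } } | ConvertTo-Json
Invoke-RestMethod -Uri "$ns/nstcpprofile" -Method Post -Body $tcp -ContentType "application/json" -WebSession $nitro

The profile can then be bound to individual virtual servers, which is what makes the per-service TCP tuning possible.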

* High availability

Using features like load balancing and GSLB allows us to deliver a highly available and scalable solution. And features like AppQoE, which let us prioritize traffic, might be a valuable asset in an eCommerce setting. Consider the scenario where we have a web shop where most of our buying customers come from a regular PC, while most connecting mobile users are just checking the latest offers. If we were ever to reach our traffic peak, it would be useful to prioritize traffic based on the connecting endpoint.

* Secure content

The Netscaler allows us to create specific SSL profiles which we can attach to different services. For instance, older applications which are used by everyone might not have high security requirements, while on the other hand PCI-DSS demands a high level of security. Add to the mix that we can handle many common DDoS attacks at the TCP level and on HTTP. We can also use the Application Firewall, which handles many application-level attacks; with its learning feature it can block users who do not follow the common usage pattern of a website, and we can specify URLs which users are not allowed to access.

So to summarize, the Netscaler can be a good component for optimizing and securing traffic, with a lot of exciting stuff happening in the next year! Stay tuned.

Setting up a secure XenApp environment – Storefront

So this is part two of my securing XenApp environment series; this time I've moved my focus to Storefront. Now, how does Storefront need to be secured?

In most cases, Storefront is the aggregator that allows clients to connect to a Citrix infrastructure. Usually Storefront is located on the internal network and the Netscaler is placed in the DMZ. Even though Storefront sits on the internal network and the firewall and Netscaler do a lot of the security work, there are still things that need to be taken care of on the Storefront.

In many cases users also connect to Storefront directly when they are on the internal network, bypassing the Netscaler entirely. And since Storefront runs on a Windows Server, there are a lot of things to think about.

So where to begin.

1: Set up a base URL with an HTTPS certificate. If you are using an internally signed certificate, make sure that you have a properly set up root CA (which in most cases should be offline), or use a certificate from a public third-party CA. The latter is in many cases useful, because if users connect directly to Storefront their computers might not recognize the internally signed CA.


2: Remove the HTTP binding on the IIS site, to avoid plain HTTP requests.

Use a tool like IIS Crypto to disable older SSL protocols and older ciphers such as RC4 on the IIS server.
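Both steps can also be scripted. A minimal sketch, assuming the site is called "Default Web Site" (the SCHANNEL registry change is what tools like IIS Crypto do under the hood, and it requires a reboot):

# Remove the plain HTTP binding from the Storefront site
Import-Module WebAdministration
Remove-WebBinding -Name "Default Web Site" -Protocol http -Port 80

# Disable SSL 3.0 server-side in SCHANNEL (takes effect after a reboot)
$key = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server"
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name "Enabled" -Value 0 -PropertyType DWord -Force | Out-Null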


You can also define ICA file signing. This allows Citrix Receiver clients which support signed ICA files to verify that the ICA files they receive come from a verified source. http://support.citrix.com/proddocs/topic/dws-storefront-25/dws-configure-conf-ica.html

3: We can also make sure that Citrix Receiver is unable to cache passwords. This can be done by editing authenticate.aspx under C:\inetpub\wwwroot\Citrix\Authentication\Views\ExplicitForms\

and commenting out the following section:

<% Html.RenderPartial("SaveCredentialsRequirement",
              SaveCredentials); %>

so that it reads:

<%-- Html.RenderPartial("SaveCredentialsRequirement",
                SaveCredentials); --%>

4: Force ICA connections to go through the Netscaler using the Optimal Gateway feature of Storefront –> http://support.citrix.com/article/CTX200129. Using this option will also allow you to use Insight to monitor client connections to Citrix and, depending on the Netscaler version, give you some historical data.

And by using Windows pass-through you can have Kerberos authentication to the Storefront and then have the ICA sessions go through the Netscaler –> http://support.citrix.com/article/CTX133982

5: Use SSL in communication with the delivery controllers –> http://support.citrix.com/proddocs/topic/xendesktop-7/cds-mng-cntrlr-ssl.html

6: Install Dynamic IP Restrictions on the IIS server; this stops denial-of-service attempts against Storefront coming from the same IP address.
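On IIS 8 and later, Dynamic IP Restrictions is part of the IP and Domain Restrictions role service and can be configured with the WebAdministration module; a minimal sketch with illustrative thresholds:

# Install the role service and throttle clients by request rate
Install-WindowsFeature Web-IP-Security
Import-Module WebAdministration
$filter = "system.webServer/security/dynamicIpSecurity/denyByRequestRate"
$site = "IIS:\Sites\Default Web Site"
Set-WebConfigurationProperty -PSPath $site -Filter $filter -Name "enabled" -Value $true
Set-WebConfigurationProperty -PSPath $site -Filter $filter -Name "maxRequests" -Value 20
Set-WebConfigurationProperty -PSPath $site -Filter $filter -Name "requestIntervalInMilliseconds" -Value 200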


7: Keep Windows updated and have antivirus software running, with limited access to the server. Also leave the Windows Firewall running and only open the ports necessary for communication with AD, the delivery controllers and the Netscaler.
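As a sketch of the firewall part (the ports and rule names are examples; adjust them to your environment):

# Allow only the traffic Storefront actually needs, for example:
New-NetFirewallRule -DisplayName "Storefront HTTPS (from Netscaler)" -Direction Inbound -Protocol TCP -LocalPort 443 -Action Allow
New-NetFirewallRule -DisplayName "Storefront to Delivery Controllers (XML)" -Direction Outbound -Protocol TCP -RemotePort 443 -Action Allow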

8: Define audit policies to log credential validation, Remote Desktop connections, terminal logons and so on: https://technet.microsoft.com/en-us/library/dn319056.aspx
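These policies can be set with auditpol from an elevated prompt, for example:

auditpol /set /subcategory:"Credential Validation" /success:enable /failure:enable
auditpol /set /subcategory:"Logon" /success:enable /failure:enable
auditpol /set /subcategory:"Logoff" /success:enable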

9: Use the Storefront Web Config GUI from Citrix to define lockout and session timeout values


10: Use a tool like Operations Manager with, for instance, the ComTrade management pack to monitor the Storefront instances, or just the IIS management pack; this gives some good insight into how the IIS server is operating.

11: Make sure that full logging is enabled on the IIS server site.

IIS Logging Configuration for System Center Advisor Log Management

Stay tuned for more; the next part covers the delivery controllers and the VDA agents.

Netscaler and Office365 SAML iDP setup

With Netscaler 10.5, Citrix announced support for SAML Identity Provider as a Netscaler feature. That basically meant that we could, in theory, use the Netscaler as an identity provider for Office365 / Azure AD. I have been trying to reverse engineer the setup, since Citrix hadn't created any documentation for it.

But now! Citrix recently published the setup of Netscaler iDP for Office365: http://support.citrix.com/article/CTX200818

Yay!

On another note, Citrix also released a new build of Netscaler VPX (build 56.12) which fixes the CPU utilization bug on VMware; you can see more in the release notes here –> http://support.citrix.com/article/CTX200818

And there is also a new PCI-DSS report which shows compliance with version 3 of the standard.

Azure Site Recovery Preview setup for VMware

So a couple of days ago, Microsoft announced the preview of Site Recovery for physical and VMware servers. Luckily I was able to get access to the preview pretty early. For those who don't know, this Site Recovery feature is built upon the InMage Scout suite that Microsoft purchased a while back. About six months ago, Microsoft announced the Migration Accelerator, which was the first Microsoft branding of InMage, but now they have built it into the Microsoft Azure portal, although the architecture is still the same. So this blog post will explain how the different components operate, how it all works and how to set it up.

Now there are three different components in an on-premises to Azure replication of virtual machines:

* Configuration Server (which in this case is an Azure VM used for centralized management)

* Master Target (used as a repository and for retention; it receives the replicated data)

* Process Server (the on-premises server which actually does the data moving; it caches, compresses and encrypts the data using a passphrase we create and moves it to the master target, which in turn moves it into Azure)

Now, when connecting this to an on-premises site, the Process Server will push-install the InMage agent on every virtual machine that we want to protect. The InMage agent will then take a VSS snapshot and move the data to the Process Server, which in turn replicates it to the master target.

So when you get access to the preview, create a new site recovery vault


In the dashboard you now have the option to choose between an on-premises site with VMware and physical computers to Azure.


First we have to deploy the configuration server, which is the management plane in Azure. If we click Deploy Configuration Server, a wizard appears which uses a custom image to deploy the configuration server.


This will automatically create an A3 instance running a custom image (note that it might take some time before it appears in the virtual machines pane in Azure). You can check the status in the Jobs pane of the recovery vault.


When it is done you can go into the virtual machines pane and connect to the configuration server using RDP. Once inside the virtual machine, run the setup which is located on the desktop.


Setting up the configuration server component requires the vault registration key (which is downloadable from the Site Recovery dashboard).


Note: when the configuration server component has finished installing it will present you with a passphrase. COPY IT! You will need it to connect the other components.


When this is done the server should appear in Site Recovery under Servers as a configuration server.


Next we need to deploy a master target server. This will also be deployed in Azure (and will be an A4 machine with a lot of disk capacity).


(The virtual machine will have an R: drive, about 1 TB large, where it stores retention data.)

The same goes here: it will generate a virtual machine which will eventually appear in the virtual machines pane in Azure. When it is done, connect to it using RDP; it will start a pre-setup which generates a certificate that allows the Process Server to connect to it using HTTPS.


When running the wizard it will ask for the IP address (internal, on the same vNet) of the configuration server and the passphrase. In my case I had the configuration server on 10.0.0.10 and the master target server on 10.0.0.15. After the master target server has finished deploying, take note of the VIP and the endpoints which are attached to it.


Now that we are done with the Azure parts, we need to install a Process Server. Download the bits from the Azure dashboard and install it on a Windows Server (which has access to vCenter).


Enter the VIP of the cloud service and don't change the port. We also need to enter the passphrase which was generated on the configuration server.

After the installation is complete it will ask you to download the VMware vSphere CLI binaries from VMware.


This is for vSphere 5.1 (but I tested it against a vSphere 5.5 vCenter and it worked fine); the only thing the CLI binaries are used for is discovering virtual machines in vCenter. The rest of the job is done by the agents on the virtual machines.

Now that we are done with the separate components, they should appear in the Azure portal. Go into the recovery vault, then Servers –> Configuration Servers, click on the server and then Properties.


Now we should see that the different servers are working.

Next we need to add a vCenter server from the server dashboard.


Add the credentials and IP address, and choose which Process Server is to be used to connect to the on-premises vCenter server.

After that is done and the vCenter server appears under Servers as connected, you can create a protection group (and then add virtual machines to it).


Specify the thresholds and retention time for the virtual machines that are going to be in the protection group.


Next we need to add virtual machines to the group.


Then choose from vCenter which virtual machines you want to protect.


Then you need to specify which resources are going to be used to replicate the target VM to Azure.


And of course administrator credentials to remotely push the InMage mobility agent to the VMs.


After that the replication will begin


And on the virtual machine you can see that the InMage agent is being installed.


And note that the replication might take some time depending on the bandwidth available.

Setting up Microsoft Azure IaaS Backup

Earlier today Microsoft announced the long-awaited feature which allows us to back up virtual machines directly in Azure. Before today, Microsoft didn't have any solution for backing up a VM other than doing a blob snapshot or using some third-party solution. You can read more about it here –> http://azure.microsoft.com/blog/2015/03/26/azure-backup-announcing-support-for-backup-of-azure-iaas-vms/

The IaaS backup feature is part of the Azure backup vault and is pretty easy to set up. It is important to note that enabling the backup feature requires Azure to install a guest agent in the VM (the VMs therefore need to be online during the registration process), and note that this works per region: the vault and the VMs must be in the same region.

So now when we create a backup vault we get the new preview features (for one, we can also create storage replication policies).


In order to set up a backup routine we first need to set up a policy, which defines when backups are taken.


Next, head over to the dashboard. First the backup vault needs to detect which virtual machines it can protect (so click Discover).


It finds the two virtual machines which are part of the same subscription and in the same region.


NOTE: if one of your virtual machines is offline during the process, the registration job fails (so don't select VMs that are offline, or just turn them on). After the item has been registered to the vault I can see it under protected items in the backup vault.


After this is set up I can see under Jobs which VMs are covered by the policy.


When I force-start a backup job I can see the progress under the Jobs pane.


I can also click on the job and see what is happening.


For this virtual machine, which is a plain vanilla OS image, the backup took about 22 minutes, and a new backup one hour later took about the same amount of time, so it looks like the backups are not incremental.


So when doing a restore I can choose from the different recovery points


And I can define whether to restore the virtual machine to a new cloud service or back to the original VM.


Setting up a secure XenApp environment – Netscaler

Now I had the pleasure of talking about a PCI-DSS compliant XenApp environment with a customer. After working with it for the last couple of days, there is a lot of useful information that I thought I would share.

PCI-DSS compliance is needed for any merchant who accepts credit cards, for instance an e-commerce site or some sort of application that handles card data. This includes all sorts of requirements:

* Different procedures for data shredding and logging

* Access control

* Logging and authorization

Now the current PCI-DSS standard is version 3 –> https://www.pcisecuritystandards.org/documents/PCI_DSS_v3.pdf

The different requirements and assessment procedures can be found in that document. Citrix has also created a document on how to set up a compliant XenApp environment: https://www.citrix.com/content/dam/citrix/en_us/documents/products-solutions/pci-dss-success-achieving-compliance-and-increasing-web-application-availability.pdf and you can find some more information here –> http://www.citrix.com/about/legal/security-compliance/security-standards.html

Instead of making this a pure PCI-DSS post, I decided to make it more of a “how to secure your XenApp environment” post, looking at what kind of options we have and where the weaknesses might be.

Now, a typical environment might look like this.


So let's start by exploring the first part of the Citrix infrastructure, which is the Netscaler. In a typical environment it is located in the DMZ, where the front-end firewall does stateful packet inspection to see what traffic goes back and forth. The best way to do a secure setup of Netscaler is one-armed mode, using routing to the backend resources, with another firewall in between doing deep packet inspection.

The first thing we need to do when setting up, for instance, Netscaler Gateway is to disable SSL 3.0 (an MPX can do TLS 1.1 and TLS 1.2, but with VPX we are limited to TLS 1.0).
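As a sketch, the same hardening can be pushed over the Nitro REST API (the NSIP, credentials and virtual server name are placeholders; treat it as an illustration rather than a verified recipe):

# Log in to the Nitro REST API (placeholder NSIP and credentials)
$ns = "http://192.168.0.10/nitro/v1/config"
$body = @{ login = @{ username = "nsroot"; password = "nsroot" } } | ConvertTo-Json
Invoke-RestMethod -Uri "$ns/login" -Method Post -Body $body -ContentType "application/json" -SessionVariable nitro

# Disable SSL 2.0/3.0 on the Gateway virtual server (placeholder name)
$ssl = @{ sslvserver = @{ vservername = "vpn_gateway"; ssl2 = "DISABLED"; ssl3 = "DISABLED"; tls1 = "ENABLED" } } | ConvertTo-Json
Invoke-RestMethod -Uri "$ns/sslvserver" -Method Put -Body $ssl -ContentType "application/json" -WebSession $nitro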

It is also important to use TRUSTED third-party certificates from known vendors without a history of compromise. Try to avoid SHA-1 based certificates; Citrix now supports SHA-256.

It is also important to allow only secure access to the management interface (since it uses HTTP by default).


This can be done by using SSL profiles which can be attached to the Netscaler Gateway


Also set Deny SSL Renegotiation to NONSECURE. We also need to define some TCP parameters: first, make sure that TCP SYN cookie is enabled, which protects against SYN flood attacks, and that SYN spoof protection is enabled to protect against spoofed SYN packets.
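The renegotiation setting lives in the global SSL parameters; continuing the same kind of Nitro sketch (same placeholder NSIP and credentials as above):

# Log in, then deny nonsecure (non-RFC 5746) SSL renegotiation globally
$ns = "http://192.168.0.10/nitro/v1/config"
$body = @{ login = @{ username = "nsroot"; password = "nsroot" } } | ConvertTo-Json
Invoke-RestMethod -Uri "$ns/login" -Method Post -Body $body -ContentType "application/json" -SessionVariable nitro
$param = @{ sslparameter = @{ denysslreneg = "NONSECURE" } } | ConvertTo-Json
Invoke-RestMethod -Uri "$ns/sslparameter" -Method Put -Body $param -ContentType "application/json" -WebSession $nitro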


Under HTTP profiles make sure that the Netscaler drops invalid HTTP requests


Make sure that ICA proxy migration is enabled; this ensures that only one session at a time is established per user via the Netscaler.


Double-hop can also be an option if we have multiple DMZ zones or a private and an internal zone.

Specify a maximum number of login attempts and a timeout value, to make sure that your services aren't being hammered by a dictionary attack.


Change the password for the nsroot user!!!


Use an encrypted NTP source, which allows for trusted timestamps in the logs (run NTP version 4 or above), and verify that the time zones are configured correctly.


Set up SNMP-based monitoring or Command Center to get monitoring information from the Netscaler, or use syslog as well to get more detailed information. Note that you should use SNMPv3, which provides both authentication and encryption.

Use LDAPS-based authentication against the local Active Directory servers, since plain LDAP is clear text. Use TLS rather than SSL, and make sure that the Netscaler validates the server certificate of the LDAP server.


It also helps to set up two-factor authentication to provide better protection against credential theft. If you are using a two-factor authentication vendor, make sure it uses the CHAP authentication protocol instead of PAP, since CHAP is a much more secure authentication protocol than PAP.

Use net profiles to control traffic flow from a particular SNIP to backend resources (this allows for easier management when setting up firewall rules for access).


Enable ARP spoof validation, so we don't have any forged ARP requests in the zone where the Netscaler is placed (the DMZ).


Use a DNSSEC-based DNS server; this allows for signed and validated responses, which makes it difficult to hijack DNS or run man-in-the-middle attacks on DNS queries. Note that this requires adding a name server with both TCP and UDP enabled. (Netscaler can function both as a DNSSEC-enabled authoritative DNS server and in proxy mode for DNSSEC.)

If you wish to use Netscaler for VPN access towards the first DMZ zone, the first things you need to do are:

1: Update the OPSWAT endpoint analysis library


Create a preauthentication policy to check for updated antivirus software.


The same goes for patch updates.


In most cases, try to use the latest firmware; Citrix releases new Netscaler firmware at least once every three months, containing bug fixes as well as security patches.

Do not activate enhanced authentication feedback, as this enables attackers to learn more about lockout policies and whether a user is nonexistent, locked out, disabled and so on.


Set up STA communication using HTTPS (which requires a valid certificate and that the Netscaler trusts the root CA). You also need to set up Storefront with a valid certificate from a trusted root CA. This should not be an internal PKI root CA, since third-party vendors maintain a much higher level of physical security.

If you for some reason cannot use SSL/TLS-based communication with backend resources, you can use MACsec, which is a layer 2 feature that provides encrypted traffic between nodes on an Ethernet segment.

Azure AD Connect Preview 2 is available

As I've mentioned previously, it looks like the Azure AD team is running on speed or Red Bull; anyway, they are active! Today they announced a new preview of their universal tool Azure AD Connect (which is going to replace DirSync and AAD Sync).

There are a lot of new preview features in this release of Azure AD Connect, like:

* User writeback

* Group writeback

* Device writeback

* Device Sync

* Directory extension attribute sync

So this means that there are more ways to deploy two-way sync. It also makes it easier for hosting providers to onboard existing cloud customers to their on-premises Active Directory.

Now, in order to use these features, we need to make some changes to our on-premises Active Directory.


You can see that the device and group writeback options are disabled until we run the PowerShell cmdlets.

First we need to locate the AdSyncADPrep module, which is located under C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep

Then import the module: Import-Module "C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep\AdSyncAdPrep.psm1"
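The AdPrep cmdlets below take credential objects for the AD connector account and for Azure AD; as a small sketch, these can be collected first:

# Collect the credentials used by the AdSyncAdPrep cmdlets below
$psCreds = Get-Credential -Message "Local AD connector account"
$azureAdCreds = Get-Credential -Message "Azure AD administrator"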

First, to allow sync of Windows 10 devices which are joined to the local Active Directory:

Initialize-ADSyncDomainJoinedComputerSync -ForestName contoso.com -AdConnectorAccount $psCreds -AzureADCredentials $azureAdCreds

AdConnectorAccount (local Active Directory username and password)

AzureADCredentials (Azure AD username and password)

Then we need to enable device writeback for devices that are defined in Azure AD:

Initialize-ADSyncDeviceWriteBack -DomainName region.contoso.com -AdConnectorAccount $psCreds

Then, for user writeback to the local Active Directory:

Initialize-ADSyncUserWriteBack -AdConnectorAccount $psCreds -UserWriteBackContainerDN "OU=CloudUsers,DC=contoso,DC=com"

where the OU defines where the Azure AD users are going to be created in the local Active Directory. We can also configure writeback in the wizard.


General purpose Windows Storage Spaces server

So, after someone's request, I decided to write a blog post about this. We needed a new storage server in our lab environment. We could have bought an all-purpose SAN or NAS, but we decided to use regular Windows Server features with Storage Spaces. Why? Because we needed something that supported our protocol needs (iSCSI, SMB 3 and NFS 4), Microsoft is putting a lot of effort into Storage Spaces, and with the features coming in vNext it becomes even more awesome!

So we specced a Dell R730 with a lot of SAS disks and set up Storage Spaces with mirroring/striping, so we had four disks for each pool and a 10 GbE NIC for each resource.

After we set up each storage pool, we created a virtual disk on it: one intended for iSCSI (VMware), another intended for NFS (XenServer), and lastly a two-disk mirror set up for SMB 3.0. Since this is a lab environment, that one was mainly used for storing virtual machines.
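As a sketch of how one of these pools can be carved out with the storage cmdlets (the pool and disk names, and the disk selection, are illustrative, not our exact build):

# Pool four available SAS disks and carve out a mirrored, two-column virtual disk
$disks = Get-PhysicalDisk -CanPool $true | Select-Object -First 4
$sub = Get-StorageSubSystem | Select-Object -First 1
New-StoragePool -FriendlyName "Pool-iSCSI" -StorageSubSystemUniqueId $sub.UniqueId -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "Pool-iSCSI" -FriendlyName "vd-iscsi" -ResiliencySettingName Mirror -NumberOfColumns 2 -Interleave 64KB -UseMaximumSize

The column count of 2 with a mirror resiliency setting is what gives the mirrored/striped layout mentioned above.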

Everything works like a charm; the one part that was a bit cumbersome was the NFS setup for XenServer, since it requires access by UID/GID.


The performance is what you would expect from a two-way mirrored/striped set on 10k SAS drives (column size set to 2 and interleave at 64 KB).


Since we don't have any SSDs in our setup, we don't get the benefit of tiering, and we therefore see higher latency since we don't have a storage controller cache and so on.

For VMware we just set up PernixData FVP in front of our virtual machines running on ESX; that gives us the performance benefit, but still the capacity that the SAS drives provide.

Now that's a hybrid approach!

Regular VM vs. Pernix FVP write-back vs. FVP write-through

Since I've been working with FVP for the last couple of days, I decided to run a simple test. First I have one VM which is not accelerated in any way; then I move it to write-through mode (meaning that writes are not accelerated, since Pernix needs to commit writes to the datastore); lastly I set it up with write-back (meaning that writes are stored in the cache and then destaged to the datastore).

So just a regular file benchmarking test….


Using write-through we can see that write IOPS are close to what they were before, but reads are accelerated by the cache.


This was the test using write-back; when creating a 2 GB file I got close to 6,000 IOPS at 4K.


The problem with write-back is that writes are stored in the RAM cache (in this case), but Pernix has a feature called fault-tolerant write-back, meaning that all writes are replicated to another host in the cluster.

And note that you can use

Add-PrnxVirtualMachineToFVPCluster -FVPCluster pernix -Name felles-sf -NumWBPeers 1 -WriteBack

to move a virtual machine to a write-back cluster with one write-back peer.
