Setting up Microsoft Azure IaaS Backup

Earlier today Microsoft announced the long-awaited feature that allows us to take backups of virtual machines directly in Azure. Before today Microsoft didn't have any solution for backing up a VM, short of doing a blob snapshot or using some third-party solution. You can read more about it here –> http://azure.microsoft.com/blog/2015/03/26/azure-backup-announcing-support-for-backup-of-azure-iaas-vms/

The IaaS backup feature is part of the Azure backup vault and is pretty easy to set up. It is important to note that enabling the backup feature requires Azure to install a guest agent in the VM (therefore the VMs need to be online during the registration process), and note that this is per region.
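
Since the backup feature rides on the VM guest agent, it can be worth verifying that the agent is flagged on the VM before you register it. A small sketch using the classic (service management) Azure PowerShell module; service and VM names are placeholders:

# Check whether the guest agent is flagged on a classic VM
$vm = Get-AzureVM -ServiceName "mycloudservice" -Name "myvm"
$vm.VM.ProvisionGuestAgent

# If the agent was installed manually inside the guest afterwards,
# set the flag so Azure knows it can talk to the agent on the VM
$vm.VM.ProvisionGuestAgent = $true
Update-AzureVM -ServiceName "mycloudservice" -Name "myvm" -VM $vm.VM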

So when we create a backup vault we now get the new preview features. (We can also create storage replication policies.)

In order to set up a backup routine we first need to create a policy, which defines when backups are taken.

Next, head on over to the dashboard. First the backup vault needs to detect which virtual machines it can protect (so click Discover).

So it finds the two virtual machines that are part of the same subscription and in the same region.

NOTE: If one of your virtual machines is offline during the process, the registration job fails (so don't select VMs that are offline, or just turn them on). After the item has been registered to the vault I can see it under protected items in the backup vault.

Now that this is set up I can see under jobs which VMs are covered by the policy.

So when I force-start a backup job I can see the progress under the jobs pane, and I can also click on the job to see what is happening.

For this virtual machine, which is a plain vanilla OS image, the backup took about 22 minutes, and doing a new backup one hour later took about the same amount of time, so it looks like backups are not incremental.

When doing a restore I can choose between the different recovery points.

And I can define where to restore the virtual machine: to a new cloud service, or back over its original VM.

Setting up a secure XenApp environment – NetScaler

I recently had the pleasure of working on a PCI-DSS compliant XenApp environment for a customer. After working with it for the last couple of days there is a lot of useful information that I thought I would share.

PCI-DSS compliance is required for any merchant that accepts credit cards, for instance an e-commerce site or some sort of payment application. The standard covers all sorts of areas, including:

* Different procedures for data shredding and logging

* Access control

* Logging and authorization

The current PCI-DSS standard is version 3 –> https://www.pcisecuritystandards.org/documents/PCI_DSS_v3.pdf

The different requirements and assessment procedures can be found in that document. Citrix has also created a document on how to set up a compliant XenApp environment https://www.citrix.com/content/dam/citrix/en_us/documents/products-solutions/pci-dss-success-achieving-compliance-and-increasing-web-application-availability.pdf and you can find some more information here –> http://www.citrix.com/about/legal/security-compliance/security-standards.html

Instead of making this a pure PCI-DSS post, I decided to do more of a "how to secure your XenApp environment" walkthrough: what kind of options we have and where the weaknesses might be.

A typical environment might look like this.

So let's start by exploring the first part of the Citrix infrastructure, which is the NetScaler. In a typical environment it is located in the DMZ, where the front-end firewall does stateful packet inspection on the traffic going back and forth. The most secure way to set up NetScaler is in one-armed mode, using routing to reach back-end resources, with another firewall in between doing deep packet inspection.

The first thing we need to do on the NetScaler, for instance when setting up NetScaler Gateway, is to disable SSL 3.0 and the defaults. With MPX we can use TLS 1.1 and TLS 1.2, but with VPX we are limited to TLS 1.0.

It is also important to use TRUSTED third-party certificates from known vendors without any bad history. Try to avoid SHA-1 based certificates; Citrix now supports SHA-256.
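
For reference, both the protocol settings and the certificate binding can be done from the NetScaler CLI. A hedged sketch (the vserver and certkey names are placeholders, and the TLS 1.1/1.2 flags are only honored on platforms and firmware that support them):

set ssl vserver vs_gateway -ssl3 DISABLED -tls1 ENABLED -tls11 ENABLED -tls12 ENABLED
bind ssl vserver vs_gateway -certkeyName gateway_sha256_cert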

It is important to allow secure access to management only (since it uses HTTP by default).

This can be done by using SSL profiles, which can be attached to the NetScaler Gateway.

Also set deny SSL renegotiation to NONSECURE, which blocks insecure renegotiation attempts. We also need to define some TCP parameters: first, make sure that TCP SYN cookies are enabled, which protects against SYN flood attacks, and that SYN spoof protection is enabled to protect against spoofed SYN packets.

Under HTTP profiles, make sure that the NetScaler drops invalid HTTP requests.
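
The renegotiation, TCP and HTTP settings above can also be set from the CLI; a hedged sketch against the built-in default profiles (parameter names from memory, verify against your firmware version):

set ssl parameter -denySSLReneg NONSECURE
set ns tcpProfile nstcp_default_profile -synCookie ENABLED
set ns httpProfile nshttp_default_profile -dropInvalReqs ENABLED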

Make sure that ICA proxy session migration is enabled; this ensures that only one session at a time is established per user via the NetScaler.

Double-hop can also be an option if we have multiple DMZ zones, or a private and an internal zone.

Specify a maximum number of login attempts and a timeout value, to make sure that your services aren't being hammered by a dictionary attack.

Change the password for the nsroot user!!!
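
Both the lockout settings and the default password are quick CLI fixes; a hedged sketch (pick your own values and password):

set aaa parameter -maxLoginAttempts 5 -failedLoginTimeout 15
set system user nsroot <newpassword>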

Use an authenticated NTP source (running version 4 or above), which gives trustworthy timestamps in the logs, and also verify that the time zone is set correctly.
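
On the CLI this is simply (the IP is a placeholder for your NTP source):

add ntp server 10.0.0.5
enable ntp sync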

Set up an SNMP-based monitoring solution or Command Center to get monitoring information from the NetScaler, or use syslog as well to get more detailed information. Note that you should use SNMPv3, which gives both authentication and encryption.

Use LDAPS-based authentication against the local Active Directory servers, since plain LDAP is clear-text. Use TLS rather than SSL, and make sure that the NetScaler validates the server certificate of the LDAP server.
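
A hedged sketch of such an LDAP action on the CLI (IP, base DN and service account are placeholders; secType TLS does StartTLS on port 389, while SSL would use 636):

add authentication ldapAction ldap_ad -serverIP 10.0.0.10 -serverPort 389 -secType TLS -ldapBase "DC=contoso,DC=com" -ldapBindDn svc_ns@contoso.com -ldapBindDnPassword <password> -ldapLoginName sAMAccountName -validateServerCert YES
add authentication ldapPolicy pol_ldap_ad ns_true ldap_ad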

It also helps to set up two-factor authentication to provide better protection against credential theft. If you are using a two-factor authentication vendor, make sure it uses the CHAP authentication protocol instead of PAP, since CHAP is a much more secure authentication protocol than PAP.

Use net profiles to control traffic flow from a particular SNIP to back-end resources. (This allows for easier management when setting up firewall rules for access.)

Enable ARP spoof validation, so that forged ARP requests are rejected on the segment where the NetScaler is placed (the DMZ zone).

Use a DNSSEC-enabled DNS server; this allows for signed and validated responses, which makes it difficult to hijack DNS records or do man-in-the-middle attacks on DNS queries. Note that this requires that you add a name server with both TCP and UDP enabled. (NetScaler can function both as a DNSSEC-enabled authoritative DNS server and in proxy mode for DNSSEC.)

If you wish to use the NetScaler as VPN access into the first DMZ zone, the first things you need to do are:

1: Update the OPSWAT library (used by the endpoint analysis scans)

2: Create a preauthentication policy to check for updated antivirus software

3: The same goes for patch updates

In most cases, try to use the latest firmware; Citrix releases new NetScaler firmware at least once every three months, containing bug fixes as well as security patches.

Do not activate enhanced authentication feedback; it lets attackers learn more about your lockout policies and whether a user is nonexistent, locked out, disabled and so on.

Set up STA communication using HTTPS (which requires a valid certificate and that the NetScaler trusts the root CA). You also need to set up StoreFront using a valid certificate from a trusted root CA. This should not be an internal PKI root CA, since third-party vendors have a much higher degree of physical security.

If you for some reason cannot use SSL/TLS-based communication with back-end resources, you can use MACsec, which is a layer 2 feature that allows for encrypted traffic between nodes on the Ethernet segment.

Azure AD Connect Preview 2 is available

As I've mentioned previously, it looks like the Azure AD team is running on speed or Red Bull; in any case they are active! Today they announced a new preview of their universal tool Azure AD Connect (which is going to replace DirSync and AAD Sync).

There are a lot of new features in preview in this new Azure AD Connect, like:

* User writeback

* Group writeback

* Device writeback

* Device Sync

* Directory extension attribute sync

This means that there are more ways to deploy two-way sync. It also makes it easier for hosting providers to onboard existing cloud partners into their on-premises Active Directory.

In order to use these features we need to make some changes to our on-premises Active Directory.

You can see that the device and group writeback options are disabled until we run the PowerShell wizards.

First we need to locate the AdSyncAdPrep module, which is located under C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep

Then import the module: Import-Module "C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep\AdSyncAdPrep.psm1"
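
The cmdlets below take two sets of credentials; a small sketch of how the variables used in the examples might be populated:

# On-premises AD account with permissions to prepare the forest
$psCreds = Get-Credential -Message "Local Active Directory credentials"

# Azure AD global administrator account
$azureAdCreds = Get-Credential -Message "Azure AD credentials"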

First, to allow sync of Windows 10 devices that are joined to the local Active Directory:

Initialize-ADSyncDomainJoinedComputerSync -ForestName contoso.com -AdConnectorAccount $psCreds -AzureADCredentials $azureAdCreds

AdConnectorAccount (local Active Directory username and password)

AzureADCredentials (Azure AD username and password)

Then we need to define the writeback rule for devices that are registered in Azure AD and enable writeback:

Initialize-ADSyncDeviceWriteBack -DomainName region.contoso.com -AdConnectorAccount $psCreds

Then, for user writeback to the local Active Directory:

Initialize-ADSyncUserWriteBack -AdConnectorAccount $psCreds -UserWriteBackContainerDN "OU=CloudUsers,DC=contoso,DC=com"

The OU defines where the Azure AD users are going to be created in the local Active Directory. We can also enable writeback in the wizard.

General-purpose Windows Storage Spaces server

So after someone's request I decided to write a blog post about this :) We needed a new storage server in our lab environment. We could have bought an all-purpose SAN or NAS, but we decided to use regular Windows Server features with Storage Spaces. Why? Because we needed something that supported our protocol needs (iSCSI, SMB 3 and NFS 4), and Microsoft is putting a lot of effort into Storage Spaces; with the features that are coming in vNext it becomes even more awesome!

So we specced a Dell R730 with a lot of SAS disks and set up Storage Spaces with mirroring/striping, so we had four disks for each pool and a 10 GbE NIC for each resource.

After we set up each storage pool we created a virtual disk on it: one intended for iSCSI (VMware), the other intended for NFS (XenServer), and lastly a two-disk mirror set up for SMB 3.0. Since this is a lab environment it was mainly for hosting virtual machines.
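
As a rough PowerShell sketch of one of those pools (pool and disk names are placeholders; the resiliency and column settings mirror what we used):

# Create a pool from four poolable SAS disks
$disks = Get-PhysicalDisk -CanPool $true | Select-Object -First 4
$ss = Get-StorageSubSystem -FriendlyName "*Storage Spaces*"
New-StoragePool -FriendlyName "Pool-iSCSI" -StorageSubSystemUniqueId $ss.UniqueId -PhysicalDisks $disks

# Mirrored virtual disk striped over two columns with 64 KB interleave
New-VirtualDisk -StoragePoolFriendlyName "Pool-iSCSI" -FriendlyName "VD-iSCSI" -ResiliencySettingName Mirror -NumberOfColumns 2 -Interleave 64KB -UseMaximumSize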

Everything works like a charm. One part that was a bit cumbersome was the NFS setup for XenServer, since it requires access by UID/GID.

The performance is what you would expect from a two-way mirror striped across SAS 10k drives (column count set to 2 and interleave at 64 KB).

Since we don't have any SSDs in our setup we don't get the benefit of tiering, and we therefore see higher latency, since we don't have a storage controller cache and so on.

For VMware we just set up PernixData FVP in front of our virtual machines running on ESXi; that gives us the performance benefit while still using the storage space that the SAS drives provide.

Now that's a hybrid approach :)

Regular VM vs PernixData FVP write-back vs FVP write-through

Since I've been working with FVP for the last couple of days, I decided to do a simple test. First I have one VM which is not accelerated in any way; then I move it to write-through mode (meaning that writes are not accelerated, since Pernix needs to commit every write to the datastore). Lastly I set up write-back (meaning that writes land in the cache first and are then destaged to the datastore).

So just a regular file benchmarking test….

Using write-through we can see that write IOPS are close to what they were before, but reads are accelerated by the cache.

This was the test using write-back; when creating a 2 GB file I was close to 6000 IOPS at 4K.

The problem with write-back is that writes are stored in the RAM cache (in this case), but Pernix has a feature called fault-tolerant write-back, meaning that all writes are replicated to another host in the cluster.

And note that you can use

Add-PrnxVirtualMachineToFVPCluster -FVPCluster pernix -Name felles-sf -NumWBPeers 1 -WriteBack

to move a virtual machine into write-back mode on the cluster.

Playing around with PernixData and PowerShell

Most vendors these days include some sort of PowerShell module with their software. Why? Because it opens up for automation! :)

As I mentioned in one of my previous posts, I've started playing around with PernixData in our lab environment, and I was reading through the different KB articles when I saw that "Hey! Pernix has a PowerShell module installed on the management server".

To get up and running, the easiest way to take a look around at the different cmdlets is by using ISE on the management server.

Or, if you want to do a remote call to it using SMA or Orchestrator, you can import the module by adding the path C:\Windows\system32\WindowsPowerShell\v1.0\Modules\PrnxCli\PrnxCli.dll

Otherwise it's just Import-Module PrnxCli

So what can we do? We can pretty much do anything. (I haven't completely figured out how to create a cluster yet, since the cmdlet seems to want some sort of GUID which I haven't been able to grasp yet.) But I can still define resources, add a VM to be accelerated, and so on.

So what kind of cmdlets can I use? First of all I need to connect to the management server, using an account which has rights on the management server.

Connect-PrnxServer -Password password -UserName username -NameOrIPAddress localhost

Next I can assign resources to my cluster called pernix (from a particular host). I can also pass multiple values here: -Host 1,2,3 and so on.

Add-PrnxRAMToFVPCluster -FVPCluster pernix -Host esxi30 -SizeGB 12

Next I assign a virtual machine to the cluster (-Name indicates the name of the virtual machine), followed by the number of write-back peers (if I choose write-back).

Add-PrnxVirtualMachineToFVPCluster -FVPCluster pernix -Name felles-sf -NumWBPeers 1 -WriteBack

Now I can see the acceleration policy being assigned to the virtual machine. Doing a format-list will give a bit more information about the policy.

Get-PrnxAccelerationPolicy -Name felles-sf | format-list

I can also pull stats directly from PowerShell; this will list the last 5 samples of stats for the virtual machine in a grid view.

Get-PrnxObjectStats -ObjectIdentifier felles-sf -NumberOfSamples 5 | Out-GridView

Works like a charm!

Citrix NetScaler and AAA authentication across different profiles

I was helping a partner the other day with an AAA setup. The issue was that a customer had two different AAA authentication profiles, where one profile was using username + password while the other was using two-factor authentication with RADIUS. What they wanted was for users to first be given access to the first intranet site, and then have to reauthenticate with two-factor authentication if they needed to get to the second zone.

The problem was that this didn't work as intended: if a user had logged in with their username and password, they were also given access to the other content that required two-factor authentication.

This is because within NetScaler there is a concept called authentication level.

So if you have two different authentication profiles which both have an authentication level of 1, it wouldn't matter, because all users would be able to access all content. What we needed to do was assign authentication level 1 to the authentication policy using just username and password, and authentication level 2 to the one using two-factor. A user who is authenticated at level 1 cannot access content that requires a higher authentication level, so when users tried to enter the content that needed two-factor authentication, they had to reauthenticate.

PernixData FVP – what does it actually do?

This has been on my to-do list for a while, but I've now actually been able to play around with PernixData. So what does it do?

It gives us low-latency reads and writes by using server-side flash or RAM. Think of it as a storage tier in between the virtual machine and the datastore. Or, put a better way: write-through (read) and write-back (read & write) caching. (Note that when setting up write-back caching you have an option to define replication partners so you don't lose data.)

Source: Pernixdata.com

This allows for improved performance while offloading burst IO from the underlying datastore (which might be NFS, iSCSI, FC and so on). It is still only supported on VMware, though. The golden part (besides the flash part) is that this is a pure software solution; they say on their website that it takes about 10 minutes to set up, but that's just wrong, I used about 7 minutes max! :) There are two pieces to get it up and running (or three, actually). The first is the host integration software, which is basically a vib that needs to be installed on each host. I did it using SSH and FTP; of course it is possible to use VUM as well.

esxcli software vib install -d <ZIP file name with full path> --no-sig-check
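
Afterwards you can check that the vib actually landed on the host (the exact vib name varies with the FVP version, so just eyeball the list):

esxcli software vib list | grep -i pernix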

Next we need to install the management server, which needs to be a Windows server with a SQL database to store data. It is important to remember that PernixData stores about 0.5 MB of data per VM each day, so size accordingly.

After the management server is installed you have to add the plugin to vCenter (I like the C# client, and I was using 5.5), so after starting vCenter go into Plug-ins –> Manage Plug-ins.

After the plugin was enabled and active I was able to log in to the management console. (Remember though that you need a VMware cluster for the management to work.)

To give myself an easy start I wanted to try out a RAM-based FVP cluster on one of the hosts and give it a spin.

So I created a RAM-based cluster on one of the hosts (choose Create Cluster and then add the type of resource you want, flash or RAM); you can decide yourself how much RAM you want allocated to the FVP cluster. (And no, you don't need 40 GB assigned to an FVP cluster just to play around.) We'll get to that part in a bit.

Then I chose to enable write-back, which means that the content in FVP is not directly in sync with what's on the datastore. This means that if my server happened to go down, that data would be lost since it is stored in RAM; but again, it does give a good write boost since FVP doesn't need to wait for the datastore. So I did a quick test before and after adding my virtual machine to the FVP cluster (without any further tuning, just adding the VM to the cluster). What happens underneath is that Pernix becomes part of the hypervisor, kind of like a filter which the VM's IO has to go through when reading from and writing to the datastore.

HD tune test without Pernix enabled

Then I activated write-back for the VM and added it to the cluster (and yes, you can do this on the fly). Note that this VM is stored on NFS storage with SAS drives.

So what were my test results now?

Reads improved by 20x (well, close to it). This test does not show much write information, so let's try some random access (with FVP enabled):

(with FVP disabled)

We can also see from the graphs inside vCenter that content is being served from the FVP cache. These tests only show a fraction of it, and the effect would of course be much more visible on a production SQL or SharePoint workload, for instance. So stay tuned for more!

How to set up Azure Active Directory applications in the Office365 dashboard

This is something I've only recently become aware of. (It just goes to show how much news is coming to the EMS pack.) This feature is really useful for customers which have Office365 and Azure Active Directory.

If you are familiar with the Azure AD application portal, you know that this is where users can access the applications which we have defined for them.

These might be SaaS applications, other Microsoft Azure AD based applications, or on-premises apps published using the Application Proxy. That's great, but the typical user might have Office365 as their start portal (start page); is there any way to show the apps there instead? Indeed!

Inside the application portal of Office365 you have an option here called My apps.

If users click there, they will see the apps available to them.

And here I can choose to attach the application to the application portal for a user.

Great stuff, since this allows the user to have access to all their applications from within Office365.

How to create a custom RemoteApp image in Microsoft Azure

Finally it's here: the ability to create custom RemoteApp images in Microsoft Azure. Before this we had a long process of creating a custom VM locally, sysprepping it, and running a PowerShell command to upload the VHD file containing all our LOB apps to Azure. Those days are over! :)

Instead we can use this method to create RemoteApp images. Set up a new virtual machine in Azure, choose From Gallery, and there pick the "Windows Server Remote Desktop Session Host" image; this is the one that we use to create our image.

Then we provision the VM. (Note that it is automatically set up as an A3, because of the instance size RemoteApp uses.) Next we can install the applications that we need.

Next we run the ValidateRemoteAppImage PowerShell script on the desktop. (This goes through all the prerequisites for the image.)

Then run Sysprep with the Generalize option and let the VM shut down.
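
From an elevated prompt inside the VM that is:

C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown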

Then do a capture of the virtual machine, so it is stored in the virtual machine library.
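
The capture can also be scripted with the classic Azure PowerShell module instead of using the portal; a hedged sketch with placeholder names:

Save-AzureVMImage -ServiceName "mycloudservice" -Name "myvm" -ImageName "RemoteAppGoldImage" -OSState Generalized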

Then we go into RemoteApp, templates, and choose "Import an image from your virtual machine library".

And we are good to go! :)
