Citrix XenMobile and Microsoft Cloud: happily ever after?

There is no denying that Microsoft is shifting more and more focus to its cloud offerings, with solutions such as Office 365, EMS (Enterprise Mobility Suite) and of course the Azure platform.

EMS, the latest product bundle in the suite, gives customers Intune, Azure Rights Management and Azure Active Directory Premium. So if a customer already has Office 365, their users are already in Azure AD and can easily be attached to EMS for more features.

We are also seeing Microsoft add more and more management capabilities for Office 365 to Intune (which is one of its key points, and something no other vendor has yet). But is this type of management something we need, or is it just there to give the product a "key" selling point?

Now Microsoft has added a lot of MDM capabilities to Intune, but they are nowhere close to the competition yet. Of course they have other offerings in the EMS pack, like Azure Rights Management, which is quite unique in the way it functions and integrates with Azure AD and Office 365. As of 2014 Microsoft isn't even listed in the Gartner Magic Quadrant for EMM (which they stated would be the goal for 2015).

But it will be interesting to see whether Microsoft's strategy is to compete head-to-head with the other vendors, or to offer the basic features and delve further into Azure AD and identity management across clouds and SaaS offerings.

Citrix, on the other hand, has XenMobile, a more complete EMM product suite (MDM and MAM, follow-me data with ShareFile, and so on). Citrix has a number of advantages here, for instance ShareFile compared to OneDrive: ShareFile encrypts data even when it is stored locally, inside a sandboxed application on a mobile device, while the only option OneDrive has is integration with Rights Management Services (although OneDrive does have extensive data encryption in transit and at rest: https://technet.microsoft.com/en-us/library/dn905447.aspx).

Citrix also has MicroVPN functionality and secure browser access running over VPN through Netscaler, while Microsoft's secure browser application is much more limited, restricting which URLs can be opened and what content can be viewed from that browser.

So from a customer side you need to ask yourself:

  • What kind of requirements does my business have?
  • Do I use Office 365 or a regular on-premises setup?
  • Do I need the advanced capabilities?
  • How are my users actually working?

Is there a best of both worlds using both of these technologies ?

Well, yes!

Now of course there are some features that overlap when using Office 365 and EMS + XenMobile, but there are also some features which it is important to be aware of.

* Citrix has ShareFile storage controller templates in Azure (meaning that if a customer has IaaS in Azure, they can set up a ShareFile connector there and use it to publish files and content without using OneDrive).

* Citrix has a ShareFile connector for Office 365 (which lets users use ShareFile almost as a file aggregator between Office 365 and their regular file servers) and allows secure editing directly from ShareFile.

* Citrix XenMobile has considerably better MDM features for Windows Phone than Intune has at the moment.

* Azure AD has built-in SSO access to many of Citrix's web-based applications (ShareFile, GTM, GTA and so on); since the users are already in Azure AD Premium, it can be used to grant access to the different applications using SSO.

* Netscaler and SAML IdP (if we have an on-premises enterprise solution, we can use the Netscaler as a SAML identity provider against Office 365, which allows it to replace ADFS, normally required for full SSO of on-premises AD users to Office 365).

* Office 365 ProPlus with Lync is supported on XenApp/XenDesktop with the Lync optimization pack (note that this is not part of XenMobile but of the Workspace Suite).

* Netscaler and Azure MFA (we can use Azure MFA with Netscaler to publish web-based applications with traffic optimization).

* Netscaler will also soon be available in Azure, which allows for setting up a full Citrix infrastructure there.

Looking ahead, my guess is that Microsoft will move forward with the user collaboration part and become the heart of identity management with Azure AD and rights management, while Citrix will focus more on enabling mobility with solutions like EMM (MAM), follow-me data aggregation and secure access to files and devices. Citrix will also play an important part in hybrid setups, using Netscaler with CloudBridge and as an on-premises identity provider.

Building up a Veeam Cloud Connect infrastructure in Azure

Now before I start, I have already blogged about setting up Veeam Cloud Connect in Azure: https://msandbu.wordpress.com/2014/11/12/veeam-cloud-connect-for-microsoft-azure-walkthrough/

And it's important to remember that Veeam Cloud Connect is only available to Veeam service providers (VCPs, Veeam Cloud Providers).

This is more of a technical overview of the solution.

image

On-premises Veeam customers on version 8 (patch 1 should also be installed) can add a service provider from their console; this can be an IaaS solution running in Azure.

End customers are given a usage quota on the cloud repositories, which determines how much data they can store in their cloud repository.

So how do we set this up in Azure?

  • Use the template from Veeam in the Azure Marketplace (NOTE: this requires a paid subscription in order to be activated), or
  • Download the bits and install it ourselves.

Now when setting this up in Azure there are a few things to take notice of.

Firstly, always check which datacenter is closest to the customers; you can use this third-party website as a reference –> http://www.azurespeed.com/

The first two virtual machines are used as cloud gateway proxies. They handle the incoming data but do not store it. The important thing to take note of here is the bandwidth requirement, which depends on how many customers you have; since the gateways operate as proxies, I would try to keep them as cheap as possible. So let's look at the A-series virtual machines.

image

An A2 gives us 200 Mbps of bandwidth, which should be adequate for gateway proxy performance. On a side note, A-series instances do not have SSD drives, so if we want to set up customers with WAN acceleration we should use the D-series (which has SSD-backed drives on the D:\ partition); this gives a good boost for the digest work of comparing blocks. (See my blog post on IOPS performance in Azure –> https://msandbu.wordpress.com/2013/07/16/azure-and-iops-performance/)

image

There are also some other limits that need to be taken into account, first of all when planning for repositories. Data disks in Azure only support up to 1 TB per disk, meaning that if you need to store more than 1 TB you need to set up Storage Spaces striped across multiple disks (note that Storage Spaces and geo-replication are not supported together).

There is also a cap of 500 IOPS per data disk; this too can be increased somewhat by using Storage Spaces. A regular A4 instance supports a maximum of 16 data disks (look at this reference sheet: https://msdn.microsoft.com/en-us/library/azure/dn197896.aspx). The D- and G-series offer higher IOPS and also allow larger amounts of stored data.
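As a sanity check, the aggregate numbers are simple multiplication. A minimal sketch (the per-disk figures are the limits quoted above; the function name is just for illustration):

```python
# Rough repository sizing from the per-disk limits discussed above:
# 1 TB and ~500 IOPS per data disk, up to 16 data disks on an A4 instance.
def repository_limits(data_disks, tb_per_disk=1, iops_per_disk=500):
    """Aggregate capacity (TB) and IOPS for a storage space striped over all disks."""
    return data_disks * tb_per_disk, data_disks * iops_per_disk

capacity_tb, total_iops = repository_limits(16)  # A4: maximum of 16 data disks
print(capacity_tb, total_iops)  # 16 8000
```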

Then you might think: well, that's not much data, a maximum of 32 TB? It is important to note that this is not a replacement for on-premises backup, and that moving 32 TB of data from Azure back on-premises during an outage may be constrained by the internet bandwidth available at the customer. Just for reference, moving 1000 GB over a 100 Mbps link takes about 23 hours… (If your customers require more data, better bandwidth and lower latency, then Azure is not the right solution.)
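The 23-hour figure can be reproduced with back-of-the-envelope math. A small sketch (assuming decimal gigabytes and a fully saturated link, so real restores take longer):

```python
# Estimate how long it takes to pull backup data over a given link.
def transfer_hours(gigabytes, link_mbps):
    bits = gigabytes * 8 * 1000**3        # decimal GB -> bits
    seconds = bits / (link_mbps * 10**6)  # link speed in megabits per second
    return seconds / 3600

print(round(transfer_hours(1000, 100), 1))  # 22.2 (closer to 23 with protocol overhead)
```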

Lastly, it's important to set up load balancing for our cloud gateways. The cloud gateways already have built-in load balancing and will redirect internally based upon traffic; what we need is to load balance the initial request to a cloud gateway, since after the first connection Veeam keeps a list of the available cloud gateways.

There are two ways to do this in Azure. We can use regular DNS-based round robin, meaning we have multiple A records for the same FQDN; when Veeam connects it downloads all the A records and tries them one after another. The problem with DNS round robin is that it has no health checking, so failover might take more time.
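The client-side behavior can be sketched as follows (the `connect` callback and the addresses are placeholders; Veeam's actual retry logic may differ):

```python
# With DNS round robin the client receives every A record and simply tries
# them one after another -- dead entries cost a connection attempt each,
# because nothing on the server side removes them from DNS.
def first_reachable(a_records, connect):
    for ip in a_records:
        if connect(ip):
            return ip
    return None

# Hypothetical record set where the first gateway is down:
records = ["10.0.0.4", "10.0.0.5"]
alive = {"10.0.0.5"}
print(first_reachable(records, lambda ip: ip in alive))  # 10.0.0.5
```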

We can also use Traffic Manager (Azure's DNS-based load balancing), which can run health probes to check whether the gateways are alive. The downside is that when a DNS request is made to our Traffic Manager DNS alias, it only responds with one IP address and FQDN.

Setting up Traffic Manager in Azure is pretty simple: you just set it up and give it a URL (which then needs to be attached using a CNAME to an FQDN of your choice on your domain).

image 

And note that this requires multiple cloud services (which each have their own public IP address).

image

Now the monitoring part here is a bit tricky, since Traffic Manager by default uses HTTP GET requests to verify the existence of a server, over either HTTP or HTTPS. This requires installing IIS on the gateways and then setting up ACLs on the endpoints so they only respond to Microsoft Traffic Manager.

The instances running as cloud gateways need to be put in an availability set in order to get an SLA from Microsoft. When they are in an availability set, Microsoft knows it can take one of the roles in the set down during maintenance while the other one keeps running.

The repositories can be customer-specific (depending on size) but should not be placed in an availability set (since there are no options for shared storage in the backend to keep them redundant). If a virtual machine is not placed in an availability set, the Azure administrator will get a notice two weeks beforehand, and in most cases maintenance will just cause the virtual machine to reboot once, after which it will be up and running again.

Trouble with Lync 2013 on Windows 10 tech preview

So yesterday, out of the blue, I was about to join a Lync meeting when the client suddenly wouldn't start. It kept telling me the application had stopped responding, so I did a quick restart, which didn't help either, even though it had been working for the last couple of months.

I took a quick look in the event log and saw this:

image

ntdll exports the native Windows API. So why did this suddenly stop working?

Then it got me thinking: what has changed on my computer in the last couple of days? Updates!

I took a quick look at the different Windows updates that had been installed, but uninstalling them gave me no luck. Then I remembered that I had installed a new graphics driver from AMD the other day (released on the 6th of February); when I rolled back to the previous driver from January, Lync started working again.

image

Publishing internal applications with Azure Active Directory Application Proxy

One of the cool features in Azure Active Directory is the integration for all kinds of applications, be they SaaS or internal. It allows us to externally publish applications which are otherwise only accessible from the inside. The internal applications are published to the users and are accessible from the application portal. This also gives us the possibility of having an authentication layer in front of all applications using Azure AD.

So let's go ahead and publish our internal application. Head over to the application pane in Azure AD and choose New, then choose "Publish an application that will be accessible from outside your network".

5

Next I need to enter the information about the internal application and the authentication layer; by default it is published using an external URL.

6

Next I need to give my users access to the application

image

Next I head over to the application dashboard and choose Enable application proxy, then I download the Application Proxy connector (note: it does not require a public IP address).

3

The installation of the connector is pretty simple

1

Then log in with an Azure AD credential.

2

The connector will then automatically register with the Azure AD tenant, and when a user opens the app portal the application will appear.

7

So when a user opens the application, it communicates through the proxy connector (notice the URL).

8

Voila, we have just published an internal application using the Azure AD Application Proxy.

Troubleshooting DNS and LDAP connections on Netscaler

This is something I've struggled with a bit in the past, and I also see it in a couple of forum posts on Citrix, where there is, as always, not much detailed info on how to figure out "WHAT THE HELL IS WRONG WITH THE D*** CONNECTION TO DNS AND LDAP!!!"

So I decided to write this post, since both DNS and LDAP are crucial additions to the Netscaler.

Let's start with DNS. There are a couple of ways to add a DNS server on the Netscaler: UDP, TCP, or TCP and UDP. UDP is the one typically used, since DNS uses UDP by default; TCP is more for zone transfers and so on.

So what happens if we add a DNS server using UDP? Well, the Netscaler is going to ping the DNS server to see if it is alive (so if ICMP is blocked, it will show as DOWN), and it will check every 20 seconds whether it responds on UDP/53. It is also important to note that it uses the SNIP address to communicate with the DNS server.

How can we verify that it can do name lookups? By default, most of the usual tools like nslookup, dig and so on do not work on the Netscaler, since it has its own DNS feature built in; those tools will only query the local DNS, not the external one.

So to test DNS, use the command:

show dns addRec hostname

image

If we switch from UDP to TCP, the Netscaler will use a TCP handshake to verify availability, but that is not going to give us a regular DNS query. So what if we cannot reach the DNS server? Using ping from the shell uses the NSIP by default,

but with ping on the Netscaler we can define a source address (which we can set to be one of the SNIP addresses):

ping ip-address -S source-address

image

If you make a trace file you can also see that it works as it should.

image

If your SNIP does not have access to the DNS server, you need to either define ACLs which allow it to communicate with the DNS server, create a new SNIP which has local access to the DNS server, or define a policy-based route which defines the path the SNIP needs to take in order to reach the DNS servers.

For instance, if I want to set up a specific route for the DNS traffic from my SNIP, I can set up a PBR which looks like this (this policy route applies only to ICMP):

image

After I create the PBR I have to run the command apply pbrs

So that takes care of DNS; what about LDAP? When we set up LDAP servers on the Netscaler, we have a Retrieve Attributes button. Great! Well, almost… the Retrieve Attributes function runs from the endpoint client's IP, not from the Netscaler itself, while the actual LDAP authentication traffic by default originates from the NSIP. So we can use ping to verify network connectivity, and we can also use telnet to verify connectivity, since telnet originates from the NSIP.

Shell –> Telnet

open 192.168.60.1 389 (this tries to connect to the LDAP port, 389)

image

How can you verify it works? If it says Connected, the port is available; if it hangs at Trying…, the port is not available. If you want, you can make the Netscaler use the SNIP instead of the NSIP; this can be done by setting up a load-balanced AD server vServer and pointing the LDAP authentication policy to that vServer.
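The telnet check above boils down to "does a TCP handshake to port 389 succeed?", which can also be scripted. A minimal sketch (the host and port in the comment are the placeholders from the example above):

```python
import socket

# Returns True if a TCP handshake to host:port completes, mirroring what the
# telnet "Connected" vs. "Trying..." output tells you.
def port_open(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("192.168.60.1", 389) -> True if the LDAP port answers
```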

How to use AppQoE on Netscaler

Over the last couple of days I've been doing a bit of research on Netscaler and prioritizing traffic based upon where the endpoint is coming from. This is where AppQoE comes in. AppQoE is simply a combination of several existing features in one: HTTP DoS protection, Priority Queuing, and SureConnect.

So what if we have a vServer which is getting pounded by traffic; how do we prioritize it? In AppQoE we have two things: policies and actions.

Let's say that we want to divide traffic into two priority groups: one for Android-based devices and another for Windows Phone devices, where Android devices are given high priority and Windows Phones lower priority. There are four priorities we can define in AppQoE: HIGH, NORMAL, LOW and LOWEST, and the Netscaler processes traffic from top to bottom, meaning that Android traffic is prioritized over Windows Phone traffic.
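The top-to-bottom processing can be modeled as a toy example (the four band names come from the text above; the request names and the drain function are purely illustrative, not how the Netscaler implements it):

```python
from heapq import heappush, heappop
from itertools import count

# Toy model of the four AppQoE priority bands: requests are drained strictly
# from the highest band down, FIFO within each band, so HIGH (Android in the
# example above) is always served before LOW (Windows Phone).
BAND = {"HIGH": 0, "NORMAL": 1, "LOW": 2, "LOWEST": 3}

def drain(requests):
    heap, arrival = [], count()  # arrival counter keeps FIFO order inside a band
    for band, name in requests:
        heappush(heap, (BAND[band], next(arrival), name))
    served = []
    while heap:
        _, _, name = heappop(heap)
        served.append(name)
    return served

print(drain([("LOW", "wp-1"), ("HIGH", "android-1"), ("LOW", "wp-2"), ("HIGH", "android-2")]))
# ['android-1', 'android-2', 'wp-1', 'wp-2']
```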

So I have an example expression here for android devices.

image

My action looks like this

image

What it does is basically bind a HIGH priority tag to my AppQoE policy, so there is not much work for me here. Next I have to create an AppQoE policy for my Windows Phone users.

image

My AppQoE action looks like this. It is important to note that the policy queue depth defines how many connections need to be active before traffic is moved to LOWEST priority. I also have to define max connections; if there are requests beyond the maximum number of connections, I have the Netscaler display a custom wait page. (I chose NS, because then I can use custom HTML code on the Netscaler; if I choose ACS, I can point to another web server instead.)

image

Now I can attach this policy to a vServer. (Note that SureConnect cannot be enabled on a vServer that uses AppQoE.)

image

Now stay tuned for how to set this up with HTTP DoS protection in order to protect from HTTP attacks as well with AppQoE.

Why was Windows Server vNext set to 2016?

When Microsoft announced that the next version of Windows Server was not going to ship until 2016, I, like many others, became frustrated. Why would they want to wait so long? Dammit, I want access to all the cool new features they have built so far!

Now, I've spoken to a lot of IT pros over the last couple of days, and of course we are a selfish bunch: we want the tech now! Then it got me thinking: do we want quick releases or longer release cycles?

Well, let's go back a bit. Microsoft is soon ending support for Windows Server 2003, which is going to hit a lot of businesses hard, since many still have servers running 2003 (according to HP, close to 11 million servers –> http://www.channelregister.co.uk/2014/05/02/windows_server_2003_hp/). Is it because enterprises are too lazy to migrate? I mean, Microsoft has made a lot of tools available to aid them in the migration process. But in some cases a migration is not even possible (and yes, there are some of those), and it is also a complicated process and a long way to go for many enterprises.

We can also see that the majority is using Windows Server 2008 on Azure as well

Why is that? Isn't Windows Server 2012 better than 2008? Doesn't it have a better feature set as well as more scalability?

After thinking about it for a while, I believe the problem is twofold. First, all enterprises have IT folks who already have too much to do on a day-to-day basis; a migration process takes a long time, and therefore many enterprises are stuck on older platforms longer than they want to be (hence the numbers of 2008 and 2003 deployments). My other guess is that Microsoft wants to make vNext a more compelling offer by including more new features in the latest and greatest. Why should I buy a new car if it's only a bit faster than my old one?

But still, I'm looking forward to Windows Server vNext! Stay tuned for Microsoft Ignite! :)

NIC 2015 is over for now…

The Nordic Infrastructure Conference for 2015 is now over. Boy, fun times! I had a lot of interesting chats with a lot of knowledgeable people, and also had the opportunity to meet Ben Armstrong (Program Manager for Hyper-V) and gain some insight into where Microsoft is going with Hyper-V.

I also had two sessions:

1: Azure RemoteApp

2: Virtually delivered High performance 3d graphics


I've also heard rumors that the recordings are coming out shortly, so stay tuned. Again, a GREAT conference! Looking forward to next year.

Using Netscaler to block IP addresses based upon pattern sets and the URL responder

Ever wanted a simple way to block pesky IP addresses which are sending a lot of unwanted traffic to your web servers? Of course there is the possibility of using ACLs, but they become cumbersome if we need to add every IP address to an ACL (they also get unmanageable).

Another option we have is to use pattern sets. Pattern sets are basically an index of strings which we can evaluate against an expression to see whether a value falls within the set or not.

First we need to create the pattern set, under AppExpert –> Pattern Sets (this set will include all of the IP addresses that we don't want accessing our websites).

image

Next we need an expression which can extract the strings and evaluate them against a rule. Go into AppExpert –> Expressions –> Advanced Expressions.

Create a new expression called CIP, where the expression looks like this

image

This will allow us to add a string in the expression when creating a responder policy. Next, go into the URL responder and create a new policy.

image

Now the magic lies within the expression. Since we created a custom saved expression we can use that; it basically says: if the client source IP equals any string in the pattern set nonoIPS, then RESET the connection.

Then we have to bind the policy either to a vServer or globally, and voila: the next time we want to block an IP address we just have to update the pattern set. But do not mistake this for an ACL; it only blocks HTTP access.
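Conceptually the responder policy is just set membership. A small sketch (the set name mirrors the pattern set used above; the addresses are reserved documentation ranges, not real entries):

```python
from ipaddress import ip_address

# The pattern set is a set of strings, and the expression asks whether the
# client source IP matches any entry -- if it does, the connection is reset.
nonoIPS = {"203.0.113.7", "198.51.100.23"}

def should_reset(client_ip, blocklist=nonoIPS):
    return str(ip_address(client_ip)) in blocklist  # ip_address validates the input

print(should_reset("203.0.113.7"))  # True  -> RESET the connection
print(should_reset("192.0.2.1"))    # False -> let the request through
```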
