A couple of days ago I was involved in a case where ICA sessions were suddenly disconnected and users were unable to reconnect. The setup was a simple ICA-proxy Access Gateway running the latest build (126), and there were no error messages on the StoreFront server.
After involving Citrix support, they recommended that we disable AppFlow for the Access Gateway (since this deployment used NetScaler Insight to monitor ICA sessions), and suddenly things started to work again.
Now, I knew I had seen this issue somewhere on Twitter before, and a quick tweet confirmed that someone else had seen it as well.
So apparently, using AppFlow together with session reliability is a no-go!
If anyone has managed to test this with 10.5, please give me some feedback on whether it has been fixed!
So there has been some buzz regarding the latest release of NetScaler 10.5 (codenamed Tagma), which was supposed to be the death of the Java GUI within NetScaler. Not quite there yet...
So what has Citrix improved or added in this release? Well, quite a lot. Citrix states that they have added over 100 new features in this release. Beta 1 has just been released to partners, and beta 2 is on its way, coming mid-May.
The later betas will bring more templates for app and load balancing, but let us focus on the news that has arrived now:
- HTML5 based GUI
- NITRO SDK for Python
- NITRO for File Operations
- NITRO for ZebOS system
- GSLB Static proximity sync
- SSL configuration Profiles
- CNAME record caching
- Multiple Port CS
- AAA Session Stickiness
- Kerberos Performance
- Jumbo Frames
- Link Redundancy
- TCP BIC and CUBIC
- SPDYv3 Gateway
- SDX Manageability
- Front End Optimization
- Insight Center Enhancements
First off is the GUI, which is now mostly pure HTML5 and quite snappy! I would say that about 80% of the GUI is now HTML5; some features, such as running a trace, still use Java (I'm guessing this is something that will be fixed in a later release).
So what is new under the licensing part? We can see some new features appearing here, such as Integrated Disk Caching and RISE (which is part of the Cisco platform).
There are also two new “features” within traffic optimization:
* Front End Optimization (which transforms data sent back to the user, for instance by converting image files)
* Content Accelerator (which is used for integration with Citrix ByteMobile)
Setting up a new NetScaler Gateway is also a lot easier, since we don't need the Java part anymore.
Support for LLDP is also here, an information exchange protocol much like CDP (from Cisco). Here is a comparison between the old GUI and the new GUI:
There is also a list of new monitors which are built in.
There is also support for LACP on interfaces, which allows you to team NICs.
Citrix has also added some basic wizards which allow for easier setup against XenDesktop / ShareFile and the like.
There are also SSL profiles and DTLS profiles.
We also have support for jumbo frames, which allows for up to 9000 bytes of payload instead of 1500.
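As a rough sketch of how this might be configured from the NetScaler CLI (the interface name here is just an example, and the exact syntax and MTU limits should be verified against the 10.5 documentation):

```shell
# Raise the MTU on interface 10/1 to allow jumbo frames (example interface)
set interface 10/1 -mtu 9000

# Verify the new MTU on the interface
show interface 10/1
```
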
And one thing that is missing is EdgeSight monitoring from NetScaler, which looks like it has been removed for good. One thing I didn't find in the beta but is mentioned in the video is support for Oracle (most likely coming in a later beta). So these are just my findings in the latest beta; I'll update when the next beta arrives. Looks like we have much to look forward to!
Well, the title might be a bit misleading, but it's true! For those who haven't seen the news yet, you can read about it here –> http://www.dell.com/learn/us/en/uscorp1/press-releases/2014-06-24-dell-software-defined-storage-portfolio
Now, a little background story here: Nutanix has been in the market with its hardware-based appliances for a year or two now. The appliances run the Nutanix OS, which is where the magic happens. Now, I've talked to some customers who swear by hardware from the regular hardware providers, since it's familiar and you know how to handle the warranty/support/upgrades and so on, and therefore Nutanix might not be a good match for them.
Now, as of today, Dell and Nutanix announced a strategic partnership which will allow Dell to ship its Dell XC Series of Web-scale Converged Appliances, which combine compute, storage and networking with the Nutanix software on top, allowing for the best of both worlds.
For those who have read the recently published Gartner report, “Magic Quadrant for Integrated Systems”, both Dell and Nutanix are listed in the quadrant, each with its own focus areas, and with this new partnership they will fill each other's “weak” spots and both come out stronger.
So make sure to follow this space going forward!
So as of today, Azure Active Directory Premium is available as a trial for all users. For those who aren't aware, Azure Active Directory Premium is, in short, identity and access management for the cloud, so it's an extension of the previous features, which include:
* custom domains
* users and groups
* directory integration with local Active Directory
* MFA (which I have blogged about previously http://bit.ly/1lkQ0NO)
The Premium part allows for single sign-on and multi-factor authentication to any cloud application.
Azure Active Directory Premium edition is a paid offering of Azure AD and includes the following features:
- Company branding – To make the end user experience even better, you can add your company logo and color schemes to your organization’s Sign In and Access Panel pages. Once you’ve added your logo, you also have the option to add localized versions of the logo for different languages and locales. For more information, see Add company branding to your Sign In and Access Panel pages.
- Group-based application access – Use groups to provision users and assign user access in bulk to over 1800 SaaS applications. These groups can either be created solely in the cloud or you can leverage existing groups that have been synced in from your on-premises Active Directory. For more information, see Assign access for a group to a SaaS application.
- Self-service password reset – Azure has always provided self-service password reset for directory administrators. With Azure AD Premium, you can now further reduce helpdesk calls whenever your users forget their password by giving all users in your directory the capability to reset their password using the same sign in experience they have for Office 365. For more information, see Self-service password reset for users.
- Self-service group management – Azure AD Premium simplifies day-to-day administration of groups by enabling users to create groups, request access to other groups, delegate group ownership so others can approve requests and maintain their group’s memberships. For more information, see Self-service group management for users.
- Advanced security reports and alerts – Monitor and protect access to your cloud applications by viewing detailed logs showing more advanced anomalies and inconsistent access pattern reports. Advanced reports are machine learning-based and can help you gain new insights to improve access security and respond to potential threats. For more information, see View your access and usage reports.
- Multi-Factor Authentication – Multi-Factor Authentication is now included with Premium and can help you to secure access to on-premises applications (VPN, RADIUS, etc.), Azure, Microsoft Online Services like Office 365 and Dynamics CRM Online, and over 1200 Non-MS Cloud services preintegrated with Azure AD. Simply enable Multi-Factor Authentication for Azure AD identities, and users will be prompted to set up additional verification the next time they sign in. For more information, see Adding Multi-Factor Authentication to Azure Active Directory.
- Forefront Identity Manager (FIM) – Premium comes with the option to grant rights to use a FIM server (and CALs) in your on-premises network to support any combination of Hybrid Identity solutions. This is a great option if you have a variation of on-premises directories and databases that you want to sync directly to Azure AD. There is no limit on the number of FIM servers you can use, however, FIM CALs are granted based on the allocation of an Azure AD premium user license. For more information, see Deploy FIM 2010 R2.
- Enterprise SLA of 99.9% – We guarantee at least 99.9% availability of the Azure Active Directory Premium service. For more information, see Active Directory Premium SLA
- More features coming soon – The following premium features are currently in public preview and will be added soon:
- Password reset with write-back to on-premises directories
- Azure AD Sync bi-directional synchronization
- Azure AD Application Proxy
Now, in order to activate Premium in your Azure account, you need to have an existing directory in place; then you can go into the directory and create a Premium trial.
Then you have to activate the trial.
After Premium is enabled, you have to license users for the feature. In the trial we are given 100 licenses to use.
Note that we now have other panes here as well that we can use to configure the single sign-on experience. In an ideal scenario we would have an Active Directory catalog synced and a verified public domain, but I'm in vacation mode, so I'm going to show how to use a cloud-only user and set up SSO to different cloud applications.
If we go into Users, we can see all the users located in the cloud directory, whether they are synced from a local AD or are Microsoft accounts.
So we have some users in place. If we go into the Configure pane, we have the option to customize the access page that users use for SSO to web applications. We also have the option to enable users to do password reset (NOTE: this requires that users have either a phone or an alternative email address defined); this can also be combined with password write-back to on-premises AD. http://msdn.microsoft.com/en-us/library/azure/dn688249.aspx
Now we want to add some SaaS applications for the test: go into Applications and choose Add.
There are three ways to add an application: add a regular web application or a native client application, choose an application from the gallery (which at the moment consists of over 1000 different SaaS applications), or publish an internal application outside of our network (this uses Microsoft Azure AD Application Proxy).
So in our case we are going to choose an application from the gallery. Now, I have already added some applications to the list here, and some applications have different capabilities than others. For instance, the Salesforce application can provision users automatically after a dirsync, while Twitter and Yammer do not have this capability.
There are also two types of SSO for each application: we can either use ADFS (federation-based SSO) or password-based SSO.
It is important to note that with password-based SSO, when a user clicks an application from the access portal, an installed browser plug-in populates the username and password fields of the application; it also has some requirements.
Configuring password-based single sign-on enables the users in your organization to be automatically signed in to a third-party SaaS application by Windows Azure AD using the user account information from the third-party SaaS application. When you enable this feature, Windows Azure AD collects and securely stores the user account information and the related password.
Password-based SSO relies on a browser extension to securely retrieve the application and user specific information from Windows Azure AD and apply it to the service. Most third-party SaaS applications that are supported by Windows Azure AD support this feature.
For password-based SSO, the end user’s browsers can be:
- IE 8, IE9 and IE10 on Windows 7 or later
- Chrome on Windows 7 or later or MacOS X or later
Now, if I go back to the application list and click on an application, I usually have two options: defining SSO options and choosing who has access.
NOTE: for Salesforce I also have the ability to configure automatic user provisioning.
Now go into Assign Users and choose a user in the directory. When using password-based SSO, you get the option of entering the credentials on behalf of the users (they are also able to enter this information in the access portal themselves).
After this is done and you have assigned users to different applications, they can open the access portal (which can be found here –> http://myapps.microsoft.com ). After I log in here with my username, I am able to SSO to the applications I click on from the portal (NOTE that this requires a browser plug-in to be installed). Microsoft has also created a wiki containing best practices for accessing SSO applications.
And voilà, I have my own little password manager. From a user perspective, I have the option to change credentials from this portal, and I can also change the password for my main user (which is an Outlook user in this scenario). This is a huge step in how to manage access to users and applications with a little touch of the cloud.
So it's that time of year again and my summer holiday is starting, so there will be little news here for the next month. On the other hand, I received news yesterday that I passed my CCI networking assessment and am now able to deliver NetScaler training courses.
So in the months ahead, my time is going to be spent training myself on data center networking, VMware (yeah, really!) and IT security.
So until next time!
So today Veeam announced the latest feature in version 8, something I've been wanting for some time now: Cloud Connect –> http://go.veeam.com/v8-cloud-connect
The purpose of this feature is to allow service providers to offer BaaS (Backup as a Service) to customers, letting them connect Veeam B&R to a service provider and use the service provider as a remote repository.
This requires that the service provider adds a Cloud Gateway to their infrastructure; customers can then add the provider directly from the console in v8.
And since this uses SSL, it allows for true multi-tenancy without VPN appliances between the provider and the customers.
Really looking forward to this feature! I will be writing more about it when it is released.
Many releases of the NetScaler VPX (starting with builds after 9.2) have had some minor issues with additional latency when running on VMware.
This has been a known issue for quite some time, and of course there has been a workaround available as well.
NetScaler VPX Appliance
- Issue ID 0326388: In sparse traffic conditions on a NetScaler VPX virtual appliance installed on VMware ESX, some latency might be observed in releases after 9.3 as compared to release 9.2. If this latency is not acceptable, you can change a setting on the appliance. At the shell prompt, type:
Perform a warm reboot for the above change to take effect. To have the new setting automatically applied every time the virtual appliance starts, add the following command to the /nsconfig/nsbefore.sh file:
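For reference, the workaround from the release notes was roughly the following. This is recalled from memory and should be treated as an assumption; verify the exact command against the official Citrix release notes for your build before applying it:

```shell
# Workaround for Issue ID 0326388 (recalled from memory - verify first!)
# Run at the NetScaler shell prompt, then perform a warm reboot:
sysctl netscaler.ns_vpx_halt_method=2

# To have the setting applied automatically at every start,
# append the same line to /nsconfig/nsbefore.sh:
echo "sysctl netscaler.ns_vpx_halt_method=2" >> /nsconfig/nsbefore.sh
```
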
But! I am happy to say that this has been fixed in the latest build (126.12), so we no longer need to run the command-line workaround to fix the latency issue.
Something I've been planning to write for a while, but with all the stuff happening lately it's hard to keep track. So, here is a question that comes up now and then: how does the NetScaler handle route entries?
Now, a NetScaler often sits between many different networks, with one leg in the DMZ, one in the internal zone, and others elsewhere. Some deployments might be two-armed with more networks attached to the NetScaler, and some require it to use only one VLAN because of security requirements.
Now, what decides which network the NetScaler uses to communicate with the backend servers? Since the NetScaler is an L3 device, it uses IP and routing tables to determine where to go.
When you deploy a NetScaler, one of the requirements is to set up a default gateway and a subnet IP. When you add a default gateway, a route entry for it is added automatically. The route entry looks like this:
This essentially says: all traffic I have no information about will be sent to my default gateway, which is 192.168.88.1.
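On the command line, that automatically created default route corresponds to something like this (a sketch using the addresses from this example, not output copied from a live box):

```shell
# The all-zeroes network/netmask means "any destination I have no
# more specific route for" - send it to 192.168.88.1
add route 0.0.0.0 0.0.0.0 192.168.88.1

# List the routing table to verify
show route
```
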
So if my NetScaler sits on the IP 192.168.88.2 with a /24 prefix and needs to get in touch with 192.168.89.2, it will go through the default gateway.
Also, when you add a subnet IP, another route entry is added automatically, with the subnet IP itself listed as the gateway for reaching that subnet. This NetScaler has two SNIPs: one in the 192.168.88.0/24 network and another in the 192.168.31.0/24 network.
So all traffic destined for the 192.168.31.0 network is routed via the 192.168.31.127 SNIP. Another thing to note is that these route entries have a /24 prefix, meaning the NetScaler will use 192.168.31.127 whenever it needs to get in touch with an IP within that range.
Does this mean the NetScaler might have multiple paths to other subnets, since my default gateway might also have access to the 31 and the 88 networks? Like other layer 3 devices (Cisco, for instance), the NetScaler looks at the prefix and decides which route is closest to the target, and it then uses the cost to get to the remote location as the deciding factor. (Thanks to Andrew for that!)
Now, the default gateway route has a cost of 0.
The SNIPs, however, have no cost value at all,
meaning that they are the preferred paths. If I have multiple SNIPs with access to a back-end service, a conflict might also arise; this can be resolved using net profiles, which allow you to define which source IP address should be used to connect to the back-end services.
Attach Net-Profile to a service
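A minimal sketch of creating a net profile and attaching it to a service from the CLI (the profile and service names here are made up for illustration; the SNIP is from the example above):

```shell
# Create a net profile that pins the source IP to a specific SNIP
add netProfile np-backend -srcIP 192.168.31.127

# Attach the profile to a service so connections to that back-end
# are always sourced from the chosen SNIP
set service svc-web1 -netProfile np-backend
```
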
But what if you are required to use a one-armed deployment and need access to several backend networks for the services/probes to work properly?
Then you need to add a new static route, which might look like this. This static route entry says the following: “If you need to access the 192.168.89.0/24 network, you need to contact 192.168.88.1.”
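In CLI terms, that static route would look something like this (a sketch using the addresses from the example):

```shell
# Reach the 192.168.89.0/24 network via the router at 192.168.88.1
add route 192.168.89.0 255.255.255.0 192.168.88.1
```
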
This new route will be listed as a static route and will have the same cost as the default gateway, but since this gateway sits closer to the targets in the 89 network, it will be preferred over the default gateway.
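To make the selection logic above concrete, here is a small Python sketch of the lookup order: the longest matching prefix wins, and cost breaks ties. This is a simplified model for illustration, not NetScaler's actual implementation, and the routes mirror the example addresses used in this post.

```python
import ipaddress

# Candidate routes: (network, gateway, cost). Mirrors the example above:
# a default route, a directly connected SNIP route, and the static route.
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.88.1", 0),          # default gateway
    (ipaddress.ip_network("192.168.31.0/24"), "192.168.31.127", 0),  # SNIP (direct)
    (ipaddress.ip_network("192.168.89.0/24"), "192.168.88.1", 0),    # static route
]

def next_hop(dest: str) -> str:
    """Pick the gateway: longest matching prefix wins, lowest cost breaks ties."""
    ip = ipaddress.ip_address(dest)
    matches = [r for r in routes if ip in r[0]]
    # Prefer the largest prefix length; among equals, the lowest cost
    net, gw, cost = max(matches, key=lambda r: (r[0].prefixlen, -r[2]))
    return gw

print(next_hop("192.168.31.50"))  # -> 192.168.31.127 (direct SNIP route)
print(next_hop("192.168.89.2"))   # -> 192.168.88.1 (static /24 beats the /0 default)
print(next_hop("8.8.8.8"))        # -> 192.168.88.1 (falls back to the default route)
```

The point of the sketch is simply that the /24 static route is always more specific than the /0 default route, which is why the NetScaler prefers it even though both carry the same cost.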
So hopefully this clears up some confusion for people out there!
So during TechEd 2014 a couple of weeks ago, Microsoft announced Azure RemoteApp, which for my part was the most exciting thing announced at TechEd. The idea behind it is to be able to publish “regular Windows applications” from Microsoft Azure directly to end users using RDP.
Now, with the recent release of RDP clients for Android and iOS, this allows customers to access their applications in Microsoft Azure from any device. (Note that the RDP clients for Android and iOS were recently updated, so look for an update.)
Now, there is no pricing info published for the service yet, since it is currently in beta, but some info has been released:
1: Customers do not need to pay for bandwidth (going in and out).
2: Customers do not need to pay for additional licenses (for instance RDS CALs), just for the applications they need published.
3: Microsoft Office 2013 will most likely be a part of it.
4: Windows Server 2012 R2 is the only OS supported by Azure RemoteApp, meaning the applications you want published need to work on 2012 R2.
5: If customers want to add their own applications, they need to set up a VPN connection for it.
6: Each user has 50 GB of storage in RemoteApp.
Now, we can also upload our own template image. There are some requirements that need to be in place:
- The template image must be created using Windows Server 2012 R2 with Remote Desktop Session Host and the Desktop Experience feature installed.
- Create a VHD template file. VHDX files aren’t supported.
- Format the VHD as NTFS.
- Don’t include an unattended xml config file in the sysprep image.
- Don’t use VM mode to create a sysprep generalized image.
Now, here is what a RemoteApp service looks like. Users will be able to access the service (during the preview) at https://www.remoteapp.windowsazure.com/. After I log in with my user, I can start the following Office apps (which are included in the service).
Now, the RemoteApp client is running RDP underneath.
But RemoteApp does not leverage UDP; it just uses RD Gateway to tunnel the connections to a backend VM.
This is indeed going to be an interesting feature! It just needs a bit of polish, maybe leveraging UDP as well, and hopefully a published pricing calculator for RemoteApp.
Now, I've been working with Veeam for a while, and I've seen that in most cases when a backup job fails (or a SureBackup job fails), it is most often not Veeam's fault.
Veeam is a powerful product, but it depends on a lot of external components functioning properly in order to do its job right. For instance, in order to back up from a VMware host, you need a VMware license in place that allows Veeam to access the VMware VADP APIs.
If not, Veeam can't back up your virtual machines running on VMware.
Also, in order to do incremental backups properly, Veeam depends on CBT working correctly on the hypervisor. So the real purpose of this blog post is mostly for my own part: keeping a list of the problems/errors that I come across in Veeam and what the fix for each is.
Now, in most cases when running jobs, the job indicator will give a good pinpoint of what the problem is. If not, look into the Veeam logs, which are located under C:\ProgramData\Veeam\Logs (ProgramData is a hidden folder). It is also possible to generate support logs directly from the Veeam console –> http://www.veeam.com/kb1832
Issue #1: Cannot use CBT when running backup jobs
Cannot use CBT: Soap fault. A specified parameter was not correct. deviceKeyDetail: '<InvalidArgumentFault xmlns="urn:internalvim25" xsi:type="InvalidArgument"><invalidProperty>deviceKey</invalidProperty></InvalidArgumentFault>', endpoint: ''
If CBT is for some reason not available and not being used, Veeam falls back to its own filter. Veeam will then process the entire VM, compare the blocks of the VM against the backup on its own to see which blocks have changed, and copy only the changed blocks to the repository. This makes processing take a lot longer. In order to fix this, you need to reset CBT on the guest VM. This can be done by following the instructions here –> http://www.veeam.com/kb1113, and there is a corresponding article for Hyper-V CBT: http://www.veeam.com/kb1881
Issue #2: SureBackup jobs fail with error code 10061 when running application tests. This most likely happens when a firewall configured on the guest VM only allows specific VMs. I have also seen this when a guest VM is in a restarting state. If you do not have a guest VM firewall active, restarting the guest VM and then running a new backup should allow the SureBackup job to run successfully.
Issue #3: WAN accelerator fails to install. This might happen if a previous Veeam install has failed on the server. When you try to install the WAN accelerator, the setup just stops for no apparent reason: something sets the install path of the WAN cache folder to the wrong drive. You need to go into the registry of the VM and change the default paths as shown here –> http://www.veeam.com/kb1828
Issue #4: Backup of guest VMs running on a Hyper-V server with Windows Server 2012 R2 Update 1. This is a known issue which requires an update from Microsoft –> http://www.veeam.com/kb1863
Issue #5: Application-aware image processing skipped on a Microsoft Hyper-V server. This can have many possible causes; in most cases it is the integration services. A list of the different causes and solutions is available here –> http://www.veeam.com/kb1855
Issue #6: Logs not getting truncated on Exchange/SQL guest VMs. This requires application-aware image processing, and the backup job must be configured to truncate logs –> http://www.veeam.com/kb1878
Issue #7: Backup of vCenter servers –> http://www.veeam.com/kb1051
Issue #8: Backup using Hyper-V and Dell EqualLogic VSS –> http://www.veeam.com/kb1844
Issue #9: Incredibly slow backup over the network with no load on the servers: make sure that all network switches are running full duplex.
Issue #10: Win32 error: the network path was not found. When doing application-aware image processing, Veeam needs to access the VM via the admin share with the credentials defined in the backup job. (For VMware, if the VM does not have network access, VMware VIX is used.) It is possible to change the priority of these protocols –> http://www.veeam.com/kb1230