Most vendors these days include some sort of PowerShell module with their software. Why? Because it opens up for automation!
As I mentioned in one of my previous posts, I’ve started playing around with PernixData in our lab environment, and while reading through the different KB articles I noticed: “Hey! PernixData has a PowerShell module installed on the management server!”
So to get it up and running: the easiest way to take a look around at the different cmdlets is by using ISE on the management server.
Or, if you want to do a remote call to it using SMA or Orchestrator, you can import the module by adding the path to C:\Windows\system32\WindowsPowerShell\v1.0\Modules\PrnxCli\PrnxCli.dll
Otherwise it’s just Import-Module PrnxCli
So what can we do? Pretty much anything (I haven’t completely figured out how to create a cluster yet, since the cmdlet seems to want some sort of GUID which I am not able to grasp yet), but I can still define resources, add a VM to be accelerated, and so on.
So what kind of cmdlets can I use? First of all I need to connect to the management server, using an account which has rights to it.
Connect-PrnxServer -Password password -UserName username -NameOrIPAddress localhost
Next I can assign resources to my cluster called pernix (from a particular host). I can also pass multiple values here: -Host 1,2,3 and so on.
Add-PrnxRAMToFVPCluster -FVPCluster pernix -Host esxi30 -SizeGB 12
Next I assign a virtual machine to the cluster (Name indicates the name of the virtual machine), then the number of write-back peers (if I choose write-back).
Add-PrnxVirtualMachineToFVPCluster -FVPCluster pernix -Name felles-sf -NumWBPeers 1 -WriteBack
Now I can see the acceleration policy being assigned to the virtual machine. Piping to Format-List will give a bit more information about the policy.
Get-PrnxAccelerationPolicy -Name felles-sf | format-list
I can also pull stats directly from PowerShell; this will list the last 5 samples of stats for the virtual machine in a grid view.
Get-PrnxObjectStats -ObjectIdentifier felles-sf -NumberOfSamples 5 | Out-GridView
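Putting the cmdlets together, a minimal end-to-end sketch could look like the following. This assumes the PrnxCli module is available on the management server; the second host name (esxi31) is just a made-up example:

```powershell
# Load the PernixData module and connect to the management server
Import-Module PrnxCli
Connect-PrnxServer -NameOrIPAddress localhost -UserName username -Password password

# Add 12 GB of RAM from each host to the FVP cluster "pernix"
foreach ($esxHost in "esxi30","esxi31") {
    Add-PrnxRAMToFVPCluster -FVPCluster pernix -Host $esxHost -SizeGB 12
}

# Accelerate the VM in write-back mode with one replication peer
Add-PrnxVirtualMachineToFVPCluster -FVPCluster pernix -Name felles-sf -NumWBPeers 1 -WriteBack

# Verify the policy and pull the last 5 stat samples
Get-PrnxAccelerationPolicy -Name felles-sf | Format-List
Get-PrnxObjectStats -ObjectIdentifier felles-sf -NumberOfSamples 5 | Out-GridView
```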
Works like a charm!
So I was helping a partner the other day with an AAA setup. The issue was that a customer had two different AAA authentication profiles, where one profile was using username + password while the other was using two-factor authentication with RADIUS. What they wanted was that users were first given access to the first intranet site, and then needed to reauthenticate with their two-factor authentication if they wanted to get to the second zone.
Problem was that this didn’t work as they intended: if a user had first logged in with their username and password, they were also given access to the other content that required two-factor authentication.
This is because within NetScaler there is a concept called Authentication Level.
If you have two different authentication profiles which both have an authentication level of 1, it wouldn’t matter, because all users would be able to access all content. So what we needed to do was assign an authentication level of 1 to the authentication profile using just username and password, and an authentication level of 2 to the one using two-factor. A user that is authenticated at level 1 cannot access content that requires a higher authentication level, so when users tried to enter the content that needed two-factor authentication they had to reauthenticate.
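For reference, on the NetScaler CLI the level is a parameter on the authentication profile. A sketch with hypothetical names (the vserver and profile names here are made up; only the -AuthenticationLevel parameter itself is the point) might look like:

```
add authentication authnProfile prof-password -authnVsName auth-vs-ldap -AuthenticationLevel 1
add authentication authnProfile prof-twofactor -authnVsName auth-vs-radius -AuthenticationLevel 2
```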
So this has been on my to-do list for a while, but I’ve finally been able to play around with PernixData. So what does it do?
It gives us low-latency reads and writes based upon server-side flash or RAM. So think of it as a storage tier in between the virtual machine and the datastore, using either Write Through (read) or Write Back (read & write) caching. (Note that when setting up write-back caching you have an option to define replication partners so you don’t lose data.)
This allows for improved performance while still offloading burst IO from the underlying datastore (which might be NFS, iSCSI, FC and so on); note it is only supported on VMware. The golden part (besides the flash part) is that this is a pure software solution. They say on their website you use about 10 min to set it up; that’s just wrong, I used about 7 min max! There are two pieces to get it up and running (or 3 actually). First is the host integration software, which is basically a VIB that needs to be installed on each host. I did it using SSH and FTP; of course it is possible to use VUM as well.
esxcli software vib install -d <ZIP file name with full path> --no-sig-check
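Once the VIB is installed you can verify it from the same SSH session. I’m assuming here that the package name contains “prnx”, so adjust the pattern to whatever the bundle is actually called:

```
esxcli software vib list | grep -i prnx
```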
Next we need to install a management server, which needs to be a Windows server with a SQL database to store data. Important to remember that PernixData stores about 0.5 MB of data per VM each day, so size accordingly.
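As a quick back-of-the-envelope for sizing the SQL database, assuming roughly 0.5 MB per VM per day (the VM count and retention period below are just example numbers):

```powershell
# ~0.5 MB of stats per VM per day; estimate one year of data for 200 VMs
$vms  = 200
$days = 365
$sizeGB = ($vms * 0.5MB * $days) / 1GB
"{0:N1} GB" -f $sizeGB   # roughly 35.6 GB
```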
After the management server is installed you have to add the plug-in to vCenter (I like the C# client since I was using 5.5), so when I started vCenter I went into Plug-ins –> Manage Plug-ins.
After the plug-ins were enabled and active I was able to log in to the management console (remember though that you need a VMware cluster for the management to work).
To give myself an easy start I want to try out the RAM based FVP cluster on one of the hosts and give it a spin.
So I created a RAM-based cluster on one of the hosts (choose Create Cluster and then add the type of resource you want, flash or RAM), and you can decide yourself how much RAM you want allocated to the FVP cluster. (And no, you don’t need 40 GB assigned to an FVP cluster just to play around.) We’ll get to that part in a bit.
Then I chose to enable write-back, which means that the content in FVP is not directly in sync with what’s on the datastore. Meaning that if my server happened to go down, that data would be lost since it is stored in RAM; but again, it does give a good write boost since FVP doesn’t need to wait on the datastore. So I did a quick test before and after adding my virtual machine to the FVP cluster (and without any further tuning, just adding the VM to the cluster). What is happening underneath is that PernixData becomes a part of the hypervisor, kind of like a filter which the VM has to go through when reading and writing to the datastore.
HD tune test without Pernix enabled
Then I activated the VM for write back and added it to the cluster (and yes you can do this on the fly) and note that this VM is stored on NFS storage with SAS drives.
So how were my test results now?
Read performance improved almost 20x. This test does not show much write information, so let’s try some random access (with FVP enabled)
(with FVP disabled)
Now we can see from the graphs inside vCenter that content is being served from the FVP cluster. These tests only show a fraction of it; the effect would of course be much more visible in a production SQL or SharePoint setup, for instance. So stay tuned for more!
So this is something I’ve only recently become aware of. (It just goes to show how much news is coming to the EMS pack.) This feature is really useful for customers who have Office365 and Azure Active Directory.
These might be SaaS applications, other Microsoft Azure AD based applications, or on-prem apps using Application Proxy. That’s great, but the typical user might have Office365 as their start portal (start page); is there any way to show the apps there instead? Indeed!
So inside the application portal of Office365 you have an option here called My apps.
If users click here they will get the apps available to them
And here I can choose to attach the application to the application portal for a user.
Great stuff, since this allows the user to have access to all their applications from within Office365.
Finally it’s here! The ability to run custom RemoteApp images in Microsoft Azure. Before this we had a long process of creating a custom VM locally, sysprepping it, and running a PowerShell command to upload the VHD file containing all our LOB apps to Azure. Those days are over!
Instead we can use this method to create RemoteApp images. Set up a new virtual machine in Azure, choose From Gallery, and there pick the “Windows Server Remote Desktop Session Host” VM; this is the one that we use to create our image.
Then we provision the VM (note this is automatically set up as an A3 because of the instance size requirement on RemoteApp). Next we can install the applications that we need.
Next we run the ValidateRemoteApp image PowerShell script on the desktop (this will go through all the prerequisites to set up the image).
Then do a sysprep and generalize
Then do a capture of the virtual machine so it is stored in the virtual machine library
Then we go into RemoteApp, templates and choose Import an image from your virtual machine library.
And we are good to go!
In the last couple of years, Microsoft has been working actively on new features in Azure Active Directory. For those who aren’t aware of what that is, briefly: it is identity as a service hosted in Azure. (It’s not the same as regular Active Directory even though it shares the same name; it is a user administration system and stores users in a catalog, but it is built for the cloud. You also don’t have features like Group Policy, and the notion of machine objects is not present (well, almost not); I’ll come back to that.)
So when you set up an Intune account, Office365 account or CRM Online, it will automatically create an Azure Active Directory tenant. All users that are created will be populated into that Azure AD tenant. From an administrator point of view, all they will see is the users listed in their administration portal. In order to get the full benefit of Azure Active Directory you need to go into Azure.
(Before I go into specifics you need to be aware that there are 3 editions of Azure Active Directory: Free, Basic and Premium.) You can see the different features that are included in all 3 here –>
And also take note that Premium is also included in Microsoft EMS package (With Azure Rights Management and Intune) https://msdn.microsoft.com/en-us/library/azure/dn532272.aspx
So what do I mean by “built for the cloud”? Well, first of all, regular Active Directory, which today is well established and one of the key components of an on-premises setup, does not work well with all the SaaS services that are being added in many enterprises today. Many vendors include Active Directory integration in their service (like Dropbox and such), but that is precisely because there are no native features in Active Directory for it.
Azure Active Directory, on the other hand, is built to be a platform which can include all the applications you want and work as an identity provider for all your SaaS applications or on-premises apps. Now, many are familiar with the synchronization tools that Microsoft offers to give a consistent user experience between on-prem and Office365. These tools will place users in an Azure Active Directory tenant, which we can then build upon with new features and integrations with other SaaS applications. We can also use Azure Active Directory standalone if we want a more purely cloud-based setup.
So what does Azure Active Directory consist of ?
- Azure Access Control
- Azure Authentication System (SAML, OpenID & Oauth, WS-federation)
- Azure Graph
- Azure Rights Management Service
- Azure Multi-factor authentication
All these services have a set of sub-features as well, but with all this, Azure Active Directory can be a platform for managing identity across different clouds. So what might it look like? Let’s think of a traditional enterprise where all new employees are created in the HR application; IT then needs to set up an Active Directory user and provision access to all the SaaS apps that the company uses.
What would it look like with Azure Active Directory set up with the different tools that Microsoft offers?
Let’s look at the example again: a new employee is set up in the HR system. Microsoft Identity Manager (which is the vNext of Forefront) has a connector which allows it to grab hold of the information, and based on a workflow for how new employees should be set up, it provisions a user in the local Active Directory. Azure AD Connect (which is the new and upcoming replacement for DirSync and AAD Sync) will, based upon the filters, sync the user to Azure Active Directory. There can also be an ADFS setup, which allows for true SSO, since ADFS will work as a SAML IdP and users can access it in real time. Another option is to set up user synchronization with password hash; this allows users to use their usernames and passwords (a bit delayed when a password has been changed and a sync has not yet run), but it does not give users true SSO to services in Azure.
Now that the users are in Azure, we can set up access to other SaaS services like Salesforce, Dropbox, other social media applications and maybe even Citrix. Another option is to publish an internal application we run ourselves. This requires another feature called Application Proxy, which authenticates users with their Azure AD credentials (with or without MFA) and then proxies a connection to an on-prem service.
So far I’ve covered some of the basics. Let’s look at how it looks. This is a screenshot from my management portal; here I have one catalog.
Inside here I have multiple users; some are cloud-only and some are synced from on-premises. Here I also have the option to manage MFA for my tenant (I have a valid subscription).
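As a side note, you can also list the tenant’s users from PowerShell with the MSOnline (Azure AD) module; this is a sketch assuming the module is installed and you connect with a tenant admin account:

```powershell
# Connect to the Azure AD tenant (prompts for admin credentials)
Connect-MsolService

# List users; synced users have LastDirSyncTime set, cloud-only users do not
Get-MsolUser -All | Select-Object UserPrincipalName, LastDirSyncTime
```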
Also inside the tenant catalog I have a bunch of different options which we are going to go through.
First of all, let’s look at the configuration part. First up is the part to customize the sign-in experience for our users.
So we can define a logo, background screen and such. Just basic stuff, so when users try to log in they might see this.
We also have configuration options for user password reset.
We can also define a password write-back feature (which allows new passwords generated in Azure AD to be written back to an on-premises Active Directory. Note that this requires the Active Directory sync services to be set up with the write-back feature.)
As I mentioned earlier, Azure AD has no idea about machine objects; well, it kinda does. This is another preview feature, but it allows Windows 10 machines to “join” Azure Active Directory and lets users log in using their Azure AD credentials.
(From a Windows 10 tech preview machine)
After joining the Azure AD domain you can now sign in with your credentials.
There are also a lot of different options regarding group management in Azure.
And one important part is Application Proxy
I have blogged about this before (https://msandbu.wordpress.com/2015/02/19/publishing-internal-applications-using-azure-active-directory-using-application-proxy/)
So let’s talk a bit about the important part: the applications. Azure has several possibilities when adding applications: working as a front-end authentication feature for, for instance, on-prem applications; single sign-on for web-based applications (password and federated SSO); and setting up MFA.
So let’s start with adding Facebook for our tenant and setting up the new feature called password roll-over (which allows Azure AD to automatically update a password on behalf of the user).
So head on over to applications and choose add from Gallery
Find Facebook from the list and choose OK.
Click on Configure Single Sign-on and choose Password SSO. (Note that this requires the user to authenticate first with a username and password using a browser which has the Azure AD extension installed. When the user has authenticated, the extension will take the username and password, encrypt it and store it in the Azure AD tenant, so the next time the user logs in they don’t need to enter a username and password.)
Then let’s assign some users. Go into Users and Groups, find a user and choose Assign.
Now we can also enter a username and password on behalf of the user
(Note that for LinkedIn, Twitter and Facebook we have the preview feature automatic password rollover.)
Then click OK.
Now let’s add an on-prem application. As I’ve blogged about this before, I won’t show all the steps, just what’s new.
For on-premise applications we can configure access rules, let’s for instance say that all users (except for sales users) need to use MFA when accessing this application outside of the Office.
Note that this is based upon IP whitelisting to decide who gets access with or without MFA. This is part of the cloud-based MFA feature; it is also possible to download a server MFA component which you can attach to your on-prem services as well using traditional AD https://msandbu.wordpress.com/2014/05/05/azure-multifactor-authentication-and-netscaler-aaa-vserver/
Note that you can also use Azure Active Directory as a SAML IdP and use the Graph API when developing other applications and set up integration with it. There are also some applications, like Salesforce, which offer full identity management, true SSO and provisioning.
But only a few vendors have added this support. Now, if we approach an enterprise with “Hey, you should get Azure AD, it’s great stuff!” and they use something like 200 SaaS applications, how can you get an overview? Microsoft has also created something called Cloud App Discovery (which is also in preview –> https://appdiscovery.azure.com/)
It is basically an agent that you download and run in your infrastructure; it will gather info, find out what applications are being used, and try to map them against those that Microsoft has support for.
So when you have set up the applications and given users access, how does it look?
and voila user access!
Now this was just a brief look into Azure Active Directory. In the last 6 months these features have been added to Azure AD:
- Dynamic group membership
- Azure AD Connect Health
- 200+ applications in the gallery list
- SaaS provisioning attributes
- MIM in public preview
- Azure AD Proxy
- Azure AD on iOS and Android
- Conditional access per app
And this list will continue to grow. If you want to see what’s happening with Azure AD I suggest you follow Alex Simons (@Alex_a_simons) on Twitter (he is the Product Manager for Azure AD, and judging from the feature list, he is feeding his developers Red Bull or something stronger)
and follow the Azure AD blog http://blogs.technet.com/b/ad/
Stay tuned for more news about Azure AD
Something caught my eye earlier today that I wasn’t aware of. With Citrix Receiver 4.2, Citrix introduced support for audio over UDP with NetScaler Gateway. Now this is huge, since ICA proxy has always been TCP only; audio over UDP gives much better performance since it doesn’t have the overhead that TCP does.
Checking out Citrix eDocs I didn’t find much info; all I noticed was the information in the release notes of Citrix Receiver. Then out of the blue came this blogpost –> http://discussions.citrix.com/topic/361759-udp-audio-through-netscaler-with-dtls/
It basically states that in order to set up audio over NetScaler Gateway using UDP (DTLS) we need to define Citrix Receiver policies.
Then we need to enable DTLS on the NetScaler Gateway (which is now supported in the e-builds).
Then we are all set. You can use the HDX monitor inside an ICA session to see that audio over UDP is enabled.
This is an issue I have seen a couple of times now, so I decided to write a blogpost about it. In January I ran into issues with our test servers running Office365 and Shared Computer support: the credential tokens were not working and users needed to reauthenticate when opening another Office application.
Now I have also gotten a couple of questions on email and some on the Microsoft forum asking about the same.
I did a bit of troubleshooting and didn’t figure out what the issue was right away. But this feature had been working for quite some time, so it must have been an update that caused it; and since Office365 is Click-to-Run, which is updated by Microsoft, it must have been a new build that made this happen.
Therefore I used the Group policy templates that comes with Office365 (Which can be downloaded here –> http://www.microsoft.com/en-us/download/details.aspx?id=35554)
(Here are the version builds) http://support2.microsoft.com/gp/office-2013-365-update
There I specified which build to use; I chose the November build, and Shared Computer support worked as intended again. It therefore seems there is a bug/issue in the December and February builds.
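For reference, the “Target Version” policy in those templates maps, as far as I can tell, to a registry value along these lines (the build number is a placeholder; look it up in the build list linked above):

```
HKLM\SOFTWARE\Policies\Microsoft\Office\15.0\Common\OfficeUpdate
  updatetargetversion (REG_SZ) = 15.0.xxxx.yyyy
```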
Today Veeam announced an RC of Veeam B&R Patch 2 and Veeam Endpoint Backup, which now allows us to integrate with a Veeam repository.
NOTE: This is RC not intended for Production, but please give it a try and give Veeam some feedback on this great product!
You can fetch the RC releases here –> http://forums.veeam.com/veeam-endpoint-backup-f33/veeam-endpoint-backup-free-rc-t26694.html#p139052
One can never be too careful, so don’t install this in production… Or you can’t.
So the setup is pretty simple, install the endpoint backup product on a server, define a backup mode
then choose backup repository!
Define a username and password which has access to the backup repository.
Then let the magic fly!!!
to be continued!
Citrix just recently announced the tech preview of the latest StoreFront X1 (and Receiver for Web X1). So is this where we finally get back the Web Interface features that have been missing for some time?
So what’s new? As you can see, after an installation the management is still the same.
We just have some new features available. We can now customize the website appearance (add logos).
We can define what type of website we want (Classic is the regular green bubble StoreFront).
We can also add shortcuts to websites (why not resources!!)
We can also create Featured App Groups, which can be department specific. And we can use keywords in applications to differentiate the applications.
Now the new website GUI has had an overhaul.
But it still has many of the same CSS properties available as regular StoreFront; the new Receiver X1 web files are located under inetpub\wwwroot\Citrix\Websitename\receiver
And desktops and applications look a lot nicer as well.
Looking forward to the new Citrix Receiver as well! Happy customizations!