New stuff for Intune

So Microsoft has been busy rolling out numerous updates to Intune lately. The latest batch came last week, and you can see the announcement here –> http://blogs.technet.com/b/microsoftintune/archive/2014/12/09/new-mobile-application-management-capabilities-coming-to-microsoft-intune-this-week.aspx

  • Ability to restrict access to Exchange Online email based upon device enrollment and compliance policies
  • Management of Office mobile apps (Word, Excel, PowerPoint) for iOS devices, including ability to restrict actions such as copy, cut, and paste outside of the managed app ecosystem
  • Ability to extend application protection to existing line-of-business apps using the Intune App Wrapping Tool for iOS
  • Managed Browser app for Android devices that controls actions that users can perform, including allow/deny access to specific websites. Managed Browser app for iOS devices currently pending store approval
  • PDF Viewer, AV Player, and Image Viewer apps for Android devices that help users securely view corporate content
  • Bulk enrollment of iOS devices using Apple Configurator
  • Ability to create configuration files using Apple Configurator and import these files into Intune to set custom iOS policies
  • Lockdown of Windows Phone 8.1 devices with Assigned Access mode using OMA-URI settings
  • Ability to set additional policies on Windows Phone 8.1 devices using OMA-URI settings

Now one of the cool features is the Managed Browser app. This allows us to manage how content is opened and displayed from this app. By default the policy for this application can do one of two things.

  • Allow the managed browser to open only the URLs listed below – Specify a list of URLs that the managed browser can open.
  • Block the managed browser from opening the URLs listed below – Specify a list of URLs that the managed browser will be blocked from opening.

So we define the URLs a user can open (NOTE: You can see what kind of URL prefixes you can use here –> http://technet.microsoft.com/en-us/library/dn878029.aspx#BKMK_URLs)

The application itself is available from Google Play https://play.google.com/store/apps/details?id=com.microsoft.intune.managedbrowser but in order to use it in conjunction with Intune policies we need to deploy the application from Intune itself. Besides the Managed Browser application, Microsoft also released some other applications like the Intune PDF Viewer, Intune AV Player and Intune Image Viewer, which users can download from Google Play. So when a user opens a PDF link from the Managed Browser, it will automatically open in the Intune PDF Viewer (where we can define settings such as blocking copy/paste, screenshots etc.).

To set this up we need to deploy the package to our users, so they can install it from the company portal. NOTE: Don't deploy it right away; we need to create some policies first.

image

So when setting up policies we have a lot of new policy features we can define for our devices.

image

Now the managed browser policy is just the allow/deny list. And we have the mobile application management policy, where we can define how the apps integrate and what users can do when the content is displayed.

image

When we are done creating the policies, we can deploy them. Unlike other policies, these need to be deployed as part of the software deployment and not directly to users or groups. So when setting up the browser deployment we can add the policies.

image

Now we can head on over to the mobile device! First off, I need to sync my mobile device policy.

Screenshot_2014-12-18-23-24-22

Then I install the Managed Browser app and the other components I need from the company portal.

Screenshot_2014-12-18-23-05-19

Now I am ready to use the Managed Browser. When I open a URL that is on the deny list I get this error message.

Screenshot_2014-12-18-23-27-15

When I open a URL that is on the allow list it works like a regular browser, but when I download a PDF file you can see there is a loading bar underneath the URL. This is because the Managed Browser downloads the PDF internally in the app and then

Screenshot_2014-12-18-23-27-41

we are switched over to the Intune PDF Viewer.

Screenshot_2014-12-18-23-28-45

So again, a lot of new stuff arriving in Intune. Looking forward to the next chapter!

Problems with Netscaler and Hyper-V NIC teaming – ICA error 1110

Had a customer case where ICA sessions were being terminated when connecting via Netscaler. They had a regular MPX pair set up in HA, which serviced XenApp servers located on a cluster of Hyper-V hosts. These hosts were running Windows Server NIC teaming in switch-independent Dynamic mode.

The Citrix sessions were terminated with a failed status of 1110. What they also noticed was that when the Netscaler connected to the XenApp host it used the MAC address of the XenApp virtual machine, but when the traffic returned to the Netscaler, the source MAC address had changed from the XenApp host to the MAC address of the Hyper-V host.

This made the Netscaler drop the traffic, and the ICA session was terminated.

From the deployment guide of NIC teaming (http://www.microsoft.com/en-us/download/details.aspx?id=30160)

Dynamic mode has this as a “side effect”

3.11    MAC address use and management
In switch independent / address hash configuration the team will use the MAC address of the primary team member (one selected from the initial set of team members) on outbound traffic.  MAC addresses get used differently depending on the configuration and load distribution algorithm selected.  This can, in unusual circumstances, cause a MAC address conflict.

This is because in the world of Ethernet an endpoint can only have one MAC address, and with switch-independent teaming there can only be one active physical adapter that accepts inbound traffic.

Therefore, if you are having issues with Netscaler and Hyper-V NIC teaming, you should change the load distribution mode from Dynamic to Hyper-V port, because then the NIC teaming will not do any source MAC address replacement.
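
If you manage the team with PowerShell, a minimal sketch of that change could look like this (the team name "Team1" is an assumption, adjust it to your own):

# Switch the existing team from Dynamic to Hyper-V port load balancing,
# which stops the source MAC replacement described above.
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm HyperVPort

# Verify the change
Get-NetLbfoTeam -Name "Team1" | Select-Object Name, TeamingMode, LoadBalancingAlgorithm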

 

image

But note that Hyper-V port load balancing has its own trade-offs, which you can read about in the NIC teaming deployment guide linked above.

Windows Azure and Storage performance

For those that have been working with Azure for some time, there are some challenges with delivering enough performance for a certain application or workload.
For those that are not aware, Microsoft has put limits on the max IOPS of disks that are attached to a virtual machine in Azure. But note that these are max limits and not a guarantee that you get 500 IOPS for each data disk.

Max IOPS per attached data disk, by virtual machine tier:

  • Basic: 300 IOPS (8 KB IO size)
  • Standard: 500 IOPS / 60 MBps (8 KB IO size)

There is also a cap of 20,000 IOPS per storage account.

In order to go “past” this limit, Microsoft has mentioned from time to time to use Storage Spaces (which is basically a software RAID solution introduced with Server 2012) in order to spread the IO load between different data disks. (This is a supported solution.)

http://blogs.msdn.com/b/dfurman/archive/2014/04/27/using-storage-spaces-on-an-azure-vm-cluster-for-sql-server-storage.aspx “physical disks use Azure Blob storage, which has certain performance limitations. However, creating a storage space on top of a striped set of such physical disks lets you work around these limitations to some extent.”

Therefore I decided to do a test using an A4 virtual machine with 14 added data disks, create a software pool with a striped volume and see how it performed. NOTE that this setup was using a regular Storage Spaces setup, which by default uses a chunk size of 256 KB and a column count of 8 disks. http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx#What_are_columns_and_how_does_Storage_Spaces_decide_how_many_to_use
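
As a side note, attaching the 14 data disks can be scripted with the classic Azure PowerShell module. This is just a minimal sketch; the cloud service and VM names are assumptions:

# Attach 14 empty 1023 GB data disks (LUN 0-13) to the test VM (default host caching is None).
$vm = Get-AzureVM -ServiceName "iopstest" -Name "iopstest-a4"
0..13 | ForEach-Object {
    $vm = $vm | Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "data$_" -LUN $_
}
$vm | Update-AzureVM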

I set up all the disks in a single pool and created a simple striped volume to spread the IO workload across the disks (not recommended for redundancy!), and note that these tests were done in the West Europe datacenter. When I created the virtual disk I needed to define the maximum number of columns across the disks.

Get-StoragePool -FriendlyName "test" | New-VirtualDisk -FriendlyName "test" -ResiliencySettingName Simple -UseMaximumSize -NumberOfColumns 14
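
For reference, a fuller sketch of the whole Storage Spaces setup could look like the following (pool name, volume label and partition options are assumptions, not the exact commands I ran):

# Pool every poolable data disk ("*Storage*" matches the 2012 or 2012 R2 subsystem name),
# stripe a simple (non-resilient) space across all 14 columns and format it as one NTFS volume.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "test" -StorageSubSystemFriendlyName "*Storage*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "test" -FriendlyName "test" -ResiliencySettingName Simple -UseMaximumSize -NumberOfColumns 14
Get-VirtualDisk -FriendlyName "test" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "StripedSpace" -Confirm:$false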

Also, I did not set any read/write cache on the data disks. I used HD Tune Pro since it delivers a nice GUI chart as well as IOPS.

For comparison this is my local machine with an SSD drive (READ) using Random Access testing.

image

This is from the storage space (simple virtual disk across 14 columns with 256 KB chunks) (READ)

image

This is from the D: drive in Azure (note that this is not a D-instance with SSD)

image

This is from the C: drive in Azure (which by default has caching enabled for read/write)

image

Then I did a regular benchmark test, writing a 500 MB file to the virtual volume on the disk.

image

Then against the D drive

image

Then for C:\ which has read/write cache activated I get some spikes because of the cache.

image

This is for a regular data disk which is not in a storage pool. (I just deleted the pool and used one of the disks from it.)

This is a regular benchmark test.

image

Random Access test

image

Now, in conclusion, we can see that a Storage Spaces setup in Azure is only a few percent faster than a single data disk in terms of performance. The problem with using Storage Spaces in Azure is the access time/latency of these disks, which becomes the bottleneck when setting up a storage pool.

Now, luckily, Microsoft is coming with a Premium storage account which has up to 5,000 IOPS per data disk, which is more like regular SSD performance and should make Azure a more viable solution for delivering applications that are more IO intensive.

Can I run my workload in Azure?

This is a typical question I get quite often, and therefore I decided to write a blog post to get all the facts straight and talk a bit about the cons of running workloads in Azure and why in some cases it is not possible or not the best option. Note that there are many use cases for Azure, but I just want to make people aware of some of the limitations that are there.

So first off, what is Azure running on? The entire Azure platform is running on top of Windows Server with a modified Hyper-V 2012 installed, and since Azure is a PaaS/IaaS platform, Microsoft manages all of the hardware and the hypervisor layer. Azure was originally built on 2008 R2 and now runs on 2012, but it still only supports VHD disks.
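
If you have existing VHDX disks on-premise, they have to be converted to the (fixed) VHD format before they can be uploaded. A hedged example; the file paths and storage account URL are assumptions:

# Convert a local VHDX to a fixed VHD with the Hyper-V module...
Convert-VHD -Path "D:\VMs\app01.vhdx" -DestinationPath "D:\Upload\app01.vhd" -VHDType Fixed
# ...and upload it to a storage account with the classic Azure module.
Add-AzureVhd -LocalFilePath "D:\Upload\app01.vhd" -Destination "https://mystorageaccount.blob.core.windows.net/vhds/app01.vhd"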

Virtual machine sizes in Azure are predefined, meaning that I cannot size a VM based on what I need; I can only use what Microsoft has predefined (which you can see here –> http://msdn.microsoft.com/en-us/library/azure/dn197896.aspx ), and the VM instance size also determines how many data disks we can attach to the VM.

(For instance, the A4 has a maximum of 16 data disks (1 TB each), meaning that we can attach a total of 16 TB of storage to a virtual machine. Combining them into one volume of course requires the use of Storage Spaces.)

Microsoft Azure at the moment supports three Windows Server OS versions (2008 R2, 2012 and 2012 R2).

Microsoft also has a list of supported Windows Server workloads in Microsoft Azure –> http://msdn.microsoft.com/en-us/library/azure/dn197896.aspx

Also note that Azure VMs mostly use AMD Opteron-based CPUs, which have lower per-core performance than regular Intel Xeon-based CPUs.

So what issues have I seen with Azure when designing workloads?

1: Laws and regulations (for instance in Norway we have a lot of strict rules about storing data in the cloud, so be sure to verify whether the type of data you are storing can be placed outside of the country).

2: Need for speed (Azure is using JBOD disks and has a cap of 500 IOPS per data disk; if you need more disk speed we need to use Storage Spaces and stripe across multiple disks, but even then there is just a theoretical speed of up to 8,000 IOPS, which is about the speed of a single SSD. Also note the AMD-based CPUs: if you have an application that is really CPU intensive you might find that Azure is not adequate. NOTE: Azure is coming with a new C-class which is built on Premium Storage, which has over 30,000 IOPS and comes with an Intel-based CPU.)

3: Graphics-accelerated applications/VMs (since we cannot make changes to the hardware, and Microsoft does not have an option for choosing instances with hardware graphics installed, this is still a scenario that requires on-premise hardware).

4: Unsupported products (if we for instance want to run Lync or Exchange Server, these are not products that are supported running in Azure and they therefore require an on-premise solution).

5: Specific hardware requirements (if you have a VM or application that requires specific hardware such as COM ports or USB attachments).

6: Requires near-zero downtime on the VM level (fault tolerance). Azure has an SLA for virtual machines, but this requires that we have two or more VMs in an availability set; this allows Azure to take down one instance at a time when doing maintenance or when a fault happens in a physical rack/switch/power. There is no SLA for single virtual machine instances, and when maintenance happens Microsoft will shut down your VM; there are no live migrations in place. (See the sketch after this list for how to place VMs in an availability set.)

7: Very network-intensive workloads (again, there are bandwidth limits on the different virtual machine instances, and if you also require a S2S VPN there is a cap of 80 Mbps on the regular gateway. There is also the ExpressRoute option, which allows for a dedicated MPLS VPN to an Azure datacenter. It is also important to remember the latency between your users and the Azure datacenters. This can give you a quick indication of how high the latency is between you and Azure –> http://msdn.microsoft.com/en-us/library/azure/dn197896.aspx )

8: Applications that require shared storage in the back-end (for instance, clustering in many cases requires shared back-end storage, which is not possible in Azure since a disk is bound to a single virtual machine).

9: Products that come as finished appliances (unless the partner has their product listed in the Azure Marketplace).

10: Requires both IPv4 and IPv6 (IPv6 is not supported in Azure).

11: Network-based solutions (IPS/IDS): since we are not allowed to deploy or change the vSwitch that our solution runs on in Azure, we are not able to, for instance, set up RSPAN, which rules out that kind of IDS technology in Azure.
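
For point 6, placing existing VMs in an availability set can be done from the portal or with the classic Azure PowerShell module. A minimal sketch (the service, VM and availability set names are assumptions):

# Put two existing VMs in the same availability set so the VM SLA applies.
"web01", "web02" | ForEach-Object {
    Get-AzureVM -ServiceName "mysvc" -Name $_ |
        Set-AzureAvailabilitySet -AvailabilitySetName "web-as" |
        Update-AzureVM
}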

These are some use cases where Azure might not be a good fit. I will try to update this list with more; if anyone has any comments or things they want me to add, please comment or send me an email at msandbuATgmail.com.

Upcoming events and stuff

There's a lot happening lately, and therefore it has been a bit quiet here on this blog. But here is a quick update on what's happening!

In February I will be presenting at NIC conference (which is the largest IT event for IT pros in Scandinavia) (nicconf.com). I just recently got confirmation that I will be presenting 2 (maybe 3) sessions:

* Setting up and deploying Microsoft Azure RemoteApp
* Delivering high-end graphics using Citrix, Microsoft and VMware

One session will be primarily focused on Microsoft Azure RemoteApp, where I will show how to set up RemoteApp in both cloud and hybrid mode and talk a little bit about what kind of use cases it has. The second session will focus on delivering high-end graphics and 3D applications using RemoteFX (using vNext Windows Server), HDX and PCoIP, with demos of how it works, the pros and cons, VDI or RDS, and endpoints. My main objective is to talk about how to deliver applications and desktops from cloud and on-premise.

On the other end, I have just signed a contract with Packt Publishing to write another book on Netscaler, “Mastering Netscaler VPX”, which will be kind of a follow-up to my existing book http://www.amazon.co.uk/Implementing-Netscaler-Vpx-Marius-Sandbu/dp/178217267X/ref=sr_1_1?ie=UTF8&qid=1417546291&sr=8-1&keywords=netscaler

It will cover the different subjects in more depth and focus on the 10.5 features as well.

I am also involved with a community project I started, a free eBook about Microsoft Azure IaaS, where I have some very skilled Norwegians with me writing on the subject. It takes some time, since Microsoft is always adding new content which then needs to be added to the eBook as well.

So a lot is happening! More blog posts coming around Azure and CloudBridge.

Veeam Endpoint backup – a new free backup solution for computers and physical servers

A couple of months back, Veeam announced Veeam Endpoint Backup. This is a free solution for everyone, which works both for physical servers and for regular Windows computers.

Now, since Veeam has always targeted a plain virtualization stack, we need to be aware that there are a lot of differences between taking a backup in a virtual environment and taking a backup of a physical server/computer. So in order to back up a physical machine we need to have an agent installed.

(This is the Veeam Endpoint service)

image

(NOTE: Endpoint backup will also install a local SQL 2012 Express to store configuration)
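
A quick way to see what the installer actually put on the machine is to list the Veeam services and the SQL Express instance with PowerShell (just a sketch; wildcards are used instead of assuming exact service names):

# List the Veeam services and any SQL Server instances registered on the machine.
Get-Service -DisplayName "Veeam*" | Select-Object Name, DisplayName, Status
Get-Service -Name "MSSQL*" | Select-Object Name, DisplayName, Status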

With Veeam Endpoint there are three ways to take a backup: volume-level backup, file-level backup, or the entire computer. The backup scope can include:

* Operating system data — data pertaining to the OS installed on your computer.
* Personal files — the user profile folder including all user settings and data. Typically, the user profile data is located in the Users folder on the system disk, for example C:\Users.
* System reserved data — system data required to boot the OS installed on your computer.
* Individual folders.
* Individual computer volumes.

Now, when setting up Endpoint Backup on a regular computer, I have the option to connect a USB drive and set up backup to it automatically.

image

When starting the wizard for the first time I get the option to create recovery media.

image

image

(NOTE: I ran this wizard on a virtual machine with Windows 7.)

The recovery media is useful when I am having trouble booting my computer, for instance if it is infected with malware.

image

So when booting from the ISO I get a Windows-based recovery option, pretty neat!

image

So when configuring regular backup, as I mentioned I have three options.

image 

When I choose file-level backup, I see how much data is going to be included in the backup. I can also create filters for what should be included and so on.

image

After I am done selecting what I want to back up, I need to choose where to store it.
Here I can select another local drive, which might be a USB disk, or I can select a shared folder.

NOTE: My Veeam Backup and Replication repository here is greyed out; this is because

image

Then I define how many restore points to retain.

image

After I am done creating the job, I can open the dashboard from the task applet and see the progress of the job.

image

Veeam also does a great job of logging job status in the Event Viewer.
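
If you want to pull these entries out with PowerShell, a small sketch could look like this (the exact log name Veeam registers is not assumed; it is discovered with a wildcard):

# Find the Veeam event log(s) and show the ten most recent entries from each.
$veeamLogs = Get-WinEvent -ListLog "*Veeam*" -ErrorAction SilentlyContinue
foreach ($log in $veeamLogs) {
    Get-WinEvent -LogName $log.LogName -MaxEvents 10 |
        Select-Object TimeCreated, Id, LevelDisplayName, Message
}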

image

image

When I want to restore specific files or folders, I can initiate a restore from the task applet.

image

When I choose to restore individual files, I get a file explorer view of the backup file.

image

This makes it pretty easy to do a restore from a specific date. What I also like about this product is that it has auto-update features and that I can throttle backup activity when the system is busy.

Again Veeam has done a great job creating a free product, which can either be run standalone or as part of an existing Veeam infrastructure.

Stay tuned for more!

New feature in Netscaler – Admin partitions

This was announced at Synergy earlier this year, and it has now arrived in the first enhancement build, which is downloadable from Citrix.com.

NOTE: There is only a VPX image available for XenServer, but there is a firmware build available which works on a regular VPX.

So what are Admin Partitions?

It is a kind of role-based segmentation: each user gets their own partition, which contains its own configuration files, view, logging and so on.

Think of SDX, where each department is given their own VPX with its own SLA, running its own build version and so on. Partitions work within a single appliance, so users share the same build and appliance, but they have their own configuration and setup.

Think of it like a Windows PC, where each user has their own login and they customize their own background and change the shortcuts and so on without it affecting the other users.

So how do we set it up?

image

System –> Partition Administration –> Partitions.

image

Here we define a name for the partition, along with its bandwidth, connection limit and memory limit. So this could for instance be a partition for the Citrix department (ICA proxy) with its own resource limits. After we have created it we can go back to the partitions menu and see how it looks.

Next we can add a bridge group or VLAN to the partition and bind it to a user.

image

We can also switch between partitions from within the admin GUI.

image

After I have switched partition, I can see how many dedicated resources this partition has.

image

And note that partitions also create new local groups.

image 

Note that this allows us to partition the Netscaler into different resource pools with dedicated users. So we can create a partition for the Citrix guys, one for the networking guys and, for instance, a partition for the Exchange guys, dedicating system resources to each department.

Stay tuned for more!

Netscaler 10.5 and Storefront 2.6

During a new setup for a customer we were using the latest build of Storefront 2.6 and the latest NS build 10.5 (53), to ensure that there were no bugs and so on.

The pre-existing Storefront was set up using regular HTTP (not recommended), but it should work just fine.

After setting up Netscaler against Storefront and adding different policies everything looked to be working fine.

Well, almost… Receiver for Web worked as it should; we managed to authenticate and start applications as expected. But when using Citrix Receiver (latest version) we stumbled across something funny.

After starting Citrix Receiver and entering username and password, the “enter URL” dialog window popped up again.

image 

annoying….

So I did as every IT guy does: I enabled logging of Receiver, checked the logs on Storefront, double-checked on different clients and verified that the store actually was saved in the registry.

My first thought was that Receiver wasn't able to store info in the registry.

NOTE: Citrix Receiver stores info under HKEY_CURRENT_USER
That worked as it should, so I then enabled logging on Citrix Receiver and looked through the logs there.

This is done by adding a couple of registry settings under HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Citrix\AuthManager

SDKTracingEnabled = true

TracingEnabled = true

HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Citrix

dword, ReceiverVerboseTracingEnabled = 1

The logs are generated under %USERPROFILE%\AppData\Local\Citrix.
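
If you prefer to script these tracing settings, a minimal PowerShell sketch could look like this (the value types for the AuthManager entries are an assumption based on the strings above; run it from an elevated prompt):

# Enable Receiver/AuthManager tracing as described above.
$authMgr = "HKLM:\SOFTWARE\Wow6432Node\Citrix\AuthManager"
Set-ItemProperty -Path $authMgr -Name "SDKTracingEnabled" -Value "true"
Set-ItemProperty -Path $authMgr -Name "TracingEnabled" -Value "true"
New-ItemProperty -Path "HKLM:\SOFTWARE\Wow6432Node\Citrix" -Name "ReceiverVerboseTracingEnabled" -PropertyType DWord -Value 1 -Force | Out-Null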

But no luck there either; everything looked as it should, yet the problem remained. After that I got a tip from some colleagues that I should enable HTTPS, since it was the last logical choice,

and voila! Citrix Receiver then worked as it should.

Veeam Cloud Connect for Microsoft Azure walkthrough

With the recent release of Veeam 8, I was excited to hear that Veeam also added support for Cloud Connect against Azure. Cloud Connect is an option for Veeam cloud providers to offer off-site backup for their customers. It requires that customers already have Veeam in place, but makes it easy for them to just add a “service provider” to the Veeam console and ship off-site backups to the cloud provider.

So why use Azure? First off, it might be as simple as you not having the available space/hardware to supply your customers. It might also be that you don't have adequate network infrastructure to support your customers. (NOTE that Cloud Connect does not use VPN.)

NOTE: This requires that you have an existing Azure account and preconfigured virtual networks and resource groups.

So how do we set this up?
First we go into the newest Azure Portal → portal.azure.com

image 

Then go into Marketplace and then search for Veeam (You can see Cloud Connect appearing there)

image

Choose Create. NOTE: This will provision a virtual machine instance in Azure, and the default instance is an A2, which can have up to 4 data disks (4 TB of total data) and a total of 2,000 IOPS. What disk size you want is up to you to decide. If you need more disk space or more IOPS you need to change to another instance size.
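
If you end up needing a bigger instance, the provisioned VM can be resized afterwards with the classic Azure PowerShell module. A hedged sketch (service and VM names are assumptions):

# Resize the Cloud Connect VM to a larger instance size (A3/"Large" in this example).
Get-AzureVM -ServiceName "veeamcc" -Name "veeamcc-vm" |
    Set-AzureVMSize -InstanceSize "Large" |
    Update-AzureVM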

image

After that is done you can just wait for the provisioning to complete. By default the template does a couple of things: firstly it spins up a VM with Veeam Cloud Connect preinstalled, and it also pre-creates an endpoint (port 6180) which Veeam will use to communicate and send traffic.

image

NOTE: At the top of the menu pane of the VM you need to take note of the FQDN of the VM (since you need it later when adding the service on-premise).

Also take note that the virtual machine has a VIP (which is by default dynamic) that will remain with the VM as long as it is allocated. The same goes for the internal IP, which in this case is 10.0.0.4, but we can assign both of them as static IP addresses.

I can assign the internal static IP address from the portal itself (this means that it is set as a static DHCP allocation). I can also define an instance IP address. By default a virtual IP address is shared by many virtual machines inside a cloud service, but an instance IP address is a dedicated public IP for a single virtual machine.
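
Both of these can also be set with the classic Azure PowerShell module. A minimal sketch (service, VM and public IP names are assumptions):

# Pin the internal IP as a static allocation and add an instance-level public IP to the VM.
Get-AzureVM -ServiceName "veeamcc" -Name "veeamcc-vm" |
    Set-AzureStaticVNetIP -IPAddress "10.0.0.4" |
    Set-AzurePublicIP -PublicIPName "veeamcc-pip" |
    Update-AzureVM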

image

You should define them both, since if the VM goes down and changes IP address, Cloud Connect will not work properly.

After you are done with the IP addresses you can connect to the VM using RDP (this can be done from the main dashboard by choosing Connect).
Once inside, the Cloud Connect setup will start automatically.

image 

(And yes, you need a VCP license.) After the license is added it gives you a set of instructions on what to do next.

image

The first thing we need to do on the Azure side is to add a customer/user to allow them to authenticate and store content.

image

Add a username and password

image

Next, define what type of resources are available to this customer. Note that by default there is a repository on the local C:\ drive (this should be changed to a data disk repository, but by default the instance has no data disks).

image

Then you are done on the Azure part! (Note that the Azure provisioning generated a self-signed certificate, which will generate error messages when connecting from the on-premise/customer side, so this should be changed to a public certificate to avoid that issue.)

So now that we have set up everything on the virtual machine in Azure, we need to add the “service provider” gateway on our customer VM running Veeam V8.

image

image

Note that the DNS name can be found inside the dashboard of the virtual machine in Azure.

Next we need to add the username and password that will be used to authenticate against the provider, and note that since the Veeam VM in Azure by default uses a self-signed certificate, customers need to add the certificate thumbprint to verify the connection.

image
Next we see that the cloud repository we created is available after authenticating with the service provider. Note that it is also possible to use WAN accelerators between customers and Azure, but using a WAN accelerator requires more CPU and disk IO on the Azure side (therefore you should look at D-series Azure VMs, which have SSD disks).

image

Now that we have added the cloud repository we are good to go; we can just create a new backup copy job and point it to the cloud repository.

image

Workaround for Netscaler VPX and VMware ESXi 5.5 Build 2143827

This is a quick post, but Citrix has published a workaround for the trouble with Netscaler losing connectivity on VMware with the latest ESXi update.

You can find the workaround here –> http://support.citrix.com/article/CTX200278 

This is only until Citrix manages to fix the issue and includes it in a newer build of Netscaler.
