Tag archive: vmware

VMware Horizon 7 announced

Earlier today I saw on a couple of blog posts that VMware was going to announce Horizon 7 later today. When I read the posts I was blown away by the kind of features coming in this release.

So what’s included in the upcoming release?

  • Project Fargo (VMFork), which is in essence the ability to clone a running VM on the fly: just-in-time desktops. Doing master image updates is as simple as updating the parent virtual machine; a user will automatically get an updated desktop at next login. It is kind of like what we have with App Volumes and delivery of AppStacks, but taken to a whole new level. You can read more about it here –> http://www.yellow-bricks.com/2014/10/07/project-fargo-aka-vmfork-what-is-it/ It is important to remember that this is not like linked clones; the virtual machines are all running and are updated on the fly, so no Composer! But of course this is going to put more strain on the backend storage.
    Also important: this does not support NFS as of now.


  • New Horizon Clients version 4 (new clients for Windows, Mac, Linux, Android and iOS, with increased performance over WAN, etc.). This is also the required version if we want to use the new display protocol.
  • Updated Identity Manager (part of the stack; provides the authentication mechanism across the entire infrastructure using SAML)
  • Smart Policies (customization of desktops and user identity of a running session), application blocking, PCoIP policies and such
  • URL Content Redirection (allows a URL opened within a remote session to be redirected to a local browser running on the endpoint)
  • AMD Graphics support for vSGA
  • Intel vDGA support with Intel Xeon E3
  • Improved printing experience (Reducing bandwidth and increasing speed of printing)
  • Blast Extreme (a new remote display protocol optimized for mobile users), which apparently has much lower bandwidth requirements than PCoIP. It is also optimized for NVIDIA GRID. In terms of WAN performance, PCoIP has not been anywhere near what Citrix can deliver with ThinWire or Framehawk, so I believe it is a good call that VMware moves ahead with their own display protocol, which does more calibration on the fly.

It is going to be interesting to see how the new remote display protocol compares to PCoIP and the others on the market, and my guess is that Blast will do well since it is a lot more bandwidth friendly. It also looks like they are investing more into the different aspects of the protocol itself.

PCoIP & Blast Extreme: Feature Parity
Source: http://www.vladan.fr/vmware-horizon-7-details-announced

Some other new stuff which is part of the release is support for Horizon Air Hybrid Mode, which in essence moves the control plane into the cloud (similar to what Citrix is doing with their Workspace Cloud).

We can also look at the earlier announcements of App Volumes 3.0, which fits perfectly into this mix in terms of flexible application delivery. Of course this is not without compromising some features, but it looks like VMware is becoming a provider of a unified stack. I just hope that they can integrate some of the management components a bit more so it feels like an integrated stack.

But it seems like VMware has been quite busy with this release. This is also another complete story when combined with NSX and micro-segmentation in terms of delivering a secure desktop to any device. I just hope that the display protocol is as good as they say; I'll believe it when I see it :)

Sources:

http://vthoughtsofit.blogspot.no/
http://www.vladan.fr/vmware-horizon-7-details-announced/

Office365 on Terminal server done right

So this is a blog post based upon a session I had at the NIC conference, where I spoke about how to optimize the delivery of Office365 in a VDI/RDSH environment.

There is a lot of stuff we need to think and worry about. It might seem a bit negative, but that is not the intention, I am just being realistic :)

So this blog post will cover the following subjects:

  • Federation and sync
  • Installing and managing updates
  • Optimizing Office ProPlus for VDI/RDS
  • Office ProPlus optimal delivery
  • Shared Computer Support
  • Skype for Business
  • Outlook
  • OneDrive
  • Troubleshooting and general tips for tuning
  • Remote display protocols and when to use which

So what is the main issue with using Terminal Servers and Office365? The Distance….

This is the headline of a blog post on the Citrix blogs about XenApp best practices.


So how do we fix this when we have our clients on one side, the infrastructure somewhere else and Office365 in a different region, separated by long distances, while still trying to deliver the best experience for the end user? In some cases we need to compromise to be able to deliver the best user experience, because that should be our end goal: delivering the best user experience.


User Access

First off: do we need federation, or is plain password sync enough? Password sync is easy and simple to set up and does not require any extra infrastructure. We can also configure it to use password hash sync, which will allow Azure AD to do the authentication process. The problem with doing this is that we lose a lot of stuff which we might use in an on-premises solution:

  • Audit policies
  • Existing MFA (If we use Azure AD as authentication point we need to use Azure MFA)
  • Delegated Access via Intune
  • Lockdown and password changes (since a change needs to be synced to Azure AD before it takes effect)

NOTE: Since I am above average interested in NetScaler, I want to include a note here. For those that don't know, NetScaler with AAA can in essence replace ADFS, since NetScaler now supports SAML iDP. An important limitation to note is that NetScaler does not support the Single Logout profile or the Identity Provider Discovery profile from the SAML specification. We can also use NetScaler Unified Gateway with SSO to Office365 using SAML. The setup guide can be found here:

https://msandbu.wordpress.com/2015/04/01/netscaler-and-office365-saml-idp-setup/
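For illustration only, here is a rough CLI sketch of the SAML iDP piece. This is a sketch from memory, assuming NetScaler 11.x syntax; the certificate name, AAA vserver name and issuer URL are hypothetical placeholders, so follow the linked guide above for the authoritative steps.

add authentication samlIdPProfile o365_idp_prof -samlIdPCertName aaa_cert -assertionConsumerServiceURL "https://login.microsoftonline.com/login.srf" -samlIssuerName "https://aaa.example.com"
add authentication samlIdPPolicy o365_idp_pol -rule ns_true -action o365_idp_prof
bind authentication vserver aaa_vs -policy o365_idp_pol -priority 100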

NOTE: We can also use VMware Identity Manager as a replacement to deliver SSO.

Using ADFS gives a lot of advantages that password hash sync does not:

  • True SSO (while password hash sync gives same sign-on)
  • If we have audit policies in place
  • Disabled users get locked out immediately, instead of up to 3 hours of wait time until the Azure AD Connect sync engine starts replicating (and 5 minutes for password changes)
  • If we have on-premises two-factor authentication we can most likely integrate it with ADFS, but not if we only have password hash sync
  • Other security policies, like time-of-day restrictions and so on
  • Some licensing stuff requires federation

So to sum it up: please use federation.

Initial Office configuration setup

Secondly, the Office suite from Office365 uses something called Click-to-Run, which is kind of an App-V wrapped Office package from Microsoft that allows for easy updates directly from Microsoft instead of dabbling with the MSI installer.

In order to customize this installer we need to use the Office Deployment Tool, which basically allows us to customize the deployment using an XML file.

The deployment tool has three switches that we can use.

setup.exe /download configuration.xml

setup.exe /configure configuration.xml

setup.exe /packager configuration.xml

NOTE: Using /packager creates an App-V package of Office365 Click-to-Run and requires a clean VM, just like we use when sequencing App-V packages. The package can then be distributed using existing App-V infrastructure or other tools. But remember to enable scripting on the App-V client, and do not alter the package using the sequencing tool; that is not supported.
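For reference, a minimal sketch of that packager flow, run on the clean packaging VM (the output folder C:\O365AppV is just an example; the App-V package lands there):

setup.exe /packager configuration.xml C:\O365AppV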

The download switch downloads Office based upon the configuration file. Here we can specify bit edition, version number, which Office applications to include, update path and so on. The configuration XML file looks like this:

<Configuration>
  <Add OfficeClientEdition="64" Branch="Current">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us"/>
    </Product>
  </Add>
  <Updates Enabled="TRUE" Branch="Business" UpdatePath="\\server1\office365" TargetVersion="16.0.6366.2036"/>
  <Display Level="None" AcceptEULA="TRUE"/>
</Configuration>

Now if you are like me and don't remember all the different XML parameters, you can use this site to customize your own XML file –> http://officedev.github.io/Office-IT-Pro-Deployment-Scripts/XmlEditor.html

When you are done configuring the XML file you can choose the export button to have the XML file downloaded.

If we have specified a specific Office version as part of the configuration.xml, it will be downloaded to a separate folder and stored locally when we run setup.exe /download configuration.xml.

NOTE: The different build numbers are available here –> http://support2.microsoft.com/gp/office-2013-365-update?

When we are done with the download of the Click-to-Run installer, we can change the configuration file to reflect the path of the Office download:

<Configuration> <Add SourcePath="\\share\office" OfficeClientEdition="32" Branch="Business">

Then we run the setup.exe /configure configuration.xml step to install from that path.

Deployment of Office

The main deployment is done by running setup.exe /configure configuration.xml on the RDSH host. After the installation is complete, we need to take care of licensing.
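To recap the two-step flow before moving on: first, on a machine with internet access, stage the bits (the SourcePath in the XML decides where they land):

setup.exe /download configuration.xml

Then, on each RDSH host, point at the same XML so it installs from the staged source:

setup.exe /configure configuration.xml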

Shared Computer Support

In the configuration file we need to remember to enable shared computer licensing, or else we get an activation error:

<Display Level="None" AcceptEULA="True" />
<Property Name="SharedComputerLicensing" Value="1" />


If you forgot, you can also enable it using this registry key (just store it as a .reg file):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\15.0\ClickToRun\Configuration]
"InstallationPath"="C:\\Program Files\\Microsoft Office 15"
"SharedComputerLicensing"="1"

Now we are actually done with the golden image setup. Don't start any of the applications yet if you want to use it for an image. Also make sure that there are no licenses installed on the host, which can be verified using this tool:

cd 'C:\Program Files (x86)\Microsoft Office\Office15'
cscript.exe .\OSPP.VBS /dstatus


The license list in the output should be blank!
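If /dstatus does list a license on the image, the same script can remove it. A small sketch; the XXXXX placeholder stands for the last five characters of the installed product key, which /dstatus prints:

cscript.exe .\OSPP.VBS /unpkey:XXXXX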

Another issue is that when a user starts an Office app for the first time, he/she needs to authenticate once; a token is then stored locally in the %localappdata%\Microsoft\Office\15.0\Licensing folder, and it will expire within a couple of days if the user is not active on the terminal server. Think about it: if we have a large farm with many servers, that might well be the case, and if a user is redirected to another server he/she will need to authenticate again. If the user keeps hitting the same server, the token will automatically refresh.
NOTE: This requires Internet access to work.

And it is important to remember that the Shared Computer Support token is bound to the machine, so we cannot roam that token between computers using any profile management tool.

But a nice thing is that if we have ADFS set up, Office will activate automatically against Office365 (this is enabled by default). So no pesky logon screens.

We just need to add the ADFS domain to the trusted sites in Internet Explorer and define this setting as well:

Automatic logon only in Intranet Zone


This basically resolves the token issue with Shared Computer Support :)
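If you would rather push the trusted-site entry than click through the IE GUI, here is a minimal .reg sketch, assuming a hypothetical ADFS host adfs.example.com (the dword value 1 maps the host into the Local intranet zone, which is where automatic logon applies):

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\example.com\adfs]
"https"=dword:00000001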

Optimizing Skype for Business

So in regards to Skype for Business, what options do we have to deliver a good user experience? There are a few options that I want to explore:

  • VDI plugin
  • Native RDP with UDP
  • Native PCoIP
  • Native ICA (with or without audio over UDP)
  • Local App Access
  • HDX Optimization Pack 2.0

Now the issue with the first one (which is a Microsoft plugin) is that it does not support Office365; it requires on-premises Lync/Skype. Another issue is that you cannot use the VDI plugin and the Optimization Pack at the same time, so if users are on the VDI plugin and you want to switch to the Optimization Pack, you need to remove the VDI plugin first.

ICA uses TCP and works with most endpoints; since everything basically runs directly on the server/VDI, the issue here is that we get no server offloading. So if we have 100 users running a video conference we might have an issue :) If the other options are not available, try to set up HDX RealTime using audio over UDP for better audio performance. Both RDP and PCoIP use UDP for audio/video and therefore do not require any other specific customization.

But the problem with all of these is that they create a tromboning effect, consume more bandwidth and eat up the resources on the session host.


Local App Access from Citrix might be a viable option, which in essence means that a local application is dragged into the Receiver session, but this requires that the end user has Lync/Skype installed locally. It also requires Platinum licenses, so not everyone has that, plus it only supports Windows endpoints…

The last and most important piece is the HDX Optimization Pack, which allows for server offloading using the HDX media engine on the end-user device.

The Optimization Pack supports Office365 with federated users and cloud-only users. It also supports the latest clients (Skype for Business) and can work in conjunction with NetScaler Gateway and Lync Edge servers for on-premises deployments. This means that we can get Mac/Linux/Windows users using server offloading, and with the latest release it also supports Office Click-to-Run and works with the native Skype UI.

So using this feature we can offload CPU/memory (and eventually GPU) usage from the RDSH/VDI instances back to the client, and the audio/video traffic goes directly to the endpoint instead of through the remote session.


I ran a simple test showing the difference between running Skype for Business on a terminal server with and without HDX Optimization Pack 2.0.


Here is a complete blog post on setting up HDX Optimization Pack 2.0 –> https://msandbu.wordpress.com/2016/01/02/citrix-hdx-optimization-pack-2-0/

Next up we have Outlook, which for many is quite the headache… and that is mostly because of the OST files that are dropped in the %localappdata% folder for each user. Office ProPlus has a setting called fast access, which means that Outlook will in most cases try to contact Office365 directly, but if the latency becomes too high, the connection will drop and Outlook will search through the OST files instead.

Optimizing Outlook

Now this is the big elephant in the room and causes the most headaches. Outlook against Office365 can be set up in two modes: cached mode or online mode. Online mode uses direct access to Office365, but users lose features like instant search. In order to deliver a good user experience we need to compromise. The general guideline here is to configure cached mode with 3 months of email, and to store the OST file (which contains the emails, calendar etc., and is typically 60-80% of the size of the mailbox) on a network share. These OST files are by default created in the local appdata profile, and streaming profile management solutions typically aren't a good fit for the OST file.

It is important to note that Microsoft supports having OST files on a network share, IF there is adequate bandwidth and low latency, and only if there is one OST file per user and the users have Outlook 2010 SP1 or newer.

NOTE: We can also use alternatives such as FSLogix or Unidesk to handle profile management in a better way.

I'll come back to the configuration part later in the policy bits. It is also important to use Outlook 2013 SP1 or later, which gives us MAPI over HTTP instead of RPC over HTTP; MAPI over HTTP does not consume as much bandwidth.

OneDrive

In regards to OneDrive, try to exclude it from RDSH/VDI instances, since the sync engine basically doesn't work very well there, and now that each user has 1 TB of storage space it will flood the storage quicker than anything else if users are allowed to use it. Also, there are no central management capabilities, and network shares are not supported.

There are some changes in the upcoming unified client in terms of deployment and management, but it is still not a good solution.

You can remove it from the Office365 deployment by adding this in the configuration file:

<ExcludeApp ID="Groove" />
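Note that ExcludeApp lives inside the Product element. A minimal sketch of where it goes in the configuration file (the source path and edition here are just examples):

<Add SourcePath="\\share\office" OfficeClientEdition="32" Branch="Business">
  <Product ID="O365ProPlusRetail">
    <Language ID="en-us"/>
    <ExcludeApp ID="Groove"/>
  </Product>
</Add>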

Optimization and group policy tuning

Now something that should be noted is that before installing Office365 Click-to-Run you should optimize the RDSH session hosts or the VDI instances. A blog post published by Citrix noted a 20% performance improvement after some simple RDSH optimizations were done.

Both VMware and Citrix have free tools for RDSH/VDI optimization, which should be looked at before doing anything else.

Now the rest is mostly Group Policy tuning. First we need to download the ADMX templates from Microsoft (either 2013 or 2016), then we need to add them to the central store.

We can then use Group Policy to manage the specific applications and how they behave. Another thing to think about is using the Target Version group policy to control which specific build we want to be on, so we don't get a new build each time Microsoft rolls out a new version, because from experience I can tell that some new builds include new bugs –> https://msandbu.wordpress.com/2015/03/09/trouble-with-office365-shared-computer-support-on-february-and-december-builds/


Now the most important policies are stored in the computer configuration.

Computer Configuration –> Policies –> Administrative Templates –> Microsoft Office 2013 –> Updates

Here there are a few settings we should change to manage updates.

  • Enable Automatic Updates
  • Enable Automatic Upgrades
  • Hide Option to enable or disable updates
  • Update Path
  • Update Deadline
  • Target Version

These control how we do updates. We can enable automatic updates without an update path and a target version, which will essentially make Office auto-update to the latest version from Microsoft. Or we can specify an update path (a network share where we have downloaded a specific version), specify a target version, enable automatic updates and define a deadline, for a specific OU for instance. This will trigger an update using a built-in task scheduler which is added with Office; when the deadline approaches, Office has built-in triggers to notify end users of the deployment. So using these policies we can have multiple deployments for specific users/computers, some on the latest version and some on a specific version.
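If you would rather script these settings than wait for Group Policy, they boil down to a handful of registry values. A hedged .reg sketch, assuming Office 2013 (the 15.0 hive); the share and the target version are example placeholders, so match them to your own download:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Office\15.0\Common\OfficeUpdate]
"EnableAutomaticUpdates"=dword:00000001
"HideEnableDisableUpdates"=dword:00000001
"UpdatePath"="\\\\server1\\office365"
"UpdateTargetVersion"="15.0.4787.1002"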

The next thing is for Remote Desktop Services only: if we are using pure RDS, make sure that we have an optimized setup. NOTE: Do not touch these if everything is working as intended.

Computer Policies –> Administrative Templates –> Windows Components –> Remote Desktop Services –> Remote Desktop Session Host –> Remote Session Environment

  • Limit maximum color depth (set to 16 bits; less data across the wire)
  • Configure compression for RemoteFX data (set to bandwidth optimized)
  • Configure RemoteFX Adaptive Graphics (set to bandwidth optimized)

Next there are more Office-specific policies to make sure that we disable all the stuff we don't need.

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> Miscellaneous

  • Do not use hardware graphics acceleration
  • Disable Office animations
  • Disable Office backgrounds
  • Disable the Office start screen
  • Suppress the recommended settings dialog

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> Global Options –> Customize

  • Menu animations (disabled!)

Next is under

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> First Run

  • Disable First Run Movie
  • Disable Office First Run Movie on application boot

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> Subscription Activation

  • Automatically activate Office with federated organization credentials

Last but not least, define cached mode for Outlook:

User Configuration –> Administrative Templates –> Microsoft Outlook 2013 –> Account Settings –> Exchange –> Cached Exchange Mode

  • Cached Exchange Mode (File | Cached Exchange Mode)
  • Cached Exchange Mode Sync Settings (3 months)

Then specify the location of the OST files, which of course should be somewhere else:

User Configuration –> Administrative Templates –> Microsoft Outlook 2013 –> Miscellaneous –> PST Settings

  • Default location for OST files (change this to a network share)
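These two Outlook policies also boil down to a couple of registry values. A hedged .reg sketch, assuming Outlook 2013 (the 15.0 hive) and a hypothetical share; note that ForceOSTPath is normally written as an expandable string so %username% resolves per user, it is shown as a plain string here for readability:

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\15.0\Outlook\Cached Mode]
"SyncWindowSetting"=dword:00000003

[HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\15.0\Outlook]
"ForceOSTPath"="\\\\fileserver\\ost$\\%username%"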

Network and bandwidth tips

Something that you need to be aware of is the bandwidth usage of Office in a terminal server environment.

Average latency to Office365 is 50-70 ms.

• 2000 "heavy" users using online mode in Outlook: about 20 Mbps at peak

• 2000 "heavy" users using cached mode in Outlook: about 10 Mbps at peak

• 2000 "heavy" users using audio calls in Lync: about 110 Mbps at peak

• 2000 "heavy" users working in Office using RDP: about 180 Mbps at peak

Which means that using for instance the HDX Optimization Pack for 2000 users might "remove" 110 Mbps of bandwidth usage.

Microsoft also has an application called the Office365 Client Analyzer, which can give us a baseline to see how our network performs against Office365, measuring things like DNS and latency to Office365. And DNS is quite important with Office365, because Microsoft uses proximity-based load balancing: if your DNS server is located somewhere else than your clients, you might be sent in the wrong direction. The Client Analyzer can give you that information.
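A quick manual sanity check of the proximity part, using nothing but plain DNS tooling (this is separate from the analyzer): resolve the Outlook endpoint from the same DNS servers your clients use, and compare it with what a resolver closer to your users returns:

nslookup outlook.office365.com

If the two answers point to very different address ranges, your users may be getting routed to a distant Office365 front door.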


(We could however buy ExpressRoute from Microsoft, which would give us low-latency connections directly to their datacenters, but this is only suitable for LARGER enterprises, since it costs HIGH amounts of $$.)


But this is for the larger enterprises, since it also lets them overcome a basic limitation of the TCP stack: one external NAT address supports only about 4,000 concurrent connections. Given that Outlook consumes about 4 concurrent connections (and Lync some as well), that is roughly 4,000 / 4 = 1,000 Outlook users behind a single NAT address.

Microsoft recommends that in an online scenario the clients have no more than 110 ms latency to Office365, and in my case I have about 60-70 ms. If we combine that with some packet loss or an adjusted MTU, well, you get the picture :)

Using Outlook online mode we should have a maximum latency of 110 ms; above that, the user experience declines. Another thing is that online mode disables instant search. We can use the Exchange traffic Excel calculator from Microsoft to calculate the bandwidth requirements.

Some rules of thumb: do some calculations! Use the bandwidth calculators for Lync/Exchange, which might point you in the right direction. We can also use WAN accelerators (with caching), which might lighten the burden on the bandwidth usage. You also need to think about the bandwidth usage if you have automatic updates enabled in your environment.

Troubleshooting tips

As the last part of this LOOONG post I have some general tips on using Office in a virtual environment. This is just going to be a long list of different tips:

  • For Hyper-V deployments, check VMQ and the latest NIC drivers
  • 32-bit Office C2R typically works better than 64-bit
  • Antivirus? Make exceptions!
  • Remove Office products that you don't need from the configuration, since they add extra traffic when downloading and more stuff to the virtual machines
  • If you don't use Lync and the audio service, disable the audio service!
  • If using RDSH, check the Group Policy settings I recommended above
  • If using Citrix or VMware, make sure to tune the policies for an optimal experience, and use the RDSH/VDI optimization tools from the different vendors
  • If Outlook is sluggish, check that you have adequate storage I/O to the network share (no, high bandwidth is not enough if the OST files are stored on a simple RAID with 10k disks)
  • If all else fails with Outlook, disable MAPI over HTTP. In some cases where getting new mail takes a long time, disabling this helps; it used to be a known error.

Remote display protocols

Last but not least I want to mention this briefly. If you are setting up a new solution and thinking about choosing one vendor over the other, the first things to consider are:

  • Endpoint requirements (Thin clients, Windows, Mac, Linux)
  • Requirements in terms of GPU, mobile workers, etc.

Now, we have done some tests which show that Citrix has the best features across the different sub-protocols:

  • ThinWire (best across high-latency lines; using TCP it works at over 1800 ms of latency)
  • Framehawk (works well on lines with 20% packet loss)

PCoIP performs a bit better than RDP; I have another blog post on the subject here –> https://msandbu.wordpress.com/2015/11/06/putting-thinwire-and-framehawk-to-the-test/

ICA vs PCOIP

First off, let me start by stating that the title of this blog post is purely there to get more viewers… But there is some truth to it: over the last weeks there has been a lot of talk about RDP / ICA / PCoIP and whether the protocol wars are over.

There are multiple articles on the subject, but this one started the idea –> http://www.brianmadden.com/blogs/guestbloggers/archive/2015/11/25/are-the-display-protocol-wars-finally-over.aspx

And here as well –> https://twitter.com/michelroth/status/670288837730541568

Me and a good friend of mine, @Mikael_modin, have done a lot of testing on Framehawk vs ThinWire (and with RDP in the mix): https://msandbu.wordpress.com/2015/11/06/putting-thinwire-and-framehawk-to-the-test/

There we did a test based upon different packet loss parameters and measured the results using uberAgent for Splunk and NetBalancer to see how each protocol performed. The test consisted of about 5 minutes per run: 1 minute of idle workload, 1 minute of web browsing using Chrome on a newspaper site, 1 minute of PDF scrolling and zooming, 1 minute of typing in Word, and the Avengers YouTube trailer. The tests were conducted on the same virtual infrastructure with the same amount of resources available to the guest VM (Windows 10), no firewall-related issues, and just one connection server with a VDI instance. So this is purely a test of resource usage and bandwidth and how the protocol adapts to network changes; there are of course other factors which affect performance one way or the other.

Another thing: I am by no means an expert on View, so if someone disagrees with the data or I have stated something wrong, please let me know.

So I figured it was about time to put PCoIP to the test as well. Now I know that View has different protocol options (HTML5/Blast and a dedicated one for GPU), but I am testing the native PCoIP protocol, with no changes besides the default setup.

For those who don't know, PCoIP uses TCP & UDP port 4172, where TCP is used for the session handshake and UDP is used for transport of the session data. Now the issue with UDP is that it is hard to control the traffic flow.
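If there is a firewall between the clients and the View infrastructure, both halves of the protocol need to be open. A minimal sketch using the built-in Windows firewall CLI (the rule names are arbitrary):

netsh advfirewall firewall add rule name="PCoIP TCP 4172" dir=in action=allow protocol=TCP localport=4172
netsh advfirewall firewall add rule name="PCoIP UDP 4172" dir=in action=allow protocol=UDP localport=4172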


PCoIP is a "quite" chatty protocol, which means a better experience (if the line can handle it), so it will be interesting to see how it handles congestion. So, from the initial test (with no limits whatsoever):


It consumed about 168 MB of bandwidth in total, with a peak of 933 KB/s, which occurred mostly during the YouTube part in Chrome.

The View agent only used about 7% CPU on average during the test.


The maximum CPU usage at one point was about 23%, which was during the YouTube testing.


It is not a heavy user of RAM either.


During our earlier test using Framehawk and ThinWire on the same workload, we could see that Framehawk used about 224 MB of bandwidth with a peak of 1.2 MB/s, and oddly enough it was the PDF scrolling and zooming that generated the most bandwidth.


On a side note, Framehawk delivered the best experience when it came to the PDF part; it was lightning fast! ThinWire on the other hand used only 47 MB of bandwidth, most of it during the YouTube part. ThinWire used about the same amount of CPU.


Now, as part of the same test we also turned up the packet loss to a degree that reflects a real-life scenario. At 5% packet loss I saw a lot of changes.

Now PCoIP only used about 38 MB of bandwidth, looking kind of similar to the ThinWire usage… But this was quite noticeable from an end-user perspective. I am not quite sure if there is a built-in mechanism to handle QoS under packet loss.


When we did the same test with ThinWire and Framehawk we got the following results: ThinWire used 11 MB of bandwidth.


Framehawk used about 300 MB of bandwidth. I'm guessing it got its gears going when it noticed the packet loss and tried to compensate by maxing out the bandwidth I had available.


So in terms of packet loss, Framehawk handles it a lot better than PCoIP. ICA, which uses TCP, still manages to give a decent user experience, but because of the TCP congestion algorithms it is not really as usable. Now, since there was packet loss, and hence less bandwidth to transmit, the CPU had less to do.


With 10% packet loss we could see a further decrease in bandwidth usage, which means that it had a hard time keeping up with what I wanted to do. It was now down to 27 MB of bandwidth usage, it struggled during the PDF part, and browsing wasn't really good.


So, as a first quick summarization:

  • The View agent is "lighter", meaning that it uses less CPU and memory on each host.
  • It is a chatty protocol, which I'm guessing works well in a highly congested network; ICA is also chatty, but since it uses TCP it can adapt to the congestion.
  • The plus side is that since there is a steady flow of packets, it delivers a good user experience.
  • It cannot handle packet loss as well as Framehawk. It was better than ThinWire at packet loss, but ThinWire was never aimed at lossy networks.

Conclusion: Well, I'm not going to post any conclusions related to this post, since in some social media circles…

Well, let's just say that you can draw your own conclusions from this blog post, and I'll just end it with the picture of these two cars and let you point out which is which.

New job! Systems Engineer at Exclusive Networks (BigTec)

So I have been on a job hunt for some time now, and I'm quite picky about which job to take, both because of a lot of personal stuff happening which has put a lot of strain on me, and because moving two hours away from Oslo to the middle of nowhere in Norway makes things much more difficult from a job perspective.

Even so, I have now started at Exclusive Networks (BigTec) as a Systems Engineer.

So what will I be doing there? BigTec, which is the area I will be focusing on, is a part of Exclusive Networks, a value-add distributor focusing on datacenter change.

From a technical perspective I will be focusing on the different vendors which are part of the BigTec portfolio, such as Nutanix, vArmour, VMTurbo, SilverPeak and Arista.


So this is not my regular bread and butter… I have been focusing on Microsoft-related technology for like forever, but for my part it will be a good thing to expand my horizon to new products and other aspects of IT (and this is most likely going to affect my blog posts going forward as well, you have been warned!), moving more towards pure datacenter-related technologies and security as well.

If you want to know more about what we are doing, head on over to our website http://bit.ly/1PtizYx

Azure Site Recovery Preview setup for VMware

So a couple of days ago Microsoft announced the preview of Site Recovery for physical and VMware servers, and luckily I was able to get access to the preview pretty early. For those who don't know, this Site Recovery feature is built upon the InMage Scout suite that Microsoft purchased a while back. About 6 months ago Microsoft announced the Migration Accelerator, which was the first Microsoft branding of InMage, but now they have built it into the Azure portal; the architecture is still the same. So this blog post will explain how the different components operate, how it all works and how to set it up.

Now, there are three different components for on-premises to Azure replication of virtual machines:

* Configuration Server (which in this case is an Azure VM used for centralized management)

* Master Target (used as a repository and for retention; receives the replicated data)

* Process Server (the on-premises server which actually does the data moving: it caches, compresses and encrypts the data using a passphrase we create, and moves it to the Master Target, which in turn writes it to Azure)

When connecting this to an on-premises site, the Process Server will push-install the InMage agent on every virtual machine that we want to protect. The InMage agent will then do a VSS snapshot and move the data to the Process Server, which will in turn replicate the data to the Master Target.

So when you get access to the preview, create a new Site Recovery vault.


In the dashboard you now have the option to choose between an on-premises site with VMware and physical computers to Azure.


First we have to deploy the Configuration Server, which is the management plane in Azure. If we click Deploy Configuration Server, a wizard will appear which uses a custom image to deploy the Configuration Server.


This will automatically create an A3 instance running a custom image (note: it might take some time before it appears in the virtual machine pane in Azure). You can check the status in the Jobs pane of the recovery vault.


When it is done you can go into the virtual machine pane and connect to the Configuration Server using RDP. Once in the virtual machine, run the setup which is located on the desktop.


Setting up the Configuration Server component requires the vault registration key (which is downloadable from the Site Recovery dashboard).


Note: when the Configuration Server component is finished installing, it will present you with a passphrase. COPY IT!! You will use it to connect the other components.


When this is done, the server should appear in Site Recovery under Servers as a Configuration Server.


Next we need to deploy a Master Target server. This will also be deployed in Azure (and will be an A4 machine with a lot of disk capacity).


(The virtual machine will have an R: drive where it stores retention data; it is about 1 TB large.)

The same goes here: it will generate a virtual machine which will eventually appear in the virtual machine pane in Azure. When it is done, connect to it using RDP; it will start a pre-setup which generates a certificate that allows the Process Server to connect to it using HTTPS.


When running the wizard, it will ask for the IP address (internal, on the same vNet) of the Configuration Server and the passphrase. In my case I had the Configuration Server on 10.0.0.10 and the Master Target server on 10.0.0.15. After the Master Target server is finished deploying, take note of the VIP and the endpoints which are attached to it.


Now that we are done with the Azure parts, we need to install a Process Server. Download the bits from the Azure dashboard and install them on a Windows Server which has access to vCenter.


Enter the VIP of the cloud service and don't change the port. We also need to enter the passphrase which was generated on the Configuration Server.

After the installation is complete, it will ask you to download the VMware vSphere CLI binaries from VMware.


Now, this is for 5.1 (but I tested it against a vSphere 5.5 vCenter and it worked fine); the only thing the CLI binaries are used for is discovering virtual machines in vCenter. The rest of the job is done by agents on the virtual machines.

Now that we are done with the separate components, they should appear in the Azure portal. Go into the recovery vault, Servers –> Configuration Servers, then click on the server and Properties.


Now we should see that the different servers are working.

Next we need to add a vCenter server from the Servers dashboard.


Add the credentials and IP address, and choose which Process Server is to be used to connect to the on-premises vCenter server.

After that is done and the vCenter server appears under Servers as connected, you can create a protection group (and then add virtual machines to it).


Specify the thresholds and retention time for the virtual machines that are going to be in the protection group.


Next we need to add virtual machines to the group.


Then choose from vCenter which virtual machines you want to protect.


Then you need to specify which resources are going to be used to replicate the target VM to Azure.


And of course administrator credentials to remote-push the InMage mobility agent to the VMs.


After that, the replication will begin.


And you can see on the virtual machine that the InMage agent is being installed.


Note that the replication might take some time, depending on the bandwidth available.

General-purpose Windows Storage Spaces server

So after someone's request I decided to write a blog post about this :) We needed a new storage server in our lab environment. Now, we could have bought an all-purpose SAN or NAS, but we decided to use regular Windows Server features with Storage Spaces. Why? Because we needed something that supported our protocol needs (iSCSI, SMB3 and NFS 4), Microsoft is putting a lot of effort into Storage Spaces, and with the features coming in vNext it becomes even more awesome!

So we specced a Dell R730 with a lot of SAS disks and set up Storage Spaces with mirroring/striping, so we had 4 disks for each pool and a 10 Gb NIC for each resource.

After we set up each storage pool, we created the virtual disks: one intended for iSCSI (VMware), one intended for NFS (XenServer), and lastly a two-disk mirror set up for SMB 3.0, which, since this is a lab environment, was mainly for hosting virtual machines.

Everything works like a charm. One part that was a bit cumbersome was the NFS setup for XenServer, since it requires access by UID/GID.


The performance is what you would expect from a two-way mirror/stripe set on SAS 10k drives (column size set to 2 and an interleave of 64 KB).
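For reference, a minimal PowerShell sketch of how one of these mirrored two-column virtual disks could be created. The pool and disk names are examples, and it assumes a single local storage subsystem with at least four poolable SAS disks:

# pick four disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true | Select-Object -First 4

# create the pool on the local Storage Spaces subsystem
New-StoragePool -FriendlyName "Pool-iSCSI" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks

# two-way mirror with two columns and 64 KB interleave, using all available space
New-VirtualDisk -StoragePoolFriendlyName "Pool-iSCSI" -FriendlyName "vd-iSCSI" -ResiliencySettingName Mirror -NumberOfColumns 2 -Interleave 64KB -UseMaximumSize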


Since we don't have any SSDs in our setup, we don't get the benefit of tiering and therefore have higher latency, since we don't have a storage controller cache and so on.

For VMware we just set up PernixData FVP in front of our virtual machines running on ESXi; that gives us the performance benefit, while the SAS drives still provide the capacity.

Now that's a hybrid approach :)

Upcoming events and stuff

There's a lot happening lately, and therefore it has been a bit quiet here on this blog. But here is a quick update on what's happening!

I just recently got confirmation that I am presenting at the NIC conference in February (which is the largest IT event for IT pros in Scandinavia, nicconf.com), where I will be presenting 2 (maybe 3) sessions:

* Setting up and deploying Microsoft Azure RemoteApp
* Delivering high-end graphics using Citrix, Microsoft and VMware

One session will be primarily focused on Microsoft Azure RemoteApp, where I will show how to set up RemoteApp in both cloud and hybrid deployments and talk a little bit about its use cases. The second session will focus on delivering high-end graphics and 3D applications using RemoteFX (on vNext Windows Server), HDX and PCoIP, with talks and demos on how they work, pros and cons, VDI or RDS, and endpoints. My main objective is to talk about how to deliver applications and desktops from cloud and on-premises…

And on the other end, I have just signed a contract with Packt Publishing to write another book on NetScaler, "Mastering NetScaler VPX", which will be kind of a follow-up to my existing book: http://www.amazon.co.uk/Implementing-Netscaler-Vpx-Marius-Sandbu/dp/178217267X/ref=sr_1_1?ie=UTF8&qid=1417546291&sr=8-1&keywords=netscaler

It will go more in depth on the different subjects and also cover the 10.5 features.

I am also involved with a community project I started, which is a free eBook about Microsoft Azure IaaS, where I have some very skilled Norwegians with me writing on the subject. It takes some time, since Microsoft is always adding new content which then needs to be added to the eBook as well.

So a lot is happening! More blog posts coming around Azure and CloudBridge.