Azure Stack networking overview and use of BGP and VXLAN

After dabbling with Azure Stack for some time since the preview, one thing has been bugging me: the networking flow. So I decided to create an overview of the network topology, how things are connected, and how traffic flows.

It is important to remember that Azure Stack uses much of the networking feature set in Windows Server 2016, including SLB, BGP, VXLAN and so on.
Most of the management machines in the Azure Stack POC are placed on the vEthernet 1001 connection on the Hyper-V host and are connected to the vSwitch CCI_External.
The management machines are located on their own scopes.
With this updated chart, we can see that each tenant has its own /32 BGP route, which is attached to the MuxVM acting as a load balancer.


When traffic leaves the client IP, it is encapsulated using VXLAN (which runs over UDP) and sent to the MuxVM via its provider address, which is part of the PAhostVNic. From the MuxVM the traffic is forwarded to the BGPVM, then out through the NatVM and out to the world.


On the other hand we have the NATVM and the CLIENTVM, which are placed on the 192.168.200 scope. This network can communicate via the BGPVM, which has a two-armed configuration and acts as the gateway between the 192.168.100 network and the 192.168.200 network. Now the funny thing is that the NATVM just acts as a gateway for the external network in; it has RRAS installed, and since it is directly connected to both networks it allows access from the outside. The BGPVM also has RRAS installed, but we cannot see that using the RRAS console; we need to look at it in PowerShell. And, as stated, the BGPVM has a BGP peering set up with the MuxVM. The MuxVM acts as a load balancer for the BGPVM, using BGP to advertise the VIP to the router as a /32 route.

So for instance, if we open a connection from the ClientVM to Portal.AzureStack.local, the traffic flow will go like this:

ClientVM –> NATVM –> BGPVM –> (BGP ROUTE PEER) –> MuxVM –> PortalVM

Now remember that the configuration of BGP, the load balancer and the host is done by the network controller.
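To illustrate, once the network controller has pushed the configuration you can inspect the resulting BGP state on the BGPVM with the RemoteAccess PowerShell cmdlets (a sketch; the exact router and peer setup in the POC may differ):

```powershell
# Show the local BGP router configuration (run on the BGPVM)
Get-BgpRouter

# List the configured BGP peers; the MuxVM peering should show up here
Get-BgpPeer

# Show the routes learned and advertised over BGP, including the /32 VIP routes
Get-BgpRouteInformation
```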

SLB infrastructure
For a virtual switch to be compatible with SLB, you must use Hyper-V Virtual Switch Manager or Windows PowerShell to create the switch, and the Azure Virtual Filtering Platform (VFP) extension must be enabled on the virtual switch.
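Creating such a switch with PowerShell could look like the sketch below; the switch and adapter names are placeholders, not the actual POC values:

```powershell
# Create an external vSwitch bound to a physical NIC, shared with the management OS
New-VMSwitch -Name "CCI_External" -NetAdapterName "Ethernet 1" -AllowManagementOS $true

# List the extensions on the switch; the VFP extension must be enabled for SLB to work
Get-VMSwitchExtension -VMSwitchName "CCI_External"
```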

So for those that are looking into Windows Server 2016, look into the networking stack of 2016; it's bloody HUGE!

#azure, #azure-stack

What is Microsoft doing with RDS and GPU in 2016? And what are VMware and Citrix doing?

So this post was initially labeled Server 2016, but then I forgot an important part of it, which I'll come back to later.

This year, Microsoft is most likely releasing Windows Server 2016 and with it a huge number of new features like Containers, Nano, SDN and so on.

But what about RDS? Well, Microsoft is actually doing a bunch there:

  • RemoteFX vGPU support for GEN2 virtual machines
  • RemoteFX vGPU support for RDS server
  • RemoteFX vGPU with OpenGL support
  • Personal Session Desktops (allows for an RDSH host per user)
  • AVC 444 mode
  • Enhancements to the RDP 10 protocol (less bandwidth consuming)
  • Clientless experience (HTML 5 support is now in tech preview for Azure RemoteApp, and will also most likely be ported to on-premises solutions as well)
  • Discrete Device Assignment (which in essence is GPU passthrough)

So there is a lot happening in terms of GPU enhancements, performance increases in the protocol, and of course hardware offloading for the encoder.

Another important piece is the support coming to Azure with the N-series, which is DDA (GPU passthrough) in Azure. This will allow us to set up a virtual machine with dedicated GPU graphics running at a per-hour price when we need it, and in some cases it can be configured with an RDMA backbone where we need high compute capacity for deep learning. The N-series will be powered by NVIDIA K80 & M60 cards.

So is RDS still the way to go for a full-scale deployment? It can be. RDS has gone from a dark place to become a good enough solution (even though it has its limitations) and the protocol itself has gotten a lot better (even though I miss a lot of tuning capabilities for the protocol itself).

Now VMware and Citrix are also doing their things, with a lot of heavy hitting on both sides, and this again gives us a lot of new features, since both companies are investing a lot in their EUC stacks.

The interesting part is that Citrix is not putting all their eggs in the same basket, now adding support for Azure as well (on top of existing support for ESXi, Amazon, Hyper-V and so on). This means that when Microsoft releases the N-series, Citrix can easily integrate with it to deliver GPU using their own stack, which has a lot of advantages over RDS. Horizon with GPU usage, by contrast, is limited to running on ESXi.

VMware, on the other hand, is focusing on a deep partnership with Nvidia and moving ahead with Horizon Air Hybrid (which will be kind of a Citrix Workspace Cloud setup), and VMware is also doing A LOT on their stack:

  • AppVolumes
  • JIT desktops
  • User Environment Manager

Now 2016 is going to be an interesting year, seeing how these companies evolve and how they drive their partners moving forward.

#azure, #citrix, #hyper-v, #microsoft, #nvidia, #vmware

Getting started with Web based server management tools in Azure

Yesterday, Microsoft released a public preview of some tools that Jeffrey Snover showed off at Microsoft Ignite last year, which in essence is basically Server Manager from within the Azure portal.

These tools are aimed, in their first release, at managing Windows Server 2016 servers; they can manage both Azure virtual machines and machines on-prem. Some of the capabilities:

  • View and change system configuration
  • View performance across various resources and manage processes and services
  • Manage devices attached to the server
  • View event logs
  • View the list of installed roles and features
  • Use a PowerShell console to manage and automate


So what we do is deploy a Server Management Gateway, which we use to manage our virtual machines (remember that the gateway needs to have an internet connection).

NOTE: If you want to deploy the gateway feature on a 2012 server you need to have WMF 5 installed, which you can fetch from the Microsoft Download Center (WMF 5.0).
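A quick way to check which PowerShell/WMF version a server is running before installing the gateway:

```powershell
# WMF 5 ships with PowerShell engine version 5.0
$PSVersionTable.PSVersion
```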

So when we want to deploy –> Go into Azure –> New –> Server Management Tools –> Marketplace image

Then we need to define the machine we want to connect to (internal addresses, IPv4, IPv6 and FQDN are all valid).
For the first run we need to create a gateway as well. If we want to add multiple servers to manage, we run this wizard again but choose an existing gateway instead.

After we have created the instance we need to download the gateway binaries and install them in our environment.


Then run the downloaded installer from within the environment. Also important: if we want to manage non-domain-joined machines, we need to run a few commands to add trusted hosts and such, for example:

winrm set winrm/config/client @{ TrustedHosts="<server address>" }

REG ADD HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1

NETSH advfirewall firewall add rule name="WinRM 5985" protocol=TCP dir=in localport=5985 action=allow (if you want to specify firewall rules)

After the firewall rules are in place, we need to specify credentials.


After that is done we can now manage the machine from within Azure.


#azure, #server-manager, #web-mased-server-management

Windows Azure Stack–What about the infrastructure Story?

There is no denying that Microsoft Azure is a success story, going from the lame Silverlight portal with limited capabilities it once was to becoming a global force to be reckoned with in the cloud marketplace.

Later today Microsoft is releasing the first tech preview of Azure Stack, which allows us to bring the power of the Azure platform to our own datacenters. It brings the same consistent UI and feature set as Azure Resource Manager, which allows us to use the same tools and resources we have used in Azure against our own local cloud.

This will of course allow large customers and hosting providers to deliver the Azure platform from their own datacenters, and the idea seems pretty good. But what is Azure Stack actually? It only delivers half of the promise of a cloud-like infrastructure, so I would place Azure Stack in the category of cloud management platform, since it gives us the framework and portal experience.

Now when we eventually have this set up and configured, we get some of the benefits of the cloud:

  • Automation
  • Self-Service
  • A common framework and platform to work with

Now if we look at the picture above, there are some important things we need to think about in terms of fitting within the cloud aspect, namely the compute fabric / network fabric / storage fabric, which is missing from the Microsoft story. Of course Microsoft is a software company, but they are moving forward with their CPS solution with Dell, moving a bit towards the hardware space, though they are nowhere close yet.

When I think about Azure I also think about the resources beneath: they are always available, non-siloed, and can scale up and down as I need them to. If we look at the way Microsoft has built their own datacenters, there is no SAN architecture at all, just a bunch of single machines with local storage, using software to connect all this storage and compute into a large pool of resources. That is the way it should be, since the SAN architecture just cannot fit into a full cloud solution, and it is also the way it should be for an on-premises solution. If we deploy Azure Stack to deliver the benefits of a cloud solution, the infrastructure should reflect that. As of right now Microsoft cannot deliver a good enough storage/compute solution with Storage Spaces in 2012 R2, since there are limits to the scale, and points of failure which a public cloud does not have.

Now Nutanix is one of the few providers that delivers support for Hyper-V and SMB 3.0, does not have any scale limits, and has the same properties as a public cloud solution. It aggregates all storage on local drives within each node into a pool of storage, with redundancy in all layers, including a REST API which can easily integrate with Azure Stack. I can easily see that as the best way to deliver an on-premises cloud solution, and a killer combination.

#azure, #azure-stack, #hci, #nutanix, #windows-server-2016

Setting up XenDesktop 7.7 against Microsoft Azure

Starting off the new year with a long-awaited feature on my part: setting up integration between XenDesktop and Microsoft Azure, which is now a supported integration in 7.7, released a week ago. This integration allows us to provision virtual machines directly from Studio. NOTE: XenDesktop as of now only supports V1 (Classic) virtual machines in Azure, so no Resource Groups yet, which might make it a bit confusing for some, but I'll try to cover it as well as I can.

A good thing with this is that we can either set up XenDesktop in a hybrid setting, where we have the Controller and Studio running on our local infrastructure, or run everything in Azure, which is another valid setup.

Now after setting up XenDesktop 7.7 you have a new option when setting up a hosting connection. You need to get the publish settings file from Azure before continuing this wizard; that file can be downloaded from Azure.


Important: when downloading a publish profile, the subscription must contain a virtual network (classic virtual networking) within the region we choose later in the wizard, or you will not be able to continue the wizard.

This can be viewed/created from the new portal under the “classic” virtual network objects


Now after verifying the connection profile you will get a list of the different regions available within the subscription.


After choosing a region the wizard will list all available virtual networks within the region, and will by default choose a subnet which has a valid IP range set up.
NOTE: The other subnet is used for site-to-site VPN and should not be chosen in the wizard.


This part just defines which virtual networks the provisioned machines are going to use. After we are done with the wizard we can get started with the provisioning part. In order to use MCS to create a pool of virtual machines in Azure we need to create a master image first. This can be done by creating a virtual machine in Azure, installing the VDA, doing any optimization, installing applications, running sysprep and shutting down the virtual machine. Then we need to run PowerShell to capture the image, since the portal does not support capturing images in the state called specialized.

NOTE: A simple way to upload the VDA agent to the master image virtual machine is to use, for instance, Veeam FastSCP for Azure, which uses WinRM to communicate and can download and upload files to the virtual machine.


DON'T install anything SQL-related on the C: drive (since it uses a read/write cache which might end up with a corrupt database), and don't install anything on the D: drive, since this is a temporary drive and will be purged during a restart.

A specialized VM Image is meant to be used as a “snapshot” to deploy a VM to a good known point in time, such as checkpointing a developer machine, before performing a task which may go wrong and render the virtual machine useless.  It is not meant to be a mechanism to clone multiple identical virtual machines in the same virtual network due to the Windows requirement of Sysprep for image replication.
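The capture itself can be done with the classic (Service Management) Azure PowerShell module; a minimal sketch, where the service, VM and image names are placeholders:

```powershell
# Capture the stopped master VM as a VM image in the Specialized state
Save-AzureVMImage -ServiceName "myCloudService" -Name "masterVM" -ImageName "XD77MasterImage" -OSState "Specialized"
```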


ImageName = the image name after the conversion

Name = the virtual machine name

ServiceName = the cloud service name

Also important: the VM image must NOT have any other data disks attached to it. After the command is done you can view the image in the Azure portal, and you can see that it has the property specialized.


With this you also now have a master image, which you just need to allocate and start whenever an update to the master image is needed.


So now that the image is in place, we can start to create a machine catalog. When creating a catalog, Studio will try to get all specialized images from the region that we selected.


Then we can define what kind of virtual machines we want to create.


NOTE: Citrix supports a max of 40 virtual machines as of now.

Basic: has a limit of 300 IOPS per disk.

Standard: has a limit of 500 IOPS per disk, and a newer CPU.

We can also define multiple NICs for the virtual machines, if we have any, and select which virtual network each should be attached to. Note that the wizard also creates computer accounts in Active Directory, like a regular MCS setup, so we need either a site-to-site VPN so the virtual machines can contact AD, or a full Azure setup. After that we can finish the wizard and Studio will start to provision the virtual machines.

NOTE: This takes time!


Eventually, when provisioning is finished, you will be able to access the virtual machines on an IP from within the Azure region. Stay tuned for a blog post on setting up Azure and NetScaler integration with 7.7.

#azure, #citrix, #microsoft-azure, #xendesktop, #xendesktop-7-7

New Azure backup “agent”

Today I was notified of a new Azure Backup agent, released in Azure and on the Download Center. Until recently, Azure Backup did not have support for backing up on-premises SharePoint, SQL, Exchange and Hyper-V; it was limited to files and folders. Now, if we go into the Azure portal, we can see that they have updated the feature set in the backup vault.


This points to a download called Azure Backup, which was released yesterday. This new feature allows for disk-to-cloud backup of on-premises Exchange, SQL, SharePoint and Hyper-V. Yay!


During the setup we can see that this is a typical rebranded DPM setup, which supports most workloads, but it does not include tape support and is most likely aimed at replacing DPM w/Tape with DPM w/Cloud tier instead.


As we can see, the Azure Backup wizard is basically DPM; it also includes SQL Server 2014.


The wizard will also set up an integration with a backup vault using a vault credential, which can be downloaded from the Azure website.


And voila! The end product. So instead of reinventing the wheel, Microsoft basically rebranded DPM as an Azure product. Does this kill System Center DPM? Time will tell when an official blog post comes up.


#azure, #data-protection-manager

Upcoming events and book releases

So it is going to be a busy couple of months ahead. This sums up what is happening on my part over the next months.

28 – 30 October: At the annual Citrix User Group event in Norway, which is a crazy good conference, I will be speaking about using Office 365 with Citrix, the different integrations, and the things you need to think about there as well.

October-ish: Something I have been working on for a while. After I published my Implementing Netscaler VPX book early last year, I was contacted by my publisher earlier this year, who wanted a second edition to add the things people thought were missing, plus I wanted to update the content to V11.

Implementing Netscaler VPX second edition contains

  • V11 content
  • Implementing on Azure, Amazon
  • Front-end optimization
  • AAA module
  • More stuff on troubleshooting and Insight
  • More stuff on TCP optimization, HTTP/2 and SSL

+ I can't remember the rest; anyway, the Amazon link is here.

November-ish: Surprise! This is also something I have been working on for a while, but I cannot take all of the credit. I can't even take half of the credit, since I only did about 40% of the work. Earlier this year I was approached by Packt to create another Netscaler book called Mastering Netscaler, which was supposed to be more of a deep-dive Netscaler book. After months of back and forth with another co-author the book didn't progress as I wanted it to. Luckily I got in touch with another community member who was interested, and away we went. Mastering Netscaler is more of a deep-dive book which will be released in October/November. I have nothing to link to yet, but as soon as it is done I will be publishing it here. As I said, I only did about 40% of the writing; most of the credit goes to Rick Roetenberg. Great job!

#azure, #netscaler

MVP award 2015, Azure!

Well, it is that time of the year again; MVP renewal on my part is the 1st of October. For the last two years I have been an MVP for ECM (Enterprise Client Management), but since much of my focus has been on Azure for the last 1.5 years, I felt that it was time for a change. And today I got the mail I have been waiting for.

Microsoft MVP Banner
Dear Marius Sandbu,
Congratulations! We are pleased to present you with the 2015 Microsoft® MVP Award! This award is given to exceptional technical community leaders who actively share their high quality, real world expertise with others. We appreciate your outstanding contributions in Microsoft Azure technical communities during the past year.

So I am truly honored to become a part of the Azure MVP team, looking forward to the future!

#azure, #mvp-award

What is Microsoft Azure IaaS missing at this point?

Well, this might be a bit of a misleading blog post; it is not aimed at criticizing Azure, but is merely a post that looks at what I feel Microsoft Azure IaaS is missing at this point. Even though Microsoft is doing a lot of development work on Azure, much of it is focused on Azure AD (no wonder, since they have something like 18 billion auths each week), but there is still work to be done on the IaaS part.

Lately we have seen the introduction of:

  • Azure Resource Manager
  • Azure DNS
  • Containers on Azure
  • Premium Storage and such

So what is missing?

  • Live migration of virtual machines when doing maintenance on hosts: The concept of setting up an availability set (meaning 2x of each virtual machine role) is not very sexy when trying to persuade SMB customers to move their servers to Azure, and in some cases, like RDS session hosts, which are stateful, it might be a bit of a pain if one host suddenly reboots.
  • 99.9% SLA on single virtual machine instances (again, see point 1). While this used to be an option, it was quietly removed during 2013. Some of the competition has an SLA for running single virtual machine instances/roles; Microsoft does not. Or at least offer a customizable maintenance window.
  • Better integration of on-premises management. While VMM now has an option to integrate with Azure, it is missing some features to make it better, such as deployment from Azure.
  • Scratch the old portal and be done with the new one! Today some features are only available in the old portal, such as Azure AD, while other features are only available in the new portal. This is just confusing. I suggest they finish porting the old features into the new portal and then start creating new features and capabilities there.
  • Better use of compute, for instance being able to customize virtual machine sizes. I know that having pre-defined sizes gives better resource planning, but in some cases customers might need just 2 vCPUs and 8 GB of RAM, and paying that small extra for 4 vCPUs (when they are not needed) should not be necessary.
  • Fewer limitations on network capabilities. While it has improved, there are still some limitations which in fact limit network appliances on Azure (such as Netscaler, which can only operate with 1 vNIC in Azure; yes, I know that having multiple vNICs is supported, but it is random, which does not work very well with network appliances). The same goes for the ability to set static MAC addresses, since a lot of network appliances use MAC-based licensing.
  • Central management of backup. While the backup vault contains a lot of information and some of the capabilities are still in preview, I would love to have a single view which shows all backup jobs. Also, give Azure Backup the capability to cover Exchange, SQL and Hyper-V, and include support for the DS-series!
  • IaaS V2 VMs are quite the improvement, moving away from the use of cloud services, but there are a lot of limitations here towards the other Azure services, such as no support for the Azure Backup service, and no plans for a migration option from V1 to V2 VMs.
  • Azure DNS: give it a web interface! While PowerShell is fun and makes things a lot easier, sometimes I like to look at DNS zones in a GUI.
  • Support for BGP on VPN gateways (which would allow for failover between different VPN tunnels); the same goes for supporting multi-site static VPN connections.
  • IPv6 support!
  • Support for Gen2 and the VHDX format. Microsoft is pushing Generation 2 virtual machines and the new VHDX format, so Azure should support this as well. This would make things a lot easier in a hybrid scenario and make it much easier to move resources back and forth.
  • Azure RemoteApp: while it is a simple and good product, there are some things I miss, such as full desktop access (most of our customers want full desktop access). Also, remove the 20-user minimum; this is a huge deal breaker for SMB customers in this region.
  • Console access to virtual machines. In cases where RDP is not available for some reason, we should have an option to get into the console of the virtual machine.

Now, what is the solution to getting all this added to Azure? Us, of course!

The best way to get Microsoft's attention to add new features and capabilities to Azure is by posting feedback on the Azure feedback forum or by voting up existing posts.

Much of the newly added capabilities, originates from this forum.

#azure, #microsoft

Virtual Machine backup in Azure using Veeam Endpoint

A while back I blogged about Veeam Endpoint. While it is aimed at physical computers/servers, it has another purpose that I just discovered.

In Azure, Microsoft currently has a preview feature called Azure VM Backup, which in essence is an image-based backup of virtual machines in Azure. Since this currently has a lot of limitations, I figured: what other options do we have?

While some people do Windows Server Backup directly to another Azure VM disk, I figured why not give Veeam a try with a data disk and use it in conjunction with SMB file shares. The idea is that we use Veeam Endpoint to back up to a data disk (which is attached to an individual VM) and then create a task to copy the backup to an SMB file share. In case the virtual machine crashes or is unavailable, we still have the backup on the SMB file share, which makes it accessible to all other virtual machines within that storage account. NOTE: Running the Veeam backup directly against an SMB file share does not work.
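The copy task itself can be a simple scheduled task inside the VM; a rough sketch, where the drive letters, paths and schedule are assumptions:

```powershell
# Copy the Veeam backup files from the data disk (E:) to the mapped Azure file share (Z:)
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument '-Command "Copy-Item -Path E:\VeeamBackup\* -Destination Z:\sampledir -Recurse -Force"'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName "CopyVeeamBackupToShare" -Action $action -Trigger $trigger
```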

So we create a virtual machine in Azure and then use the portal to attach an empty data disk to the virtual machine.


This new disk is going to be the repository for Veeam Endpoint within the VM.

Azure File storage is an SMB share feature which is currently in preview and is available for each storage account. In order to use it we must first create an SMB file share using PowerShell:

$FSContext = New-AzureStorageContext -StorageAccountName storageaccount -StorageAccountKey storageaccountkey

$FS = New-AzureStorageShare sampleshare -Context $FSContext

New-AzureStorageDirectory -Share $FS -Path sampledir

After we have created the file share we need to add the network path to the virtual machine inside Azure. First we should use cmdkey to store the username and password for the SMB file share, so that it can reconnect after a reboot:

cmdkey /add:<storage account>.file.core.windows.net /user:<storage account name> /pass:<storage access key>

And then map the drive: net use z: \\<storage account>.file.core.windows.net\sampleshare


After the network drive is mapped up, we can install Veeam Endpoint.


Now, Veeam Endpoint is a free backup solution, and it can integrate with existing Veeam infrastructure, such as repositories, for a more centralized backup solution. It also has some limitations regarding application-aware processing, but it works well with traditional VMs in Azure.

After setup is complete we can set up our backup schedule.




Then I run the backup job. Make sure that the backup job runs correctly. Note that, as a best practice, you should not store any applications or such on the C:\ drive; I also got VSS error messages while backing up data on C:\, so you should have another data disk where you store applications and files if necessary.

Now, after the backup is complete, we have our backup files on a data disk attached to a virtual machine. We have two options in case we need to restore data to another virtual machine.

1: We can run the restore wizard on another virtual machine against the copied backup files on the SMB file share.


2: Detach and reattach the virtual disk to another virtual machine.
This is cumbersome if we have multiple virtual hard drives.


Attaching a virtual disk is done on the fly; when we run the restore wizard from Veeam, it will automatically detect the backup volume and give us the list of restore points available on the drive.


Note that the file recovery wizard does not give us an option to restore directly back to the same volume, so we can only copy data out of a backup file.


Well, there you have it: using Veeam Endpoint for virtual machines in Azure against a data drive. After giving it a couple of test runs I can tell it's working as intended and gives a lot better functionality than the built-in Windows Server Backup. If you want, you can also set it up with Veeam FastSCP for Azure, allowing it to download files from Azure VMs to an on-premises setup.

#azure, #veeam