Azure Stack networking overview and use of BGP and VXLAN

After dabbling with Azure Stack for some time since the preview, one thing has been bugging me: the networking flow. Hence I decided to create an overview of the network topology, how things are connected, and how traffic flows.

Important to remember that Azure Stack uses many of the networking features in Windows Server 2016, including SLB, BGP, VXLAN and so on.
Most of the management machines in the Azure Stack POC are placed on the vEthernet 1001 connection on the Hyper-V host and are connected to the vSwitch CCI_External.
The management machines are located on the 192.168.100.0/24 scope.
Now with this updated chart, we can see that each tenant has its own /32 BGP route, which is attached to the MuxVM, which acts as a load balancer.

image

When traffic goes from the client IP, it is encapsulated using VXLAN (which runs over UDP) and sent to the MuxVM via its provider address. In my case that is 192.168.233.21, which is part of the PAHostvNIC. From the MuxVM the traffic is forwarded to the BGPVM, then out through the NATVM and out to the world.

image

On the other hand, we have the NATVM and the CLIENTVM, which are placed on the 192.168.200.0/24 scope. The 192.168.200.0/24 network communicates via the BGPVM, which has a two-armed configuration and acts as the gateway between the 192.168.100.0/24 network and the 192.168.200.0/24 network. Now the funny thing is that the NATVM just acts as a gateway for the external network in; it has RRAS installed, and since it is directly connected to both networks it allows access from outside. The BGPVM also has RRAS installed, but we cannot see that using the RRAS console; we need to see it in PowerShell. As stated, the BGPVM has a BGP peering set up to the MuxVM. The MuxVM acts as a load balancer for the BGPVM, using BGP to advertise each VIP to the router as a /32 route.
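Since the BGP configuration on the BGPVM is only visible through PowerShell, a rough way to inspect it is with the RemoteAccess BGP cmdlets. This is a sketch for poking around, not the exact Azure Stack configuration; the VIP in the last line is just an illustration:

```powershell
# Show the local BGP router configuration (BGP identifier, local ASN)
Get-BgpRouter

# List the configured BGP peers - in the Azure Stack POC this should show the MuxVM peering
Get-BgpPeer

# Show learned and advertised routes - tenant VIPs show up as /32 routes
Get-BgpRouteInformation

# Advertising a VIP as a /32 custom route would look something like this
Add-BgpCustomRoute -Network 192.168.133.74/32
```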

So for instance, if we open a connection on the ClientVM to Portal.Azurestack.local (which has an IP of 192.168.133.74), the traffic flow will go like this:

ClientVM –> NATVM –> BGPVM –> (BGP ROUTE PEER) –> MuxVM –> PortalVM

Now remember that the configuration of BGP, the load balancer, and the host is done by the Network Controller.

SLB infrastructure
For a virtual switch to be compatible with SLB, you must create the switch using Hyper-V Virtual Switch Manager or Windows PowerShell, and the Azure Virtual Filtering Platform (VFP) extension must be enabled on that virtual switch.
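A minimal sketch of what that could look like in PowerShell (the switch and adapter names are examples; the VFP extension name is the one used by the SDN stack in the 2016 previews):

```powershell
# Create an external vSwitch bound to a physical NIC (names are examples)
New-VMSwitch -Name "CCI_External" -NetAdapterName "Ethernet 2" -AllowManagementOS $true

# Enable the Azure Virtual Filtering Platform (VFP) extension on the switch
Enable-VMSwitchExtension -VMSwitchName "CCI_External" -Name "Microsoft Azure VFP Switch Extension"

# Verify that the extension is now enabled
Get-VMSwitchExtension -VMSwitchName "CCI_External" | Where-Object { $_.Name -like "*VFP*" }
```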

So for those who are looking into Windows Server 2016: look into the networking stack of 2016, it's bloody HUGE!

#azure, #azure-stack

What is Microsoft doing with RDS and GPU in 2016? And what are VMware and Citrix doing?

So this post was initially labeled Server 2016, but then I forgot an important part of it, which I'll come back to later.

This year, Microsoft is most likely releasing Windows Server 2016 and with it a huge number of new features like Containers, Nano, SDN and so on.

But what about RDS? Well, Microsoft is actually doing a bunch there:

  • RemoteFX vGPU support for GEN2 virtual machines
  • RemoteFX vGPU support for RDS server
  • RemoteFX vGPU with OpenGL support
  • Personal Session Desktops (allows for an RDSH host per user)
  • AVC 444 mode (http://bit.ly/1SCRnIL)
  • Enhancements to the RDP 10 protocol (less bandwidth consumption)
  • Clientless experience (HTML5 support is now in tech preview for Azure RemoteApp, and will most likely be ported to on-premises solutions as well)
  • Discrete Device Assignment (which in essence is GPU pass-through) http://bit.ly/1SULnLD
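To give a feel for DDA, here is a rough sketch of handing a GPU to a VM with the new cmdlets in the 2016 previews (the device filter and VM name are placeholders; the exact location path is hardware-specific):

```powershell
# Find the PCI location path of a GPU (picking the first display adapter as an example)
$gpu = Get-PnpDevice -Class Display | Select-Object -First 1
$locationPath = ($gpu | Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths).Data[0]

# Dismount the device from the host so it can be assigned to a VM
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force

# Assign the device to the virtual machine (VM name is an example)
Add-VMAssignableDevice -LocationPath $locationPath -VMName "GPUVM"
```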

So there is a lot of stuff happening in terms of GPU enhancements, performance increases in the protocol, and of course hardware offloading for the encoder.

Another important piece is the support coming to Azure with the N-series, which uses DDA (GPU pass-through) to let us set up a virtual machine with dedicated GPU graphics at a per-hour price when we need it! In some cases it can also be configured with an RDMA backbone where we need high compute capacity, for instance for deep learning. The N-series will be powered by NVIDIA K80 and M60 cards.

So is RDS still the way to go for a full-scale deployment? It can be. RDS has gone from a dark place to become a good enough solution (even though it has its limitations), and the protocol itself has gotten a lot better (even though I miss a lot of tuning capabilities for the protocol itself).

Now VMware and Citrix are also doing their thing, with a lot of heavy hitting on both sides, and this again gives us a lot of new features, since both companies are investing heavily in their EUC stacks.

The interesting part is that Citrix is not putting all its eggs in one basket, now adding support for Azure as well (on top of the existing support for ESXi, Amazon, Hyper-V and so on). This means that when Microsoft releases the N-series, Citrix can easily integrate with it to deliver GPU using its own stack, which has a lot of advantages over RDS. Horizon with GPU, by contrast, is limited to running on ESXi.

VMware, on the other hand, is focusing on a deep partnership with NVIDIA and moving ahead with Horizon Air Hybrid (which will be kind of a Citrix Workspace Cloud setup), and VMware is also doing a LOT on its stack:

  • AppVolumes
  • JIT desktops
  • User Environment Manager

Now 2016 is going to be an interesting year to see how these companies are going to evolve and how they are going to drive the partners moving forward.

#azure, #citrix, #hyper-v, #microsoft, #nvidia, #vmware

Getting started with Web based server management tools in Azure

Yesterday, Microsoft released a public preview of some tools that Jeffrey Snover showed off at Microsoft Ignite last year, which is in essence Server Manager from within the Azure portal.

In its first release, this tool is aimed at managing Windows Server 2016 servers; it can manage both Azure virtual machines and machines on-prem. Some of its capabilities:

  • View and change system configuration
  • View performance across various resources and manage processes and services
  • Manage devices attached to the server
  • View event logs
  • View the list of installed roles and features
  • Use a PowerShell console to manage and automate


Source: http://blogs.technet.com/b/nanoserver/archive/2016/02/09/server-management-tools-is-now-live.aspx

So what we do is deploy a Server Management Tools gateway, through which we manage our virtual machines (remember that the gateway needs to have an internet connection).

NOTE: If you want to deploy the gateway feature on a 2012 server you need to have WMF 5 installed, which you can fetch here –> WMF 5.0: https://www.microsoft.com/en-us/download/details.aspx?id=48729

So when we want to deploy –> Go into Azure –> New –> Server Management Tools –> Marketplace image

Then we need to define the machine we want to connect to (internal address: IPv4, IPv6 or FQDN).
For the first run we need to create a gateway as well. If we want to add multiple servers to manage, we run this wizard again but choose an existing gateway instead.
image

After we have created the instance, we need to download the gateway binaries and install them in our environment.

image

Then run the download from within the environment. It is also important that if we want to manage non-domain-joined machines, we need to run a few commands to add trusted hosts and such. For example:

winrm set winrm/config/client @{TrustedHosts="10.0.0.5"}

REG ADD HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1

NETSH advfirewall firewall add rule name="WinRM 5985" protocol=TCP dir=in localport=5985 action=allow (if you want to specify firewall rules)

After the firewall rules are in place, we need to specify credentials.

image

After that is done we can now manage the machine from within Azure.

image

#azure, #server-manager, #web-mased-server-management

Windows Azure Stack–What about the infrastructure Story?

There is no denying that Microsoft Azure is a success story, going from the lame Silverlight portal with limited capabilities that it once was to a global force to be reckoned with in the cloud market.

Later today Microsoft is releasing the first technical preview of Azure Stack, which allows us to bring the power of the Azure platform to our own datacenters. It brings the same consistent UI and feature set of Azure Resource Manager, which lets us use the same tools and resources we have used in Azure against our own local cloud.

This will of course allow large customers and hosting providers to deliver the Azure platform from their own datacenters. The idea seems pretty good. But what actually is Azure Stack? It only delivers half of the promise of a cloud-like infrastructure, so I would place Azure Stack in the category of cloud management platform, since it gives us the framework and portal experience.

Now when we eventually have this set up and configured, we are given some of the benefits of the cloud, which are:

  • Automation
  • Self-Service
  • A common framework and platform to work with

Now if we look at the picture above, there are some important things we need to think about in terms of fitting within the cloud aspect: the compute fabric, network fabric and storage fabric, which are missing from the Microsoft story. Of course Microsoft is a software company, but they are moving forward with their CPS solution with Dell and moving a bit towards the hardware space, though they are nowhere close yet.

When I think about Azure, I also think about the resources underneath: they are always available, not silo-based, and can scale up and down as I need. If we think about the way Microsoft has built its own datacenters, there is no SAN architecture at all, just a bunch of single machines with local storage, using software to connect all this storage and compute into a large pool of resources. That is the way it should be, since the SAN architecture just cannot fit into a full cloud solution, and it is also the way it should be for an on-premises solution. If we were to deploy Azure Stack to deliver the benefits of a cloud solution, the infrastructure should reflect that. As of right now, Microsoft cannot offer a good enough storage/compute solution with Storage Spaces in 2012 R2, since there are limits to the scale, and points of failure which a public cloud does not have.

Now Nutanix is one of the few providers which delivers support for Hyper-V and SMB 3.0, does not have any scale limits, and has the same properties as a public cloud solution. It aggregates all storage on local drives within each node into a pool of storage, with redundancy in all layers, including a REST API which can easily integrate with Azure Stack. I can easily see that as the best way to deliver an on-premises cloud solution, and a killer combination.

#azure, #azure-stack, #hci, #nutanix, #windows-server-2016

Setting up XenDesktop 7.7 against Microsoft Azure

Starting off the new year with a long-awaited feature on my part: setting up integration between XenDesktop and Microsoft Azure, which is now supported in 7.7, released a week ago. This integration allows us to provision virtual machines directly from Studio. NOTE: XenDesktop as of now only supports V1 (Classic) virtual machines in Azure, so no Resource Groups yet, which might make it a bit confusing for some, but I'll try to cover it as well as I can.

A good thing, though, is that we can either set up XenDesktop in a hybrid setting, where the Controller and Studio run on our local infrastructure, or run everything in Azure.

Now after setting up XenDesktop 7.7, you have a new option when setting up a connection. You need to get the publish settings information from Azure before continuing this wizard; it can be downloaded from https://manage.windowsazure.com/publishsettings
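If you prefer PowerShell over the browser, the classic Azure module can fetch and import the same publish settings file (the file path below is an example):

```powershell
# Opens a browser so you can download the .publishsettings file for your subscriptions
Get-AzurePublishSettingsFile

# Import the downloaded file so the subscription is available locally
Import-AzurePublishSettingsFile "C:\Temp\MySubscription.publishsettings"

# Verify that the subscription is registered
Get-AzureSubscription
```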

image

It is important that when downloading a publish profile, the subscription contains a virtual network (classic virtual networking) within the region we choose later in the wizard, or else you will not be able to continue the wizard.

This can be viewed/created from the new portal under the “classic” virtual network objects

image

Now after verifying the connection profile, you will get a choice of the different regions available within the subscription.

image

After choosing a region, the wizard will list all available virtual networks within the region, and will by default choose a subnet which has a valid IP range set up.
NOTE: The other subnet is used for site-to-site VPN and should not be chosen in the wizard.

image

This part just defines which virtual networks the provisioned machines are going to use. After we are done with the wizard, we can get started with the provisioning part. In order to use MCS to create a pool of virtual machines in Azure, we need to create a master image first. This can be done by creating a virtual machine within Azure, installing the VDA, doing any optimizations, installing applications, running sysprep, and shutting down the virtual machine. Then we need to run PowerShell to capture the image, because the portal does not support capturing images in the state called specialized.

NOTE: A simple way to upload the VDA agent to the master image virtual machine is by using, for instance, Veeam FastSCP for Azure, which uses WinRM to communicate and is able to download and upload files to the virtual machine.

image

DON'T INSTALL ANYTHING SQL-related on the C: drive (since it uses a read/write cache, which might end up with a corrupt database), and don't install anything on the D: drive, since this is a temporary drive and will be purged during a restart.

A specialized VM Image is meant to be used as a “snapshot” to deploy a VM to a good known point in time, such as checkpointing a developer machine, before performing a task which may go wrong and render the virtual machine useless.  It is not meant to be a mechanism to clone multiple identical virtual machines in the same virtual network due to the Windows requirement of Sysprep for image replication.

image

ImageName = the image name after the conversion

Name = virtual machine name

ServiceName = Cloud service name

It is also important that the VM image does NOT have any other data disks attached. After the command is done, you can view the image within the Azure portal, and you can see that it has the property specialized.
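The capture step described above boils down to a single classic-mode cmdlet; here is a sketch with example names for the parameters listed above:

```powershell
# Capture the stopped VM as a specialized VM image
Save-AzureVMImage -ServiceName "MyCloudService" `
                  -Name "XDMasterVM" `
                  -ImageName "XD77MasterImage" `
                  -OSState Specialized
```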

image

With this you also now have a master image, which you just need to allocate and start whenever an update to the master image is needed.

image

So now that the image is in place, we can start creating a machine catalog. When creating a catalog, Studio will try to fetch all specialized images from the region we selected.

image

Then we can define what kind of virtual machines we want to create.

image

NOTE: Citrix supports a max of 40 virtual machines as of now.

Basic: has a limit of 300 IOPS per disk

Standard: has a limit of 500 IOPS per disk, and a newer CPU

We can also attach multiple NICs to the virtual machines, if we have any, and select which virtual network they should be attached to. Note that the wizard also creates computer accounts in Active Directory, like a regular MCS setup, so we need either a site-to-site VPN so that the virtual machines can contact AD, or a full Azure setup (site-to-site setup here –> https://azure.microsoft.com/en-us/documentation/articles/vpn-gateway-site-to-site-create/). After that we can finish the wizard and Studio will start to provision the virtual machines.

NOTE: This takes time!

image

Eventually, when provisioning is finished, you will be able to access the virtual machines on an IP from within the Azure region. Stay tuned for a blog post on setting up Azure and NetScaler integration with 7.7.

#azure, #citrix, #microsoft-azure, #xendesktop, #xendesktop-7-7

New Azure backup “agent”

Today I was notified of a new Azure Backup agent, released in Azure and on the Download Center. Until recently, Microsoft did not have support for backing up on-premises SharePoint, SQL, Exchange and Hyper-V; Azure Backup was limited to files and folders. Now if we go into the Azure portal, we can see that they have updated the feature set in the backup vault.

image

Now this points to a download called Azure Backup, released yesterday. This new feature allows on-premises disk-to-cloud backup of Exchange, SQL, SharePoint and Hyper-V, yay!

image

During the setup we can see that this is a typical rebranded DPM setup, which supports most workloads, but it does not include tape support and is most likely aimed at replacing DPM with tape and instead moving to DPM with a cloud tier.

image

As we can see, the Azure Backup wizard is basically DPM; it also includes SQL Server 2014.

image

The wizard will also set up integration with a backup vault using vault credentials, which can be downloaded from the Azure website.

image

And voilà! The end product. So instead of reinventing the wheel, Microsoft basically rebranded DPM as an Azure product. Does this kill System Center DPM? Time will tell when an official blog post comes up.

image

#azure, #data-protection-manager

Upcoming events and book releases

So it is going to be a busy couple of months ahead. This sums up what is happening on my part over the next months.

28–30 October: At the annual Citrix User Group event in Norway, which is a crazy good conference, I will be speaking about using Office 365 with Citrix, different integrations, and things you need to think about there as well: http://cugtech.no/?page_id=1031

October-ish: Something I have been working on for a while. After I published my Implementing NetScaler VPX book early last year, I was contacted by my publisher earlier this year, who wanted a second edition to add the stuff that people thought was missing, plus I wanted to update the content to v11.

Implementing NetScaler VPX, Second Edition contains:

  • V11 content
  • Implementing on Azure, Amazon
  • Front-end optimization
  • AAA module
  • More stuff on troubleshooting and Insight
  • More stuff on TCP optimization, HTTP/2 and SSL

+ I can't remember the rest. Anyway, the Amazon link is here: http://www.amazon.co.uk/Implementing-NetScaler-VPX-TM-Second/dp/1785288989/ref=sr_1_3?ie=UTF8&qid=1442860517&sr=8-3&keywords=netscaler

November-ish: Surprise! This is also something I have been working on for a while, but I cannot take all of the credit. I can't even take half of the credit, since I only did about 40% of the work. Earlier this year I was approached by Packt to create another NetScaler book called Mastering NetScaler, which was supposed to be more of a deep-dive NetScaler book. After months of back and forth with another co-author, the book didn't progress as I wanted… Luckily I got in touch with another community member who was interested, and away we went. Mastering NetScaler is more of a deep-dive book which will be released in October/November. I have nothing to link to yet, but as soon as it is done I will publish it here. As I said, I only did about 40% of the writing; most of the credit goes to Rick Roetenberg https://twitter.com/rroetenberg, great job!

#azure, #netscaler