What is Microsoft doing with RDS and GPU in 2016? And what are VMware and Citrix doing?

So this post was initially labeled Server 2016, but then I forgot an important part of it, which I'll come back to later.

This year, Microsoft is most likely releasing Windows Server 2016, and with it a huge number of new features like Containers, Nano Server, SDN and so on.

But what about RDS? Well, Microsoft is actually doing a bunch there:

  • RemoteFX vGPU support for Gen 2 virtual machines
  • RemoteFX vGPU support for RDS servers
  • RemoteFX vGPU with OpenGL support
  • Personal Session Desktops (allows for an RDSH host per user)
  • AVC 444 mode (http://bit.ly/1SCRnIL)
  • Enhancements to the RDP 10 protocol (lower bandwidth consumption)
  • Clientless experience (HTML 5 support is now in tech preview for Azure RemoteApp, and will most likely be ported to on-premises solutions as well)
  • Discrete Device Assignment, which in essence is GPU passthrough (http://bit.ly/1SULnLD)

So there is a lot happening in terms of GPU enhancements, performance increases in the protocol, and of course hardware offloading of the encoder.

Another important piece is what is coming to Azure with the N-series: DDA (GPU passthrough) in Azure, which will allow us to set up a virtual machine with a dedicated GPU at a per-hour price when we need it! In some cases it can also be configured with an RDMA backbone for workloads that need high compute capacity, such as deep learning. The N-series will be powered by NVIDIA K80 & M60 cards.

So is RDS still the way to go for a full-scale deployment? It can be. RDS has gone from a dark place to become a good enough solution (even though it has its limitations), and the protocol itself has gotten a lot better (even though I miss a lot of tuning capabilities for the protocol itself).

Now, VMware and Citrix are also doing their thing, with a lot of heavy hitting on both sides, and this again gives us a lot of new features, since both companies are investing heavily in their EUC stacks.

The interesting part is that Citrix is not putting all their eggs in the same basket, now adding support for Azure as well (on top of existing support for ESXi, Amazon, Hyper-V and so on), meaning that when Microsoft releases the N-series, Citrix can easily integrate with it to deliver GPU using their own stack, which has a lot of advantages over RDS. Horizon with GPU usage, by contrast, is limited to running on ESXi.

VMware on the other hand is focusing on a deep partnership with NVIDIA and moving ahead with Horizon Air Hybrid (which will be kind of a Citrix Workspace Cloud setup), and VMware is also doing a lot on their stack:

  • App Volumes
  • JIT desktops
  • User Environment Manager

Now 2016 is going to be an interesting year to see how these companies are going to evolve and how they are going to drive the partners moving forward.

#azure, #citrix, #hyper-v, #microsoft, #nvidia, #vmware

Storage Wars–HCI edition


There is a lot of buzz these days around hyperconverged infrastructure, software-defined storage and so on, especially since VMware announced VSAN 6.2 earlier this week, which triggered a lot of good old brawling on social media, since VMware clearly stated that they are the market leader in the HCI market. Whether that is true or not, I don't know. So I decided to write this post just to clear up the confusion about what HCI actually is, what the different vendors are delivering in terms of features, and how their architectures differ. Hopefully someone out there is as confused as I was in the beginning.

I've been working with this for quite some time now, so in this post I have decided to focus on three different vendors in terms of features and what their architectures look like.

  • VMware
  • Nutanix
  • Microsoft

PS: Things change and features get updated; if something is wrong or missing, let me know!

The term hyper-converged actually comes from the term converged infrastructure, where vendors started to provide a pre-configured bundle of software and hardware in a single chassis. This was to try to minimize the compatibility issues we had with the traditional way we did infrastructure, and of course to make it easy to set up a new fabric. With hyperconverged we integrate these components even further, so that they cannot be broken down into separate components. By using software-defined storage we can deliver high-performance, highly available storage capacity to our infrastructure without the need for particular/special hardware. So instead of the traditional three-tier architecture, which was the common case in converged systems, we have servers where we combine compute and storage, and then software on top which aggregates the storage between multiple nodes to create a cluster.
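To make the idea of aggregating local disks into one cluster-wide pool concrete, here is a toy Python sketch. The node names, disk sizes and replication count are made up purely for illustration; real software-defined storage layers are of course far more involved.

```python
# Toy model of software-defined storage pooling: each node contributes its
# local disks, and the software layer aggregates them into one cluster pool.
# Usable capacity shrinks because the pool keeps multiple copies of each block.

def pooled_capacity_tb(nodes, replication_copies):
    """Return (raw, usable) capacity in TB for a cluster that keeps
    `replication_copies` copies of every block."""
    raw = sum(sum(node["disks_tb"]) for node in nodes)
    usable = raw / replication_copies
    return raw, usable

cluster = [
    {"name": "node1", "disks_tb": [4, 4, 4]},   # compute + storage in one box
    {"name": "node2", "disks_tb": [4, 4, 4]},
    {"name": "node3", "disks_tb": [4, 4, 4]},
]

raw, usable = pooled_capacity_tb(cluster, replication_copies=2)
print(raw, usable)  # 36 TB raw, 18.0 TB usable when keeping 2 copies
```

This is also why "you cannot get hyperconverged without some sort of software-defined storage": it is that software layer that turns a pile of local disks into one resilient pool.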

So in conclusion of this part, you cannot get hyperconverged without using some sort of software-defined storage solution.

Now back to the vendors. We have of course Microsoft and VMware, which are still doing a tug of war with their releases, but their software-defined storage options have one thing in common: they are in the kernel. VMware was the first of the two to release a fully hyperconverged solution, and as of today they have released version 6.2, which added a lot of new features. Microsoft on the other hand is playing it safe, and with Windows Server 2016 they are releasing a new version of Storage Spaces which now has a hyperconverged deployment option. Believe it or not, Microsoft has had a lot of success with the Storage Spaces feature, since it has been a pretty cheap setup, combined with some much-needed improvements to the SMB protocol. So let us focus on VSAN 6.2 and Windows Server 2016 Storage Spaces Direct, which both have "in-kernel" ways of delivering HCI.

VMware VSAN 6.2

Deployment types: Hybrid (using SSDs and spinning disks) or All-flash
Protocol support: Uses its own proprietary protocol within the cluster
License required: Licensed as either Hybrid or All-Flash
Supported workloads: Virtual machine storage
Hypervisor support: ESXi
Minimum nodes in a cluster: 2 (with a third node as witness, see https://blogs.vmware.com/virtualblocks/2015/09/11/vmware-virtual-san-robo-edition/)
Hardware supported: VSAN Ready Nodes, EVO:RAIL and Build Your Own based on the HCL –> http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsan
Disk requirements: At least one SSD and one HDD
Deduplication support: Yes, starting from 6.2, near-line (all-flash arrays only)
Compression support: Yes, starting from 6.2, near-line (all-flash arrays only)
Resiliency factor: Fault Tolerance Method (FTM) RAID-1 mirroring; RAID 5/6 arrived in the 6.2 release
Disk scrubbing: Yes, as of the 6.2 release
Storage QoS: Yes, as of the 6.2 release (based upon a 32KB block size ratio); can be attached to virtual machines or datastores
Read cache: 0.4% of host memory is used for read cache on the hosts where the VMs are located
Data locality: Sort of; it does not do client-side local read cache
Network infrastructure needed: 1Gb or 10Gb Ethernet network (10Gb only for all-flash), multicast enabled
Maximum number of nodes: 64 nodes per cluster

An important thing to remember is that VMware VSAN stores data within objects. For instance, if we create a virtual machine on a Virtual SAN datastore, VSAN will create an object for each virtual disk, snapshot and so on. It also creates a container object that stores all the metadata files of the virtual machine. The availability factor can be configured per object. These objects are stored on one or multiple magnetic disks and hosts, and VSAN can access them remotely, for both reads and writes. VSAN does not have a pure data locality model like others do; a machine can be running on one host while its objects are stored on another, which gives consistent performance if we for instance migrate a virtual machine from one host to another. VSAN also has the ability to read from multiple mirror copies at the same time to distribute the IO equally.

VSAN also has the concept of stripe width, since in many cases we may need to stripe an object across multiple disks. The largest component size in VSAN is 255 GB, so if we have a VMDK which is 1 TB, VSAN needs to stripe that VMDK file out to 4 components. The maximum stripe width is 12. The SSD within VSAN acts as a read cache and a write buffer.
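The component math here can be expressed as a small calculation; this sketch treats 1 TB as 1000 GB, which is what the figure of 4 components implies:

```python
import math

MAX_COMPONENT_GB = 255  # largest single component size in VSAN

def components_needed(vmdk_gb):
    """How many components VSAN must split a VMDK object into."""
    return math.ceil(vmdk_gb / MAX_COMPONENT_GB)

print(components_needed(1000))  # a 1 TB VMDK (taken as 1000 GB) -> 4 components
print(components_needed(255))   # fits within a single component -> 1
```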

image

Windows Server 2016 Storage Spaces Direct*

*Still only in Tech Preview 4

Deployment types: Hybrid (using SSDs and spinning disks) or All-flash
Protocol support: SMB 3
License required: Windows Server 2016 Datacenter
Supported workloads: Virtual machine storage, SQL database, General purpose fileserver support
Hypervisor support: Hyper-V
Hardware supported: Storage Spaces HCL (Not published yet for Windows Server 2016)
Deduplication support: Yes, but still only for a limited set of workloads (VDI etc.)
Compression support: No
Minimum nodes in a cluster: 2 (and using some form of witness to maintain quorum)
Resiliency factor: Two-way mirror, three-way mirror and Dual Parity
Disk scrubbing: Yes, part of chkdsk
Storage QoS: Yes, can be attached to virtual machines or shares
Read cache: CSV read cache (which is part of the RAM on the host), also depending on deployment type. In Hybrid mode, the SSDs are READ & WRITE cache and are therefore not used for persistent storage.
Data Locality: No
Network infrastructure needed: RDMA enabled network adapters, including iWARP and RoCE
Maximum number of nodes: 12 nodes per cluster as of TP4
You can read more about what happens under the hood of Storage Spaces Direct here –> http://blogs.technet.com/b/clausjor/archive/2015/11/19/storage-spaces-direct-under-the-hood-with-the-software-storage-bus.aspx
Hardware info: http://blogs.technet.com/b/clausjor/archive/2015/11/23/hardware-options-for-evaluating-storage-spaces-direct-in-technical-preview-4.aspx

An important thing to remember here is that a CSV volume is created on top of an SMB file share. With Storage Spaces Direct, Microsoft leverages multiple features of the SMB 3 protocol, using SMB Direct and SMB Multichannel. Another thing to consider is that since there is no form of data locality here, Microsoft depends on RDMA-based technology to read and write data from other hosts in the network with low overhead, much less than TCP-based networks. Unlike VMware, Microsoft uses extents to spread data across nodes; these are by default 1 GB each.
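As a rough illustration of how a virtual disk might be carved into 1 GB extents and spread across cluster nodes, here is a hedged sketch; the round-robin placement is a simplification of my own, not the actual Software Storage Bus placement logic:

```python
# Simplified model: carve a virtual disk into 1 GB extents and spread them
# round-robin across the cluster nodes. The real Storage Spaces Direct
# placement (and its mirroring) is more involved; this only shows the idea
# of data living on many hosts, reached over RDMA rather than kept local.

EXTENT_GB = 1  # default extent size in Storage Spaces Direct

def place_extents(vdisk_gb, nodes):
    placement = {node: [] for node in nodes}
    for extent_id in range(vdisk_gb // EXTENT_GB):
        node = nodes[extent_id % len(nodes)]  # simple round-robin
        placement[node].append(extent_id)
    return placement

layout = place_extents(6, ["node1", "node2", "node3"])
print({n: len(e) for n, e in layout.items()})  # 2 extents on each node
```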

image

Now, in terms of the differences between these two, first off it is the way they manage reads and writes of their objects. VMware has a distributed read cache, while Microsoft requires RDMA but can read/write with very low overhead and latency from different hosts. Microsoft does not have per-virtual-machine policies that define how resilient a virtual machine is; instead this is placed on the share (which is a virtual disk), which defines the redundancy level. There are still things that are not yet documented about the Storage Spaces Direct solution.

So let us take a closer look at Nutanix.

Nutanix

Deployment types: Hybrid (using SSDs and spinning disks) or All-flash
Protocol support: SMB 3, NFS, iSCSI
Editions: http://www.nutanix.com/products/software-editions/
Supported workloads: Virtual machine storage, general purpose file services (Tech Preview)
Hypervisor support: ESXi, Hyper-V, Acropolis (custom CentOS KVM build)
Hardware supported: Nutanix uses Supermicro general purpose hardware for their own models, but they have OEM deals with Dell and Lenovo
Deduplication support: Yes, both inline and post-process (cluster-wide)
Compression support: Yes, both inline and post-process
Resiliency factor: RF2, RF3 and Erasure Coding
Storage QoS: No, equal share
Read cache: Unified Cache (consists of RAM and SSD from the CVM)
Data locality: Yes, reads and writes are aimed at the local host on which the compute resources are running
Network infrastructure needed: 1Gb or 10Gb Ethernet network (10Gb only for all-flash)
Maximum number of nodes: ?? (not sure if there is any maximum here)

The objects in Nutanix are broken down into vDisks, which are composed of multiple extents.

Source: http://nutanixbible.com/

Unlike Microsoft, Nutanix operates with an extent size of 1 MB, and the IO path in most cases stays local on the physical host.

image

When a virtual machine running on the virtualization platform does a write operation, it writes to a part of the SSD on the physical machine called the OpLog. Depending on the resiliency factor, the OpLog will then replicate the data to other nodes' OpLogs to achieve the replication factor defined for the cluster. Reads are served from the Unified Cache, which consists of RAM and SSD from the Controller VM on which the machine runs. If the data is not available in the cache, it can be fetched from the extent store, or from another node in the cluster.
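The write path described above can be sketched roughly like this; a hypothetical model of my own, not actual Nutanix code:

```python
# Sketch of the Nutanix-style write path: a write lands in the local node's
# OpLog (data locality) and is replicated to other nodes' OpLogs until the
# cluster's replication factor (RF) is met, before being acknowledged.

def write_with_rf(local_node, peers, data, rf=2):
    """Return the nodes holding a copy of `data` once the write is acked."""
    copies = {local_node: data}           # data locality: local OpLog first
    for peer in peers:
        if len(copies) == rf:
            break
        copies[peer] = data               # replicate to a peer's OpLog
    if len(copies) < rf:
        raise RuntimeError("not enough nodes to satisfy the replication factor")
    return list(copies)

print(write_with_rf("nodeA", ["nodeB", "nodeC"], b"block", rf=2))
# ['nodeA', 'nodeB'] -> the write is acknowledged to the VM
```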

Source: http://nutanixbible.com/

Now, all three vendors have different ways to achieve this. In the case of VMware and Microsoft, which both have their solutions in-kernel, Microsoft focused on using RDMA-based technology to provide a low-latency, high-bandwidth backbone (which might work to their advantage when doing a lot of reads from other hosts in the network, when the traffic becomes imbalanced).

VMware and Nutanix on the other hand only require a regular 1/10 Gb Ethernet network. Nutanix uses data locality, and with hardware becoming faster and faster that might work to their advantage, since the internal data buses in a host can generate more and more throughput. The issue that might occur, which VMware pointed out in their VSAN whitepaper explaining why they didn't design VSAN with data locality in mind, is when doing a lot of vMotion, which would require a lot of data to be moved between the different hosts to maintain data locality.

So what is the right call? I don't know, but boy, these are fun times to be in IT!

NB: Thanks to Christian Mohn for some clarity on VSAN! (vNinja.net)

#microsoft, #nutanix, #storage-spaces-direct, #vmware, #vsan

What is Microsoft Azure IaaS missing at this point?

Well, this might be a bit of a misleading blog post; it is not aimed at criticizing Azure, but is merely a post which looks at what I feel Microsoft Azure IaaS is missing at this point. Even though Microsoft is doing a lot of development work on Azure, much of it is focused on Azure AD (no wonder, since they have something like 18 billion auths each week), but there is still work to be done on the IaaS part.

Lately we have seen the introduction of Azure Resource Manager https://msandbu.wordpress.com/2015/05/22/getting-started-with-azure-resource-manager-and-visual-studio/

Azure DNS https://msandbu.wordpress.com/2015/05/08/taking-azure-dns-preview-for-a-spin/

Introduction to Containers on Azure http://azure.microsoft.com/en-us/blog/containers-docker-windows-and-trends/

Storage Premium and such https://msandbu.wordpress.com/2014/12/17/windows-azure-and-storage-performance/ https://msandbu.wordpress.com/2015/01/08/azure-g-series-released-and-tested/

So what is missing ?

  • Live Migration of virtual machines when doing maintenance on hosts: The concept of setting up an Availability Set (meaning setting up 2x of each virtual machine role) is not very sexy when trying to persuade SMB customers to move their servers to Azure, and in some cases, like RDS session hosts which are stateful, it can be a bit of a pain if one host suddenly reboots.
  • 99.9% SLA on single virtual machine instances (again, see point 1). While this used to be an option, it was quietly removed during 2013. Some of the competition has an SLA for running single virtual machine instances/roles; Microsoft does not. Or maybe offer a customizable maintenance window.
  • Better integration with on-premises management. While VMM now has an option to integrate with Azure, it is missing some features that would make it better, such as deployment to Azure https://technet.microsoft.com/en-us/library/mt125377.aspx
  • Scratch the old portal and be done with the new one! Today some features are only available in the old portal, such as Azure AD, while others are only available in the new portal. This is just confusing. I suggest finishing porting the old features into the new portal, and only then start creating new features/capabilities there.
  • Better use of compute. For instance, being able to customize virtual machine sizes. I know that having pre-defined sizes gives better resource planning, but in some cases customers might need just 2 vCPUs and 8 GB RAM, and paying that small extra for 4 vCPUs (when they are not needed) should not be necessary.
  • Fewer limitations on network capabilities. While this has improved, there are still some limitations which in fact limit network appliances on Azure (such as NetScaler, which can only operate with 1 vNIC in Azure; yes, I know that having multiple vNICs is supported, but it is random, which does not work very well with network appliances). The same goes for the ability to set static MAC addresses, since a lot of network appliances use MAC-based licensing.
  • Central management of backup. While the Backup Vault contains a lot of information and some of the capabilities are still in preview, I would love to have a single view which shows all backup jobs. Also, give Azure Backup the capability to cover Exchange, SQL and Hyper-V, and include support for the DS-series!
  • IaaS V2 VMs are quite the improvement, moving away from the use of Cloud Services, but there are a lot of limitations here towards the other Azure services, such as no support for the Azure Backup service, and no plans for a migration option from V1 to V2 VMs.
  • Give Azure DNS a web interface! While PowerShell is fun and makes things a lot easier, sometimes I like to look at DNS zones in a GUI.
  • Support for BGP on VPN gateways (which allows for failover between different VPN tunnels), and likewise support for multi-site static VPN connections.
  • IPv6 support!
  • Support for Gen 2 VMs and the VHDX format. Microsoft is pushing Generation 2 virtual machines and the newer VHDX format, so Azure should support them as well. This would make things a lot easier in a hybrid scenario and make it much easier to move resources back and forth.
  • Azure RemoteApp: while it is a simple but good product, there are some things I miss, such as full desktop access (most of our customers want full desktop access), and removing the 20-user minimum, which is a huge deal breaker for SMB customers in this region.
  • Console access to virtual machines (in cases where RDP is not available for some reason, we should have an option to get into the console of the virtual machine).
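As a side note on the SLA point in the list above, a quick bit of arithmetic shows what a given SLA percentage allows in monthly downtime (assuming a 30-day month for illustration):

```python
# Downtime budget implied by an availability SLA, per 30-day month.

def allowed_downtime_minutes(sla_percent, days=30):
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

print(round(allowed_downtime_minutes(99.9), 1))   # 43.2 minutes/month
print(round(allowed_downtime_minutes(99.95), 1))  # 21.6 minutes/month
```

So a 99.9% single-instance SLA would still permit roughly three quarters of an hour of downtime per month, which is why a customizable maintenance window matters for stateful workloads like RDS session hosts.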

Now, what is the solution to getting all this added to Azure? Us, of course!

The best way to get Microsoft's attention and have new features and capabilities added to Azure is by posting feedback on this site, or by voting up existing posts: http://feedback.azure.com/forums/34192–general-feedback

Many of the newly added capabilities originate from this forum.

#azure, #microsoft

Wire Data in Operations Management Suite

Microsoft finally released a new solution pack for Operations Management Suite the other day, one which I have been waiting for since Ignite: Wire Data!

This is a solution pack that gathers metadata about your network. It requires a local agent installed on your servers, as with other solution packs, but allows you to get more detailed information about the network traffic happening inside your infrastructure.

So if you have OMS, you just need to go into the solution packs and add the Wire Data pack.

image

But note that after adding the solution pack, it needs a while to gather the necessary data to create a sort of baseline of the network traffic.

image

After it is done, it groups together the communication that has happened on the agents, so you can see which protocols are most often in use.

image

For instance, I can see that there is a lot of Unknown traffic happening on my agent; I can drill down for more info about that particular traffic, and then see in detail where it is going.

image

I can also drill down to see what kind of process is initiating the traffic going back and forth. Something I would like to see here is the ability to add custom info: let's say I have a particular application running which uses some custom ports and processes, I would like to add a custom name to that application so it can appear in the logs and in the overview.

Other than that, it provides some great insight into what kind of traffic is going back and forth inside the infrastructure, and Microsoft has added some great common queries.

image

#microsoft, #oms, #system-center

Setting up Microsoft Azure and Iaas Backup

Earlier today Microsoft announced the long-awaited feature which allows us to take backups of virtual machines directly in Azure. Before today, Microsoft didn't have any solution for backing up a VM other than doing a blob snapshot or using some third-party solution. You can read more about it here –> http://azure.microsoft.com/blog/2015/03/26/azure-backup-announcing-support-for-backup-of-azure-iaas-vms/

The IaaS backup feature is part of the Azure Backup vault and is pretty easy to set up. It is important to note that enabling the backup feature requires Azure to install a guest agent in the VM (so the VMs need to be online during the registration process), and note that this is per region.

So now, when we create a backup vault, we get the new preview features. (First off, we can also create storage replication policies.)

1

Now, in order to set up a backup routine, we first need to set up a policy, which defines when to take backups.

2

Next, head over to the dashboard; first the backup vault needs to detect which virtual machines it can protect (so click Discover).

3

So it finds the two virtual machines which are part of the same subscription and in the same region.

4

NOTE: If one of your virtual machines is offline during the process, the registration job fails (so don't select VMs that are offline, or just turn them on). Now, after the item has been registered to the vault, I can see it under protected items in the backup vault.

 

6

Now that this is set up, I can see under Jobs which VMs are covered by the policy.

7

When I force-start a backup job, I can see the progress under the Jobs pane.

7

I can also click on the job and see what is happening.

9

For this virtual machine, which is a plain vanilla OS image, the backup took about 22 minutes, and doing a new backup an hour later took about the same amount of time, so it looks like backups are not incremental.

image

When doing a restore, I can choose from the different recovery points.

image

And I can define where to restore: to a new cloud service, or back to the original VM.

image

#azure-backup, #microsoft

Citrix XenMobile and Microsoft Cloud happily ever after ?

There is no denying that Microsoft is moving more and more focus to their cloud offerings, with solutions such as Office365, EMS (Enterprise Mobility Suite) and of course their Azure platform.

EMS, the latest product bundle in the suite, gives customers Intune, Azure Rights Management and Azure Active Directory Premium. So if a customer already has Office365 (their users are already in Azure AD), they can easily be attached to EMS for more features.

We are also seeing Microsoft adding more and more management capabilities for Office365 into their Intune suite (which is one of the key points no other vendor has yet), but is this type of management something we need? Or is it just there to give it a "key" selling point?

Now, Microsoft has added a lot of MDM capabilities to Intune, but they are nowhere close to the competition yet. Of course they have other offerings in the EMS pack, like Azure Rights Management, which is quite unique in the way it functions and integrates with Azure AD and Office365. As of 2014 Microsoft isn't even listed in the Gartner quadrant for EMM (which they stated would be the goal for 2015).

But it will be interesting to see if Microsoft's strategy is to compete head-to-head with the other vendors, or if they wish to deliver the basic features and delve more into Azure AD and identity management across clouds and SaaS offerings.

Citrix on the other hand has their XenMobile offering, which is a more complete EMM product suite (MDM and MAM, follow-me data with ShareFile, and so on). Citrix has a lot of advantages here, for instance using ShareFile versus OneDrive. ShareFile encrypts data even when it is stored locally, running in a sandboxed application on a mobile device, while the only option OneDrive has is to be used as part of Rights Management Services (of course, OneDrive has extensive data encryption in transit and at rest: https://technet.microsoft.com/en-us/library/dn905447.aspx).

Citrix also has MicroVPN functionality and secure browser access using VPN access through NetScaler, while Microsoft's secure browser application is much more limited, restricting which URLs can be opened and what content can be viewed from that browser.

So from a customer standpoint, you need to ask yourself:

  • What kind of requirements does my business have?
  • Do I use Office365 or a regular on-premises setup?
  • Do I need the advanced capabilities?
  • How are my users actually working?

Is there a best of both worlds using both of these technologies ?

Well, yes!

Now, of course there are some features that overlap when using Office365 and EMS + XenMobile, but there are also some features which are important to be aware of.

* Citrix has ShareFile storage controller templates in Azure (meaning that if a customer has IaaS in Azure, they can set up a ShareFile connector in Azure and use that to publish files and content without using OneDrive).

* Citrix has a ShareFile connector for Office365 (which allows users to use ShareFile almost as a file aggregator for communicating between Office365 and their regular file servers), allowing for secure editing directly from ShareFile.

* Citrix XenMobile has a lot better MDM features for Windows Phone than Intune has at the moment.

* Azure AD has a lot of built-in SSO access to many of Citrix's web-based applications (ShareFile, GTM, GTA and so on); since users are already in Azure AD Premium, it can be used to grant access to the different applications using SSO.

* NetScaler as a SAML IdP (if we have an on-premises enterprise solution, we can use the NetScaler to operate as a SAML identity provider against Office365, which allows it to replace ADFS, which is otherwise required for full SSO of on-premises AD users to Office365).

* Office365 ProPlus with Lync is supported on XenApp/XenDesktop with the Lync optimization pack (note that this is not part of XenMobile but of the Workspace Suite).

* NetScaler and Azure MFA (we can use Azure MFA with NetScaler to publish web-based applications with traffic optimization).

* NetScaler will also soon be available in Azure, which allows for setting up a full Citrix infrastructure in Azure.

Looking ahead, my guess is that Microsoft will move forward with the user collaboration part and become the heart of identity management with Azure AD and Rights Management, while Citrix will focus more on enabling mobility using solutions like EMM (MAM), follow-me data aggregation, and secure file and device access. Citrix will also play an important part in hybrid setups, using NetScaler with CloudBridge and as an on-premises identity provider.

#citrix, #ems, #intune, #microsoft, #office365, #xenmobile

Upcoming events and stuff

There's a lot happening lately, and therefore it has been a bit quiet here on this blog. But here is a quick update on what's happening!

For February, I just got confirmation that I am presenting at the NIC conference (which is the largest event for IT pros in Scandinavia, nicconf.com). Here I will be presenting two (maybe three) sessions:

* Setting up and deploying Microsoft Azure RemoteApp
* Delivering high-end graphics using Citrix, Microsoft and VMware

One session will be primarily focused on Microsoft Azure RemoteApp, where I will show how to set up RemoteApp in both cloud and hybrid deployments and talk a little bit about its use cases. The second session will focus on delivering high-end graphics and 3D applications using RemoteFX (on vNext Windows Server), HDX and PCoIP; I will talk and demo a bit about how it works, pros and cons, VDI or RDS, and endpoints. My main objective is to talk about how to deliver applications and desktops from cloud and on-premises.

And on the other end, I have just signed a contract with Packt Publishing to write another book on NetScaler, "Mastering NetScaler VPX", which will be kind of a follow-up to my existing book http://www.amazon.co.uk/Implementing-Netscaler-Vpx-Marius-Sandbu/dp/178217267X/ref=sr_1_1?ie=UTF8&qid=1417546291&sr=8-1&keywords=netscaler

It will go more in depth on the different subjects, and cover the 10.5 features as well.

I am also involved with a community project I started: a free eBook about Microsoft Azure IaaS, where I have some very skilled Norwegians with me writing on the subject. It takes some time, since Microsoft is always adding new content that needs to be added to the eBook as well.

So a lot is happening! More blog posts coming soon around Azure and CloudBridge.

#azure, #citrix, #microsoft, #netscaler, #vmware