Windows Server 2016 and introducing AVC 444 mode

With the upcoming release, Microsoft has been busy adding a lot of new features to the RDS platform, like RemoteFX vGPU for Gen 2 VMs and for the server OS, GPU passthrough, server VDI and so on. But out of the blue came this: https://blogs.msdn.microsoft.com/rds/2016/01/11/remote-desktop-protocol-rdp-10-avch-264-improvements-in-windows-10-and-windows-server-2016-technical-preview/

Rachel Berry at NVIDIA has also blogged about the limitations of H.264 and 4:2:0 chroma subsampling: https://virtuallyvisual.wordpress.com/2016/02/18/microsoft-rdp-end-client-h-264-444-hardware-decode-support-on-existing-decoders-420-elegant-and-cool/. Anyhow, I was curious: how does this actually affect my RDP performance?

The setup is pretty simple: I can use either RemoteFX vGPU or a plain virtual machine running 2016 TP4. I also need a Windows 10 client on build 1511 or higher.

There are basically two policies that we can define:

image

As for the Prefer AVC hardware encoding policy: if you are using RemoteFX vGPU, it should be set on the Hyper-V host and not on the RDS server itself.

image
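
If you would rather not use the Group Policy editor, the same settings can be pushed through the registry. This is a minimal sketch, and the value names are my assumption of what the ADMX maps to, so verify them against the policy definitions before relying on them:

# Assumed registry values behind the two policies (verify against the ADMX)
$path = "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"
New-ItemProperty -Path $path -Name "AVC444ModePreferred" -PropertyType DWord -Value 1 -Force
New-ItemProperty -Path $path -Name "AVCHardwareEncodePreferred" -PropertyType DWord -Value 1 -Force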

After these settings are enabled and active, the next time you connect to an RDP session you will see that it triggers an event in the event log under Applications and Services Logs -> Microsoft -> Windows -> RemoteDesktopServices-RdpCoreTS. You just need to reconnect after setting the policy.

image

Profile: 2048 (Yep we are good)
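
Instead of scrolling through Event Viewer you can pull the same events with PowerShell. A small sketch; the message filter is just my assumption of a string that matches the AVC profile events:

Get-WinEvent -LogName "Microsoft-Windows-RemoteDesktopServices-RdpCoreTS/Operational" |
    Where-Object { $_.Message -like "*Profile*" } |
    Select-Object -First 5 TimeCreated, Id, Message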

Now I did a couple of simple tests to see the bandwidth usage (and also to compare RDS 2012 R2 and 2016). Both ran with the same policy setup, except that one had AVC 444 mode enabled and the other didn't.

This was a simple video transfer test; the first line is RDS 2012 R2 and the one below is 2016 TP4 (almost 20 MB lower bandwidth usage), wow!

image

Then I ran the same test again and got similar results for 2012 R2, but this time 2016 was running with AVC 444 mode enabled. As my initial guess suggested, it used a bit more bandwidth.

image

I also did a couple more tests, but the conclusion was roughly the same: AVC 444 mode uses about 10% more bandwidth for video. I will need to test more to see how much better the graphics actually look.

#avc-444-mode, #windows-server-2016

Windows Azure Stack – What about the infrastructure story?

There is no denying that Microsoft Azure is a success story, going from the lame Silverlight portal with limited capabilities that it once was to a global force to be reckoned with in the cloud marketplace.

Later today Microsoft is releasing the first technical preview of Azure Stack, which allows us to bring the power of the Azure platform to our own datacenters. It brings the same consistent UI and the feature set of Azure Resource Manager, which lets us use the same tools and resources we have used in Azure against our own local cloud.

This will of course allow large customers and hosting providers to deliver the Azure platform from their own datacenters, and the idea seems pretty good. But what is Azure Stack actually? It only delivers half of the promise of a cloud-like infrastructure, so I would place Azure Stack in the category of cloud management platform, since it gives us the framework and the portal experience.

Now when we eventually have this set up and configured, we get some of the benefits of the cloud:

  • Automation
  • Self-Service
  • A common framework and platform to work with

Now if we look at the picture above, there are some important things we need to think about in terms of fitting into the cloud picture, namely the compute fabric, network fabric and storage fabric, which are missing from the Microsoft story. Of course Microsoft is a software company, but they are moving forward with their CPS solution with Dell and edging a bit towards the hardware space, though they are nowhere close yet.

When I think about Azure I also think about the resources underneath: they are always available, not silo-based, and can scale up and down as I need them to. If we look at the way Microsoft has built their own datacenters, there is no SAN architecture at all, just a bunch of single machines with local storage, using software to connect all of that storage and compute into a large pool of resources. That is the way it should be, since a SAN architecture simply cannot fit into a full cloud solution, and it is also the way it should be for an on-premises solution. If we deploy Azure Stack to deliver the benefits of a cloud solution, the infrastructure should reflect that. As of right now, Microsoft cannot offer a good enough storage/compute solution with Storage Spaces in 2012 R2, since there are limits to the scale and points of failure which a public cloud does not have.

Nutanix is one of the few providers that delivers support for Hyper-V and SMB 3.0, does not have any scale limits, and has the same properties as a public cloud solution. It aggregates the local drives within each node into a pool of storage, with redundancy in all layers, and exposes a REST API that can easily integrate with Azure Stack. I can easily see that as the best way to deliver an on-premises cloud solution, and a killer combination.

#azure, #azure-stack, #hci, #nutanix, #windows-server-2016

Network capabilities with Windows Server 2016

Now with the release of Windows Server 2016, too many have been caught up in the support for Docker, Nano Server and Storage Spaces Direct, and too many are missing out on the big investment Microsoft is making in WS2016, namely the networking stack!

This is also going to be a big part of the story when Microsoft releases Azure Stack, since most of the Azure networking functionality is being ported to Windows Server 2016.

So what is actually new? So far all we have are the TP3 bits, so this is what is included. Most of these features are only available from PowerShell and are part of the Network Controller stack.

  • Software Load Balancer (SLB) and Network Address Translation (NAT). The north-south and east-west layer 4 load balancer and NAT enhances throughput by supporting Direct Server Return, with which the return network traffic can bypass the Load Balancing multiplexer.

  • Datacenter Firewall. This distributed firewall provides granular access control lists (ACLs), enabling you to apply firewall policies at the VM interface level or at the subnet level.

  • Gateways. You can use gateways for bridging traffic between virtual networks and non-virtualized networks; specifically, you can deploy site-to-site VPN gateways, forwarding gateways, and Generic Routing Encapsulation (GRE) gateways. In addition, M+N redundancy of gateways is supported.

  • Converged Network Interface Card (NIC). The converged NIC allows you to use a single network adapter for management, Remote Direct Memory Access (RDMA)-enabled storage, and tenant traffic. This reduces the capital expenditures that are associated with each server in your datacenter, because you need fewer network adapters to manage different types of traffic per server.

  • Packet Direct. Packet Direct provides a high network traffic throughput and low-latency packet processing infrastructure.

  • Switch Embedded Teaming (SET). SET is a NIC Teaming solution that is integrated in the Hyper-V Virtual Switch. SET allows the teaming of up to eight physical NICs into a single SET team, which improves availability and provides failover. In Windows Server 2016 Technical Preview, you can create SET teams that are restricted to the use of Server Message Block (SMB) and RDMA (see the sketch after this list).

  • Network monitoring. With network monitoring, network devices that you specify can be discovered, and you can monitor device health and status.

  • Network Controller. Network Controller provides a scalable, centralized, programmable point of automation to manage, configure, monitor, and troubleshoot virtual and physical network infrastructure in your datacenter. For more information, see Network Controller.

  • Flexible encapsulation technologies. These technologies operate at the data plane, and support both Virtual Extensible LAN (VxLAN) and Network Virtualization Generic Routing Encapsulation (NVGRE). For more information, see GRE Tunneling in Windows Server Technical Preview.

  • Hyper-V Virtual Switch. The Hyper-V Virtual Switch runs on Hyper-V hosts, and allows you to create distributed switching and routing, and a policy enforcement layer that is aligned and compatible with Microsoft Azure.
    image
    I think this will allow us to create L2 connections directly to virtual networks in Azure.

  • Standardized Protocols. Network Controller uses Representational State Transfer (REST) on its northbound interface with JavaScript Object Notation (JSON) payloads. The Network Controller southbound interface uses Open vSwitch Database Management Protocol (OVSDB).
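
As mentioned in the SET bullet above, here is a rough sketch of creating a SET-based vSwitch with PowerShell; the adapter and switch names are just placeholders:

# Create a Hyper-V switch with Switch Embedded Teaming across two physical NICs
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true
# Optionally add a host vNIC for SMB traffic and enable RDMA on it
Add-VMNetworkAdapter -ManagementOS -Name "SMB01" -SwitchName "SETswitch"
Enable-NetAdapterRdma -Name "vEthernet (SMB01)"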

Also, with the current investment in the OMI stack and the support for PowerShell DSC, we can easily extend this to the physical network as well. And since the Network Controller uses JSON for management, we can expect to be able to use the same Resource Manager capabilities that are used in Azure once Azure Stack becomes available.
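
To illustrate the northbound side, here is a hedged example of querying the Network Controller REST API with PowerShell; the controller FQDN is made up, and the /networking/v1 path and resource name are my assumptions based on the preview documentation:

# Query the Network Controller northbound REST interface for logical networks (sketch)
$ncRestUri = "https://nc01.contoso.local"
Invoke-RestMethod -Uri "$ncRestUri/networking/v1/logicalNetworks" -Method Get -UseDefaultCredentials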

#network-controller, #sdn, #windows-server-2016

Setting up Storage Spaces Direct on Windows Server 2016 TP3

This is a step-by-step guide on how to set up a minimal Storage Spaces Direct cluster on virtual machines running on VMware Workstation. It is also meant to enlighten people a bit about the functionality Microsoft is coming with and what it is lacking at the moment.

An important thing to remember about Storage Spaces Direct is that it is Microsoft's first step into converged infrastructure, since it allows us to set up servers using locally attached storage and create a cluster on top, kind of like VSAN and Nutanix, but not quite there yet. On top of the cluster functionality it uses Storage Spaces to create a pool and carve out vDisks to store virtual machines on. Storage Spaces Direct is not at the same level as VSAN and Nutanix, but it can also be used for general file server usage.

image (hyperconverged clustering overview)

This setup is running on VMware Workstation 11, with two virtual machines for the scale-out file server and one domain controller.
Each of the two scale-out file servers has 4 virtual hard drives and 2 NICs attached.

It is important that the hard drives are SATA based.

image

After setting up the virtual machines, install the File Services and Failover Clustering features:

Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools

Then create a failover cluster using Failover Cluster Manager or PowerShell:

New-Cluster -Name hvcl -Node hv01,hv02 -NoStorage

After the cluster setup is complete, we need to define that this is going to be a Storage Spaces Direct cluster:

Enable-ClusterStorageSpacesDirect

image

Then run a validation test to make sure that the Storage Spaces Direct cluster should work as intended.

image
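
The validation can also be run from PowerShell. A small sketch; the "Storage Spaces Direct" test category name is my assumption for these preview bits:

Test-Cluster -Node hv01,hv02 -Include "Storage Spaces Direct","Inventory","Network","System Configuration"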

Now you might get a warning that the disks on both nodes have the same identifier; in that case you need to shut down one of the VMs and change the SATA disk identifier.

image

Then define cluster network usage

image

The storage replication network will be set to Cluster Only usage. Now that we have a bunch of disks available we need to create a storage pool. This can be done either from Failover Cluster Manager or with PowerShell (see the sketch below).

image
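
For the PowerShell route, a minimal sketch could look like this; the pool name is a placeholder, and the way the clustered storage subsystem is referenced by a wildcard friendly name is an assumption, so you may need to point at it by its exact name instead:

# Grab all disks that can be pooled and create a clustered storage pool from them (sketch)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -StorageSubSystemFriendlyName "*Cluster*" -FriendlyName "S2DPool" -PhysicalDisks $disks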

 

Either way, you should disable write-back cache on a Storage Spaces Direct cluster, which can be done after the pool is created using Set-StoragePool -FriendlyName "nameofpool" -WriteCacheSizeDefault 0

image

Now we can create a new virtual disk and configure settings like storage resiliency and so on.

image
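
The same can be scripted; a rough PowerShell equivalent, where the pool name, vDisk name and size are placeholders:

New-VirtualDisk -StoragePoolFriendlyName "S2DPool" -FriendlyName "vDisk01" -ResiliencySettingName Mirror -Size 100GB -ProvisioningType Fixed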

Then we are done with the vDisk

image

When going through the virtual disk partition setup, make sure that you set the file system to ReFS.

image
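
If you prefer to skip the wizard, here is a sketch of initializing, partitioning and formatting the vDisk with ReFS from PowerShell (names are placeholders):

Get-VirtualDisk -FriendlyName "vDisk01" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "S2DVolume"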

Now we can see the default values of the Storage Spaces Direct vDisk.

image

Now I can create a CSV volume from that vDisk.

image
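
From PowerShell the cluster disk resource backing the vDisk can be added as a CSV; the resource name below is a guess, so check Get-ClusterResource for the actual name first:

Add-ClusterSharedVolume -Name "Cluster Virtual Disk (vDisk01)"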

After we have created the CSV, we need to add the Scale-Out File Server role as a clustered role.

image
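
The PowerShell equivalent of adding the role; the client access point name is just a placeholder:

Add-ClusterScaleOutFileServerRole -Name "SOFS01"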

Next we need to add a file share to expose the storage over SMB to external consumers such as Hyper-V.

image

image

image
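
The share can also be created from PowerShell. A sketch assuming the CSV is mounted as Volume1 and that the Hyper-V computer account should get full access; the path and account names are placeholders:

New-Item -Path "C:\ClusterStorage\Volume1\Shares\VMs" -ItemType Directory
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\Shares\VMs" -FullAccess "domain\hyperv-host1$","domain\Domain Admins"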

And we are done!

We can now access the Storage Spaces Direct cluster using the client access point we defined. During a file transfer we can see which endpoint is being used on the receiving end; in this case it is the host 192.168.0.30, which receives the file transfer from 192.168.0.1 and then replicates the data to 10.0.0.2 across the cluster network.

image

The SMB client uses DNS to do an initial request to the SMB file server, and then they negotiate which SMB dialect to use. (This is from Message Analyzer.)

image

Now, what is it missing?

Data locality! I have not seen any indication that Hyper-V clusters running on top of Storage Spaces Direct in a hyperconverged deployment have the ability to place or run virtual machines on the node that holds their data. This creates a fragmentation of storage and compute, which is not really good. Maybe this will be implemented in the final version of Windows Server 2016, but the SMB protocol does not have any built-in mechanism that handles this. Nutanix, for instance, has this built in, since the controller will see whether the VM is running locally or not and will start replicating the bits and bytes until the data is local to where the processing runs.

#nutanix, #storage-spaces-direct, #windows-server-2016, #hyper-v

Implementing Containers on Windows Server 2016 and running IIS

Since TP3 was released yesterday, I have been quite busy trying to implement Containers on top of a Hyper-V host. Microsoft has been kind enough to give us a ready-made container image, which makes the first part pretty easy.

In order to deploy containers we need a container host. The easiest way to get started is to download a finished script from Microsoft, which we can run directly on a Hyper-V host to get a container host VM.

NOTE: Containers do not require Hyper-V, but this setup uses a Hyper-V host to run the container host as a virtual machine.

wget -uri http://aka.ms/newcontainerhost -OutFile New-ContainerHost.ps1

This downloads a PowerShell script from that URL. When we run it we need to define a couple of things, first of which is the name of the VM and the password for the built-in administrator account. The script will then, in essence, do a couple of things:

1: Download a finished sysprepped container host image from http://aka.ms/ContainerOsImage, which is in essence WindowsServer_en-us_TP3_Container_VHD.

2: Enable the Containers feature on the host VM. This is part of the unattend process; the last part of the script contains an unattend section which is processed against the container host VM.

3: Boot the VM as a container host, open a PowerShell Direct session once the VM is booted, and finish the setup.
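
For reference, kicking off the script looks roughly like this; the parameter names are my assumption from memory of the TP3 bits, so check Get-Help .\New-ContainerHost.ps1 before running it:

.\New-ContainerHost.ps1 -VmName "containerhost01" -Password "P@ssw0rd1"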

After that, you have a running container host, and we can connect to the VM using Hyper-V Manager.

image

Not much to see yet. It is important to remember that the setup creates a built-in NAT switch on the container host, with a predefined subnet range.

image

The container host takes the first IP in that range. Now if we run Get-ContainerHost and Get-ContainerImage, we should see that the VM is a container host and that we have a WindowsServerCore image available.

Now in order to create a Container we need to run the following command

$container = New-Container -Name "MyContainer" -ContainerImageName WindowsServerCore -SwitchName "Virtual Switch"

The name of the switch needs to be identical to the one already added; it can be viewed using Get-VMSwitch.

The reason we store it in a variable is that we need to reference it later when using PowerShell Direct.

I can use the command Get-Container to see that it has been created. Then I start the container using Start-Container -Name "MyContainer".

I can now see that the container is running and is attached to the NAT vSwitch

image

Great! So what now?

As I mentioned earlier, we stored the container object in a variable in order to use it later, and this is the time. Now we need to open a PowerShell Direct session to the container. If the variable is gone, we can always repopulate it with $container = Get-Container -Name "MyContainer".

By using the command

Enter-PSSession -ContainerId $container.ContainerId -RunAsAdministrator

We can now enter a remote session against the Container. We can also see that the container ID is shown at the start of the prompt

image

Also verify that it has gotten an IP address from the NAT network.

image

So now what? Let's start by installing IIS in the container, which can be done with the command Install-WindowsFeature -Name Web-Server

After that is installed, verify that the W3SVC service is running:

Get-Service -Name W3SVC

image

Now that we have deployed IIS in the container, we need to set up a static NAT rule to open port 80. In my case my lab resides on 192.168.0.0/24, while the NAT switch is on the 172.16.0.0 network.

NOTE: Another option is to enable the built-in administrator account so that we can use RDP against the container in the future (make sure you add the proper NAT rules; see the sketch after the port 80 mapping below):

net user administrator /active:yes

So in order to add a static forwarding rule on the container host VM, just use the command and specify ports and IP addresses: Add-NetNatStaticMapping -NatName "ContainerNat" -Protocol TCP -ExternalIPAddress 0.0.0.0 -InternalIPAddress 172.16.0.3 -InternalPort 80 -ExternalPort 80
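
Following the note above about RDP, a similar mapping can be added for port 3389 against the same container IP (a sketch; adjust the external port if 3389 is already in use on the host):

Add-NetNatStaticMapping -NatName "ContainerNat" -Protocol TCP -ExternalIPAddress 0.0.0.0 -InternalIPAddress 172.16.0.3 -InternalPort 3389 -ExternalPort 3389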

Next I just do a nasty firewall disable edit

Set-NetFirewallProfile -Profile Domain,Public,Private -Enabled False

Then by running Get-NetNatStaticMapping on the container host I can see the rules I created. I also added some new rules for RDP purposes.

image

Now my container host is set up with two IP addresses: one is 172.16.0.1 and the other is 192.168.0.10. When I connect to the latter, the NAT rules kick in and forward me to the IIS service running in the container.

Now I can see that I have a NAT session active

image

And that IIS opens on the Container

image

Now that I have a container with IIS installed, I can stop the container and create a new container image from it.

Stop-Container -Name "test2"

And then create the image by using the command

$newimage = New-ContainerImage -ContainerName test2 -Publisher Demo -Name newimage -Version 1.0
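
The point of the new image is that it can be used to spin up more containers that already have IIS baked in; a quick sketch using the same cmdlets as before (names are placeholders):

New-Container -Name "web01" -ContainerImageName "newimage" -SwitchName "Virtual Switch"
Start-Container -Name "web01"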

So this has been a first introduction to Containers running on TP3. Note that many utilities do not work properly inside containers, such as sconfig, which tries to list network interfaces that are not presented within a container, so some settings are not available.

#containers, #tp3, #windows-server-2016

Live at Keynote Microsoft Ignite

Even though the wireless isn't completely reliable, I will try to maintain the flow as much as I can, even though it might get published later. (I have to be honest, the Wi-Fi is horrible; they haven't planned it properly, Cisco based...)

The keynote hall opened around 8 AM, and on the stage Microsoft even had an in-house DJ playing, @joeysnow.

The keynote starts at 9 AM, where a lot of new stuff is expected to be released. Some of the news will just be a recap of what happened at @MSbuild, plus some other bits.

It was just confirmed that there are 23,000 attendees present at MSIgnite, and they are streaming all of the sessions live! (The keynote hall has 15,000 seats.)

First announcement from Satya:

Windows Update for Business, which he didn't say much about. (TechNet blog on it here: http://t.co/daQ6lLBng4)

Office 2016 new public preview: http://blogs.office.com/2015/05/04/office-2016-public-preview-now-available/

Skype for Business broadcasting

Office Delve Organizational analytics.

Windows Server and System Center 2016 https://technet.microsoft.com/en-us/subscriptions/downloads/?FileId=63651&utm_source=dlvr.it&utm_medium=twitter 

What's new in System Center Configuration Manager: https://technet.microsoft.com/library/dn965439.aspx (on-prem MDM, yay!)

SQL Server 2016 (preview later today), with the ability to stretch databases to Azure: http://blogs.technet.com/b/dataplatforminsider/archive/2015/05/04/sql-server-2016-public-preview-coming-this-summer.aspx

Azure Stack (the successor to Azure Pack): http://blogs.technet.com/b/server-cloud/archive/2015/05/04/microsoft-brings-azure-to-the-datacenter-for-the-next-generation-of-hybrid-cloud.aspx Public preview coming this summer.

Operations Management Suite (one consistent IT control plane, along the same lines as Azure EMS): http://www.microsoft.com/en-us/server-cloud/operations-management-suite/default.aspx?WT.mc_id=Blog_ServerCloud_Announce_TTD

Advanced Threat Analytics (Microsoft entering the security field again), which is going to integrate with AD to look at authentication logs (guessing it's going to be like Audit Collection Services in System Center). More info about the EMS part here: http://blogs.technet.com/b/enterprisemobility/archive/2015/05/04/ignite-microsofts-next-chapter-in-enterprise-mobility.aspx

 

Windows 10 and Device Guard, which is a better-integrated AppLocker.

Outlook is MAM enabled, and MAM-enabled Skype for Business is coming in Q3.

Data leakage protection in Windows 10 with integrated file encryption.

Document tracking site for Azure RMS, which gives us the ability to see who has opened specific documents.

Azure AD leaked-credential detection rolling out over the next couple of weeks, along with the on-premises part, which I will be trying out later today.

Microsoft also announced Azure DNS http://azure.microsoft.com/en-in/services/dns/

So a lot of stuff was announced today; looking forward to trying it out.

#advanced-threat-analytics, #office2016, #operations-management-suite, #system-center-2016, #windows-server-2016