Deep dive into Framehawk (from a networking perspective)

Well, Citrix has released Framehawk with support for both enterprise WLAN and remote access using Netscaler. In order to set up Framehawk for remote access you basically need to do one thing: enable DTLS (and of course rebind the SSL certificate). DTLS is a variant of TLS that runs on top of UDP, which basically means that Framehawk is a UDP protocol. This is unlike RemoteFX, where Microsoft uses both TCP and UDP in a remote session: UDP for graphics and TCP for keystrokes and such.

So what does a Framehawk connection look like?


Externally, a client makes a DTLS connection to the Netscaler, and the Netscaler then uses a UDP connection to the VDA in the backend. Each DTLS connection has its own sequence numbers, which are used to keep track of the datagrams within the connection.
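To illustrate what that sequence tracking looks like on the wire, here is a minimal Python sketch that unpacks the 13-byte DTLS record header, which adds an explicit epoch and a 48-bit record sequence number on top of the usual TLS record fields so the endpoints can detect lost, reordered or replayed datagrams. The sample bytes are made up for illustration and there is nothing Framehawk-specific in them.

```python
import struct

# DTLS record header: type (1), version (2), epoch (2), sequence number (6), length (2)
DTLS_HEADER = struct.Struct("!BBBHHIH")  # 13 bytes in total

def parse_dtls_record_header(datagram: bytes) -> dict:
    (content_type, ver_major, ver_minor,
     epoch, seq_hi, seq_lo, length) = DTLS_HEADER.unpack_from(datagram)
    sequence_number = (seq_hi << 32) | seq_lo   # 48-bit record sequence number
    return {
        "content_type": content_type,           # 22 = handshake, 23 = application data
        "version": (ver_major, ver_minor),      # (254, 253) means DTLS 1.2
        "epoch": epoch,
        "sequence_number": sequence_number,
        "length": length,
    }

# Fabricated header for an application-data record, just to exercise the parser
sample = struct.pack("!BBBHHIH", 23, 254, 253, 1, 0, 42, 120)
print(parse_dtls_record_header(sample))
```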


There are some issues that you need to be aware of before setting up Framehawk.

Another important note is that Framehawk will not work properly over a VPN connection, since most VPN solutions wrap packets inside a TCP layer or a GRE tunnel, which means that the UDP connection will not function as intended.


Now, Framehawk is not designed for low-bandwidth connections; it requires more bandwidth than Thinwire, so why is that?

“For optimal performance, our initial recommendation is a base of 4 or 5 Mbps plus about 150 Kbps per concurrent user. But having said that, you will likely find that Framehawk greatly outperforms Thinwire on a 2 Mbps VSAT satellite connection because of the combination of packet loss and high latency.”
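To put those numbers into perspective, here is a quick back-of-the-envelope sizing sketch in Python. The base value and the per-user figure come from the quote above; the linear model and the default of 5 Mbps are just my assumptions for illustration, not an official sizing formula.

```python
# Rough Framehawk bandwidth estimate: base circuit plus ~150 Kbps per concurrent user.
def framehawk_bandwidth_mbps(concurrent_users: int,
                             base_mbps: float = 5.0,
                             per_user_kbps: float = 150.0) -> float:
    """Return an estimated aggregate bandwidth requirement in Mbps."""
    return base_mbps + (concurrent_users * per_user_kbps) / 1000.0

for users in (1, 10, 50, 100):
    print(f"{users:>3} users -> ~{framehawk_bandwidth_mbps(users):.1f} Mbps")
```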

The reason Framehawk needs more bandwidth is that TCP retransmits packets that are dropped, while UDP is a much simpler protocol with no connection setup delays, flow control or retransmission. Since UDP is stateless and there is no guarantee that packets are successfully delivered, Framehawk has to do its own work to make sure that every mouse click and keystroke arrives, and that costs extra bandwidth. I believe the Framehawk component of Citrix Receiver has its own “click” tracker which ensures that clicks are successfully delivered, and that is part of why it requires more bandwidth.
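As a purely conceptual sketch of what such a tracker could look like (this is my own illustration, not Citrix's actual implementation), the sender could keep every input event until the other side acknowledges its sequence number, and re-send anything that stays unacknowledged:

```python
import time

class InputEventTracker:
    """Toy application-level delivery tracking on top of a fire-and-forget UDP send."""

    def __init__(self, retransmit_after: float = 0.2):
        self.retransmit_after = retransmit_after
        self.next_seq = 0
        self.unacked = {}                       # seq -> (event, last_sent_timestamp)

    def send(self, event, send_datagram):
        seq = self.next_seq
        self.next_seq += 1
        self.unacked[seq] = (event, time.monotonic())
        send_datagram(seq, event)               # UDP itself gives no delivery guarantee
        return seq

    def ack(self, seq):
        self.unacked.pop(seq, None)             # receiver confirmed this event

    def resend_lost(self, send_datagram):
        now = time.monotonic()
        for seq, (event, sent_at) in list(self.unacked.items()):
            if now - sent_at > self.retransmit_after:
                send_datagram(seq, event)       # this retransmission is the extra bandwidth
                self.unacked[seq] = (event, now)
```

Every acknowledgment and retransmission is extra traffic on top of the graphics data, which is part of why a reliable-feeling session over a lossy UDP path ends up costing more bandwidth than Thinwire over TCP.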

New Azure backup “agent”

Today I was notified of a new Azure Backup agent which was released on Azure and on the Download Center. Until recently Microsoft did not have support for backing up on-premises SharePoint, SQL, Exchange and Hyper-V; Azure Backup was limited to files and folders. Now, if we go into the Azure portal, we can see that they have updated the feature set in the backup vault.


This points to a download called Azure Backup, which was released yesterday. The new feature allows on-premises, disk-to-cloud backup of Exchange, SQL, SharePoint and Hyper-V, yay!


During the setup we can see that this is a typical rebranded DPM setup. It supports most of the same workloads, but it does not include tape support, and it is most likely aimed at replacing DPM with tape by moving to DPM with a cloud tier instead.


As we can see, the Azure Backup wizard is basically DPM; it also includes SQL Server 2014.


The wizard will also set up integration with a backup vault using a vault credential, which can be downloaded from the Azure website.


And voilà, the end product! So instead of reinventing the wheel, Microsoft basically rebranded DPM as an Azure product. Does this mean the end of System Center DPM? Time will tell when an official blog post comes up.


Comparing Microsoft Storage Spaces Direct and Nutanix

There has been a lot of fuss around Storage Spaces Direct coming with Windows Server 2016, and I have been getting a lot of questions about it lately: “Will it solve my storage issues?” “Can we replace our existing SAN?” “When should we choose Storage Spaces Direct over a SAN?” and so on.

Now, as of right now not all the technical details about the feature are known, and not all features are 100% in place, but this blog post will compare Nutanix and Storage Spaces Direct and how they differ. Storage Spaces Direct is a more advanced Storage Spaces setup; it builds on the same capabilities, but now we can aggregate local disks inside the servers to set up an SMB 3.0-based file service.

This is an overview of what a Storage Spaces Direct setup might look like; it requires four nodes and an RDMA backbone, and I will come back to why that is a requirement. As I have mentioned previously, Storage Spaces Direct has an issue with data locality: Microsoft treats storage and compute as two separate entities, and that is reflected in the Storage Spaces Direct setup, since it can be deployed as two separate components, either an SMB Scale-Out File Server or a hyperconverged setup.

When setting it up as hyperconverged, the following happens:


Let us say that we have VM01 running on NODE1, on top of a Storage Spaces Direct vdisk01 configured as a two-way mirror. What happens is that Storage Spaces splits the vDisk into 1 GB extents and spreads those chunks across separate nodes. So even though VM01 is running on a specific host, its storage is placed more or less randomly across the different hosts within the cluster, which means a lot of east-west traffic is generated inside the cluster. That is why Microsoft requires an RDMA network backbone for a Storage Spaces Direct cluster: it needs low-latency, high-throughput traffic to be efficient in this type of setup, since Microsoft just treats the different nodes as a bunch of disks.
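To make the east-west point a bit more concrete, here is a toy Python simulation of extent placement for a two-way mirrored vDisk. The 1 GB extent size comes from above; the node names, the vDisk size and the random placement are purely illustrative assumptions, not how the real allocator works.

```python
import random

NODES = ["NODE1", "NODE2", "NODE3", "NODE4"]   # minimum four-node cluster
EXTENT_SIZE_GB = 1
VDISK_SIZE_GB = 100
VM_HOST = "NODE1"                              # where VM01 happens to run

def place_extents(vdisk_size_gb: int, copies: int = 2):
    """Place each 1 GB extent on two different nodes (two-way mirror)."""
    placement = []
    for _ in range(vdisk_size_gb // EXTENT_SIZE_GB):
        placement.append(random.sample(NODES, copies))
    return placement

placement = place_extents(VDISK_SIZE_GB)
local = sum(1 for copies in placement if VM_HOST in copies)
print(f"{local}/{len(placement)} extents have a copy on {VM_HOST}; "
      f"the rest must be reached over the cluster network")
```

In this toy model only about half of the extents end up with a copy on the node that runs the VM, so the remaining I/O has to cross the cluster network, and that is exactly the traffic the RDMA backbone is there to absorb.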

Nutanix, on the other hand, solves this in a different way, which I also think Microsoft should consider: data locality. When a VM runs on a particular host, most of its content is served locally from that host, using the different tiers (Content Cache, Extent Store, Oplog).


This removes the requirement for any particular high-speed backbone.
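As a conceptual contrast, here is a small Python sketch of a data-locality-style read path, where the host running the VM first checks its own tiers (content cache, oplog, extent store) and only goes over the network on a local miss. The tier names follow the post; everything else is made up for illustration and is not Nutanix code.

```python
def read_extent(extent_id, local_tiers, remote_nodes):
    """Serve a read from the local tiers if possible, otherwise fetch it remotely."""
    for tier_name, tier in local_tiers.items():
        if extent_id in tier:
            return f"served locally from {tier_name}"
    for node, store in remote_nodes.items():     # only a local miss crosses the network
        if extent_id in store:
            return f"fetched over the network from {node}"
    raise KeyError(extent_id)

local_tiers = {"content_cache": {"e1"}, "oplog": {"e2"}, "extent_store": {"e3"}}
remote_nodes = {"NODE2": {"e4"}}
for eid in ("e1", "e3", "e4"):
    print(eid, "->", read_extent(eid, local_tiers, remote_nodes))
```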

Upcoming events and book releases

It is going to be a busy couple of months ahead, so here is a summary of what is happening on my part over the next few months.

28–30 October: The annual Citrix User Group event in Norway, which is a crazy good conference. I will be speaking about using Office 365 with Citrix, the different integrations, and the things you need to think about there as well.

October-ish: Something I have been working on for a while. After I published my Implementing Netscaler VPX book early last year, I was contacted by my publisher earlier this year, who wanted a second edition to add the stuff that people thought was missing, plus I wanted to update the content to V11.

Implementing Netscaler VPX, second edition, contains:

  • V11 content
  • Implementing on Azure, Amazon
  • Front-end optimization
  • AAA module
  • More stuff on troubleshooting and Insight
  • More stuff on TCP optimization, HTTP/2 and SSL

+ I can't remember the rest; anyway, the Amazon link is here

November-ish: Surprise! This is also something I have been working on for a while, but I cannot take all of the credit. I can't even take half of the credit, since I only did about 40% of the work. Earlier this year I was approached by Packt to create another Netscaler book called Mastering Netscaler, which was supposed to be more of a deep-dive Netscaler book. After months of back and forth with another co-author, the book didn't progress as I wanted… Luckily I got in touch with another community member who was interested, and away we went. Mastering Netscaler is more of a deep-dive book which will be released in October/November. I have nothing to link to yet, but as soon as it is done I will publish it here. As I said, I only did about 40% of the writing; most of the credit goes to Rick Roetenberg. Great job!

Intune application management policies and multi-identity

I just published two videos (pretty short ones) to show the Intune capabilities for application management policies and applications that support multiple identities, such as OneDrive, where a policy can apply to corporate accounts but not to personal accounts. The videos also show the managed browser capabilities: data which is viewed or opened within the browser can only be opened within managed applications such as the Intune PDF viewer, and data viewed within it cannot be copied or shared with other applications.

(OneDrive and multiple identities)

(OneDrive and managed browsers)

MVP award 2015, Azure!

Well, it is that time of the year again, and MVP renewal for my part is the 1st of October. For the last two years I have been an MVP for ECM (Enterprise Client Management), but since much of my focus has been on Azure for the last year and a half, I felt it was time for a change. And today I got the mail I have been waiting for:

Dear Marius Sandbu,
Congratulations! We are pleased to present you with the 2015 Microsoft® MVP Award! This award is given to exceptional technical community leaders who actively share their high quality, real world expertise with others. We appreciate your outstanding contributions in Microsoft Azure technical communities during the past year.

I am truly honored to become a part of the Azure MVP team, and I am looking forward to the future!

Moving forward with Nvidia GRID on Microsoft Azure

At AzureCon, Microsoft announced that they are partnering with Nvidia in order to deliver the GRID 2.0 architecture on Azure. This will allow customers to easily access heavy GPU power within Microsoft Azure, and, like other virtual machines in Azure, this will most likely be billed per minute.

The NVIDIA GRID architecture will be available for both Linux and Windows virtual machines, in a custom machine series called the N-series.


Microsoft is using DDA (pass-through), which is a feature Microsoft does not have available in their Windows Server version with RemoteFX. My guess is that the N1 and N10 series are basically using RemoteFX and splitting the GPU memory into two slots.


So does this mean that Microsoft is moving toward GPU pass-through on regular Windows Server as well? I hope so!

Microsoft also mentioned that this will be available on a CLIENT OPERATING SYSTEM. Does this mean VDI is coming to Azure? It is not available as of now, but it will be coming in preview later this year.

So if you plan on delivering GPU-capable, terminal-server-based computing in Azure, you need to compensate for the latency and consider the capabilities of the remote display protocol. Hence you should look into Citrix and the latest achievements they have with Framehawk and HDX, and the fact that Netscaler is now available in Azure. Go figure.