New Azure backup “agent”

Today I was notified of a new Azure backup agent, released both in Azure and on the Download Center. Until recently Microsoft did not support backing up on-premises SharePoint, SQL, Exchange or Hyper-V, and Azure Backup was limited to files and folders. If we now go into the Azure portal, we can see that they have updated the feature set in the backup vault.


This points to a download called Azure Backup, which was released yesterday. The new agent allows disk-to-cloud backup of on-premises Exchange, SQL, SharePoint and Hyper-V. Yay!


During the setup we can see that this is essentially a rebranded DPM installation. It supports most workloads, but it does not include tape support, and it is most likely aimed at replacing DPM with tape by moving to DPM with a cloud tier instead.


As we can see, the Azure Backup wizard is basically DPM; it even includes SQL Server 2014.


The wizard will also set up an integration with a backup vault using a vault credential, which can be downloaded from the Azure website.


And voilà, the end product. So instead of reinventing the wheel, Microsoft has basically rebranded DPM as an Azure product. Does this kill System Center DPM? Time will tell once an official blog post appears.


Comparison Microsoft Storage Spaces Direct and Nutanix

There has been a lot of buzz around Storage Spaces Direct coming with Windows Server 2016, and I have been getting a lot of questions about it lately: “Will it solve my storage issues?”, “Can we replace our existing SAN?”, “When should I choose Storage Spaces Direct over a SAN?” and so on.

As of right now, not all the technical details about the feature are known and not all features are 100% in place, but this blog post will compare Nutanix and Storage Spaces Direct and show how they differ. Storage Spaces Direct is a more advanced Storage Spaces setup: it uses the same capabilities, but now we can aggregate local disks inside servers to set up an SMB 3.0-based file service.

This is an overview of what a Storage Spaces Direct setup might look like. It requires four nodes and an RDMA backbone; I will come back to why. As I have mentioned previously, Storage Spaces Direct has an issue with data locality: Microsoft treats storage and compute as two separate entities, and that is reflected in the Storage Spaces Direct design, since it can be set up as two separate components, either an SMB Scale-Out File Server or a hyperconverged deployment.

When set up as hyperconverged, the following happens:


Let us say we have VM01 running on NODE1, on top of a Storage Spaces Direct vdisk01 configured as a two-way mirror. Storage Spaces will slice the vDisk into 1 GB extents and spread those chunks across separate nodes. So even though VM01 is running on a specific host, its storage is placed more or less randomly across the different hosts in the cluster. This generates a lot of east-west traffic within the cluster, and that is why Microsoft requires an RDMA network backbone for a Storage Spaces Direct cluster: it needs low-latency, high-throughput traffic to be efficient in this type of setup, since Microsoft just treats the different nodes as a bunch of disks.
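To make the point concrete, here is a minimal Python sketch of two-way-mirror extent placement with no data locality. The node names, the vDisk size and the random placement are all hypothetical illustrations, not Microsoft's actual allocation algorithm: each 1 GB extent simply lands on two randomly chosen nodes, so only around half of VM01's extents end up with a copy on its own host, and reads of the rest must cross the network.

```python
import random

NODES = ["NODE1", "NODE2", "NODE3", "NODE4"]
VM_HOST = "NODE1"      # hypothetical: the host running VM01
VDISK_SIZE_GB = 100    # hypothetical: 100 extents of 1 GB each

random.seed(42)

# Two-way mirror: each 1 GB extent gets two copies on two distinct nodes,
# chosen without regard to where the VM runs (no data locality).
extents = [tuple(random.sample(NODES, 2)) for _ in range(VDISK_SIZE_GB)]

# A read can only be served locally if the VM's host happens to hold a copy.
local = sum(1 for copies in extents if VM_HOST in copies)
print(f"extents with a local copy : {local}/{VDISK_SIZE_GB}")
print(f"reads forced over the wire: {VDISK_SIZE_GB - local}/{VDISK_SIZE_GB}")
```

With four nodes, the chance that any given extent has a copy on the VM's host is 2/4, so roughly half of all reads go east-west, which is exactly the traffic pattern the RDMA backbone is there to absorb.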

Nutanix, on the other hand, solves this in another manner, which I also think Microsoft should consider: data locality. For a VM running on a particular host, most of the content is served locally from the host the VM is running on, using the different tiers (Content Cache, Extent Store, Oplog).


This removes the requirement for any particularly high-speed backbone.
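For contrast, here is the same kind of sketch with locality-aware placement, roughly the idea behind Nutanix's approach (the node names and extent count are again hypothetical, and this is a simplification of how Nutanix actually tiers data): one replica of every extent is kept on the VM's own host, so every read can be served locally and only the replication writes cross the network.

```python
import random

NODES = ["NODE1", "NODE2", "NODE3", "NODE4"]
VM_HOST = "NODE1"      # hypothetical: the host running the VM
EXTENTS = 100          # hypothetical: 100 extents of 1 GB each

random.seed(42)

# Locality-aware placement: the first replica always lands on the VM's
# host; the second replica goes to any other node for redundancy.
placement = [(VM_HOST, random.choice([n for n in NODES if n != VM_HOST]))
             for _ in range(EXTENTS)]

remote_reads = sum(1 for copies in placement if VM_HOST not in copies)
print(f"reads forced over the wire: {remote_reads}/{EXTENTS}")
```

Every extent has a local copy by construction, so the remote-read count is zero; the network only carries replication traffic, which is why no special high-speed backbone is needed for reads.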

Upcoming events and book releases

So it is going to be a busy couple of months ahead. This sums up what is happening on my part over the next few months.

28–30 October: At the annual Citrix User Group event in Norway, which is a crazy good conference, I will be speaking about using Office 365 with Citrix, the different integrations, and the things you need to think about there as well.

October-ish: Something I have been working on for a while. After I published my Implementing Netscaler VPX book early last year, my publisher contacted me earlier this year wanting a second edition to add the things readers thought were missing, and I also wanted to update the content to V11.

Implementing Netscaler VPX, second edition, contains:

  • V11 content
  • Implementing on Azure and Amazon
  • Front-end optimization
  • AAA module
  • More stuff on troubleshooting and Insight
  • More stuff on TCP optimization, HTTP/2 and SSL

+ I can't remember the rest; anyway, the Amazon link is here.

November-ish: Surprise! This is also something I have been working on for a while, but I cannot take all of the credit. I can't even take half of it, since I only did about 40% of the work. Earlier this year I was approached by Packt to create another Netscaler book called Mastering Netscaler, which was supposed to be more of a deep-dive Netscaler book. After months of back and forth with another co-author, the book didn't progress as I wanted, but luckily I got in touch with another community member who was interested, and away we went. Mastering Netscaler is a deep-dive book which will be released in October or November. I have nothing to link to yet, but as soon as it is done I will publish it here. As I said, I only did about 40% of the writing; most of the credit goes to Rick Roetenberg. Great job!

Intune application management policies and multi-identity

I just published two (pretty short) videos showing Intune's application management policies with applications that support multiple identities, such as OneDrive, where one policy can apply to corporate accounts but not to personal accounts. The videos also show the managed browser capabilities: data viewed or opened within the browser can only be opened in managed applications such as the Intune PDF viewer, and cannot be copied or shared with other applications.

(OneDrive and multiple identities)

(OneDrive and managed browsers)

MVP award 2015, Azure!

Well, it is that time of the year again; MVP renewal for my part is 1 October. For the last two years I have been an MVP for ECM (Enterprise Client Management), but since much of my focus has been on Azure for the last year and a half, I felt it was time for a change. And today I got the email I have been waiting for:

Microsoft MVP Banner
Dear Marius Sandbu,
Congratulations! We are pleased to present you with the 2015 Microsoft® MVP Award! This award is given to exceptional technical community leaders who actively share their high quality, real world expertise with others. We appreciate your outstanding contributions in Microsoft Azure technical communities during the past year.

So I am truly honored to become a part of the Azure MVP team, and looking forward to the future!

Moving forward with Nvidia GRID on Microsoft Azure

At AzureCon, Microsoft announced that they are partnering with Nvidia to deliver the GRID 2.0 architecture on Azure. This will allow customers to easily access heavy GPU power within Microsoft Azure, and like other virtual machines in Azure it will most likely be billed per minute.

The NVIDIA GRID architecture will be available for both Linux and Windows virtual machines, in a custom machine series called the N-series.


Microsoft uses DDA (pass-through) here, a feature that is not available in their Windows Server version with RemoteFX. My guess is that the N1 and N10 series basically use RemoteFX and split the GPU memory into two slots.


So does this mean that Microsoft is moving forward with GPU pass-through on regular Windows Server as well? I hope so!

Microsoft also mentioned that it will be available on client operating systems. Does this mean VDI is coming to Azure? This is not available as of now, but will be coming in preview later this year.

So if you plan on delivering GPU-capable terminal-server-based computing in Azure, you need to compensate for the latency and consider the capabilities of the remote display protocol. Hence you should look into Citrix and their latest achievements with Framehawk and HDX, and note that Netscaler is now available in Azure. Go figure.

Azure File Storage Generally available!

At AzureCon, Microsoft finally announced that File Storage is out of preview and generally available! With it came a lot of new features as well, such as:

  • Support for SMB 3.0
  • File Explorer functionality within the Azure Portal
  • Support for HA workloads such as SQL, IIS and so on
  • Support for mounting Azure File storage outside of Azure, meaning we can mount an SMB file share on a Windows computer as long as port 445 is open (don't worry, this uses SMB 3.0 with encryption). This allows us to set up a simple cloud storage solution.

So let us take a look at this feature in more depth. In order to access the file explorer and so on, we need to use the preview portal. This is my storage account within Azure:


Here I have a file share available. A specific file share needs to have a quota attached to it; the upper limit is 5120 GB. Within the file share I also have a directory that I can upload to or connect to.


In order to connect to the file share from a computer, we need to run this command:

net use [drive letter] \\[storageaccountname].file.core.windows.net\[filesharename] /u:[storageaccountname] [storage account access key]

We can see it is running SMB 3.0, which was part of Windows Server 2012, and not 3.02, which came with 2012 R2.


The upload speed to the share is not great; SMB is not optimized for WAN use, since it is typically used for LAN-to-LAN connections. But it is getting somewhere, and I would also like to be able to set security permissions on specific directories.