Hyper-V and Storage features deep-dive comparison with Nutanix

So, another blog post in this storage series on Hyper-V. In the previous posts I discussed the features Hyper-V has and the issues with them; time to take that to the next level and show how Nutanix solves the performance issues, and how Microsoft does it with their Windows Server features.

First off, we have the native capabilities in Windows Server with Storage Spaces. We can benefit from SMB 3 features such as multichannel with RSS and jumbo frames, which allow for much less overhead on a TCP network. Of course, getting the full throughput also requires some knowledge of which congestion algorithms to use.

We can also use tiering in the back-end with the default write-back cache feature (which defaults to 1 GB), and at night the tiering feature runs an optimization task that moves the hot data to the SSD tier and the cold data to the HDD tier.
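
To make the idea concrete, here is a small Python sketch of a heat-map-driven tiering pass. This is purely illustrative: the function name, the extent model, and the capacity handling are my own assumptions, not the actual Storage Spaces implementation.

```python
# Hypothetical sketch of a heat-map-based tiering pass, loosely modeled on
# the nightly optimization task described above. Names and thresholds are
# illustrative, not Microsoft's actual implementation.

def optimize_tiers(extents, ssd_capacity):
    """Place the hottest extents on SSD until it is full; the rest go to HDD.

    extents: dict mapping extent id -> access count (the "heat map").
    ssd_capacity: number of extents the SSD tier can hold.
    """
    # Rank extents hottest-first, which is exactly what a heat map enables.
    ranked = sorted(extents, key=extents.get, reverse=True)
    ssd_tier = set(ranked[:ssd_capacity])   # hot data -> SSD
    hdd_tier = set(ranked[ssd_capacity:])   # cold data -> HDD
    return ssd_tier, hdd_tier

heat = {"e1": 500, "e2": 3, "e3": 120, "e4": 0}
ssd, hdd = optimize_tiers(heat, ssd_capacity=2)
print(ssd)  # the two hottest extents, e1 and e3
```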

On the other hand, we can have an RDMA deployment, which in essence removes the TCP/IP stack completely and gives zero-copy network capabilities. We can use this in conjunction with CSV cache, which provides benefits only for read-only unbuffered I/Os, cached in RAM on the host. This feature can be enabled per CSV disk, is integrated into Failover Cluster Manager, and is leveraged on all the hosts in a cluster. But... this feature is disabled for a tiered Storage Spaces CSV, so the two cannot be combined in the same deployment.


In the Nutanix I/O path things are a bit different, since the CVM (Controller VM) serves content locally from the node to the Hyper-V host over SMB, using local disk passthrough.


The I/O fabric in a Nutanix node consists of several logical stores. First off, we have the Content Cache, a deduplicated read cache spanning both memory and SSD, served from the memory of the CVM. Here we have the ability to leverage inline deduplication.

Then we have the OpLog, which is built to handle random I/O: when dealing with bursts of random I/O it coalesces them and then sequentially drains them to the other store (the Extent Store). The OpLog lives on the SSD tier. In the case of sequential write I/O, the OpLog is bypassed and the data is written directly to the Extent Store. The OpLog is also replicated to one or more nodes in the cluster for high availability.
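
As a rough mental model of the OpLog behavior, here is a hedged Python sketch. The class, the 64 KB sequential cutoff, and the drain logic are assumptions for illustration only; they do not reflect actual Nutanix code.

```python
# Illustrative sketch (not Nutanix code) of the OpLog idea: random writes
# land in a fast log and are coalesced, then drained sequentially to the
# extent store; large sequential writes bypass the log entirely.

SEQUENTIAL_THRESHOLD = 64 * 1024  # assumed cutoff, purely for illustration

class Node:
    def __init__(self):
        self.oplog = {}         # offset -> data, lives on the SSD tier
        self.extent_store = {}  # persistent store (SSD + HDD)

    def write(self, offset, data):
        if len(data) >= SEQUENTIAL_THRESHOLD:
            self.extent_store[offset] = data  # sequential I/O: bypass OpLog
        else:
            # Random I/O is absorbed in the OpLog; overwrites to the same
            # offset are coalesced in place instead of hitting disk twice.
            self.oplog[offset] = data

    def drain(self):
        # Drain in offset order so the extent store sees sequential writes.
        for offset in sorted(self.oplog):
            self.extent_store[offset] = self.oplog[offset]
        self.oplog.clear()
```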

The Extent Store serves as the persistent data storage in a Nutanix node and consists of SSD and HDD. Data comes into the Extent Store either directly as sequential write I/O or drained from the OpLog. The Extent Store can also leverage deduplication; this is a cluster-wide deduplication feature, meaning that all nodes participate.

So as we can see, Nutanix leverages tiering, deduplication, and in-memory caching while maintaining availability for data across nodes in a cluster, and combines this with data locality to deliver the lowest possible latency.

Wire Data in Operations Management Suite

Microsoft finally released a new solution pack for Operations Management Suite the other day, one I have been waiting for since Ignite: Wire Data!

This is a solution pack that gathers metadata about your network. It requires a local agent installed on your servers, as with other solution packs, but allows you to get more detailed information about the network traffic happening inside your infrastructure.

So if you have OMS, you just need to go into the solution packs and add the Wire Data pack.


But note that after adding the solution pack, it needs a while to gather the necessary data to create a sort of baseline of the network traffic.


After it is done, it groups together the communication that has happened on the agents so you can see which protocols are most in use.


For instance, I see that there is a lot of unknown traffic happening on my agent. I can drill down to see more info about that particular traffic, and then see in detail where the traffic is going.


I can also drill down to see what kind of process is initiating the traffic going back and forth. Something I would like to see here is the ability to add custom info: let's say I have a particular application running which uses some custom ports and processes; I would like to add a custom name to that application so it can appear in the logs and in the overview.

Other than that, it provides some great insight into what kind of traffic is going back and forth inside the infrastructure, and Microsoft has added some great common queries.


Upcoming events!

It has been a bit quiet here lately (well, there has been some activity, but not as noisy as it used to be), so I decided to give a quick update to tell everyone what's going on on my part.

At the moment I'm quite busy writing two books! That's right, two!

One is an update to one of my existing books, http://www.amazon.co.uk/Implementing-Netscaler-Vpx-Marius-Sandbu/dp/178217267X/ref=sr_1_1?ie=UTF8&qid=1438541424&sr=8-1&keywords=netscaler bringing it up to V11, which was released a couple of months back, and therefore there is much new content in there, such as:

* Unified Gateway
* Mobilestream
* V11 in general
* More in-depth on traffic optimization (HTTP/2, SPDY, TCP, Multi-PE and so on)
* Azure and Amazon deployment

And of course much more!

I am also writing a Mastering NetScaler book which will go into much more depth, co-written with another Citrix consultant; really looking forward to this book as well. Both books are going to be released in Q4 this year, so busy times ahead.

Also, in other related events, I am delivering a session on Microsoft EMS (Intune, Azure AD, Azure RMS and ATA) at Trond E Haavarstein's (aka @xenappblog) virtual expo, which is here –>  https://xenapptraining.leadpages.net/xbve2015/ joined by a lot of rockstar community people! Hurry up if you want to join; it is close to about 1,000 attendees!

Also, later in August I'm holding a local seminar at Microsoft Norway where I am going to talk about Azure AD and Windows 10, and a bit more about the different scenarios in a hybrid setup and so on.

This happens August 19th, so if you want to join, send me a wink. Other than that, stay tuned!

How Nutanix works with Hyper-V and SMB 3.0

In my previous blog post I discussed software-defined storage options for Hyper-V https://msandbu.wordpress.com/2015/07/28/software-defined-storage-options-for-hyper-v/ and noted that Windows Server is getting a lot of good built-in capabilities but lacks a proper scale-out solution with performance, something that is coming with Windows Server 2016.

Now, one of the vendors I talked about which has a proper scale-out SDS solution for Hyper-V with support for SMB 3 is Nutanix, which is the subject of this blog post, where I will describe how it works for SMB-based storage. Before I head over to that, I want to talk a little bit about SMB 3 and some of the native capabilities, and why they do not work for a proper HCI setup.

With SMB 3.0, Microsoft introduced two great new features, SMB Direct and SMB Multichannel, which are aimed at higher throughput and lower latency.

SMB Multichannel (leverages multiple TCP connections across multiple CPU cores using RSS)

SMB Direct (allows RDMA-based network transfers, bypassing the TCP stack and moving data from memory to memory, which gives low-overhead, low-latency connections)

Now, both these features give us better NIC utilization, but they are aimed at a traditional configuration where storage is still a separate resource from compute. My guess is that when we deploy a Storage Spaces Direct cluster on Windows Server 2016 in an HCI deployment, these features will be disabled.

So how does Nutanix work with SMB 3 ?


First off, it is important to understand the underlying structure of the Nutanix OS. The local storage in the Nutanix nodes of a cluster is added to a unified pool of storage, which is part of the Nutanix distributed filesystem. On top of this we create containers, which have their own settings like compression, dedup, and replication factor; the replication factor defines the number of copies of data within a container. The reason for these copies is fault tolerance in case of a node or disk failure. So in essence you can think of this as a DAG (Database Availability Group), but for virtual machines.
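
To illustrate what a replication factor means, here is a tiny Python sketch that picks RF distinct nodes for the copies of each extent. The round-robin placement and the function name are made up for the example; placement in the Nutanix distributed filesystem is far more sophisticated.

```python
# Hypothetical sketch of replication-factor placement: each piece of data is
# written to RF distinct nodes so a node or disk failure loses no data.
# This mimics the concept only, not actual NDFS placement logic.

import itertools

def place_copies(data_id, nodes, rf=2):
    """Pick rf distinct nodes for the copies of one extent (simple ring walk)."""
    if rf > len(nodes):
        raise ValueError("replication factor exceeds node count")
    # Deterministic starting point derived from the data id (illustrative).
    start = sum(map(ord, data_id)) % len(nodes)
    ring = itertools.islice(itertools.cycle(nodes), start, start + rf)
    return list(ring)

print(place_copies("vm-disk-1", ["n1", "n2", "n3"], rf=2))
```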

For SMB, we have shares which are represented as containers, which again are created on top of a Nutanix cluster and then presented to the Hyper-V hosts for VM placement.

It is also important to remember that even though we have a distributed file system across different nodes, data is always served locally for a node (the reason for this is so that the network does not become a point of congestion). Nutanix has a special role called the Curator (which runs on the CVM) that is responsible for moving hot data as close to the VM as possible. So if we, for instance, migrate a VM from host 1 to host 2, the CVM on host 1 might still contain the VM data; reads and writes will then go from host 2 to the CVM on host 1, while the CVM on host 2 starts caching the data locally.
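
The data-locality behavior after a migration can be sketched roughly like this in Python. The classes and the "fetch remotely once, then cache locally" logic are simplified assumptions about the behavior described above, not Nutanix internals.

```python
# Sketch (assumed, simplified behavior) of data locality after a live
# migration: the first read from the new host fetches the extent from a
# remote CVM, then caches it locally so subsequent reads avoid the network.

class CVM:
    def __init__(self, name):
        self.name = name
        self.local = {}  # extents this node currently holds

class Cluster:
    def __init__(self, cvms):
        self.cvms = {c.name: c for c in cvms}

    def read(self, host, extent):
        cvm = self.cvms[host]
        if extent in cvm.local:
            return cvm.local[extent], "local"
        # Find a remote CVM that has the extent, then localize the hot data.
        for other in self.cvms.values():
            if extent in other.local:
                cvm.local[extent] = other.local[extent]
                return cvm.local[extent], "remote"
        raise KeyError(extent)
```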

Now, since this architecture leverages data locality there is no need for features like SMB Direct and SMB Multichannel, so these features are not required in a Nutanix deployment for Hyper-V; however, it does support SMB Transparent Failover, which allows for continuously available file shares.

Now, I haven't explained yet how this architecture handles I/O; this is where the magic happens. Stay tuned.

Software defined Storage options for Hyper-V

As I see Hyper-V gaining more and more traction, I also see that we need better storage solutions around it. Microsoft has Storage Spaces, which came in 2012 and introduced features like dedup as well. The problem with the deduplication feature is that it was mostly aimed at VDI environments (for Hyper-V) and not traditional servers, and it was limited to one thread; in Windows Server 2016 this is expanded with support for backup workloads. Storage Spaces was also enhanced with tiering in 2012 R2, which gives the ability to add SSD disks and move data between tiers in a Storage Spaces setup, and also gives us the ability to do write-back caching for random writes.

In the upcoming Windows Server 2016, we know that we will have the option to do Storage Spaces Direct (meaning locally attached disks on server nodes working as a stretched cluster, just like VSAN and so on), which can either act as a Scale-Out File Server cluster or as a hyper-converged solution combining SMB and Hyper-V on the same nodes. This gives an architectural advantage since it allows us to scale much more simply (up to the supported node count, which is set to 32).

Microsoft also introduced the SMB 3.0 protocol, which allows for scale-out communication with features such as:

  • SMB Multichannel (which allows using multiple network connections at the same time)
  • SMB Direct (which gives low-latency connections over RDMA)
  • Usage for SQL and Hyper-V over SMB

So SMB is good for fault tolerance and high-throughput options, and with RDMA it gives us low-latency connections, but it is still limited to the disks and controllers behind the SMB file servers. Using SMB with regular network cards is still TCP (which has about 5-8% overhead if not configured properly), which in most cases will perform slower than localized virtual machines on individual hosts. So what about other options, and using memory as a tier?
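
A quick back-of-the-envelope calculation on that 5-8% overhead figure, assuming a 10 Gbps link (the link speed is my assumption, the percentages are from the paragraph above):

```python
# What is left of a link after the estimated TCP overhead is subtracted.

def effective_gbps(link_gbps, overhead_pct):
    """Throughput remaining after protocol overhead, in Gbps."""
    return link_gbps * (1 - overhead_pct / 100)

# On a 10 Gbps link, the 5-8% overhead estimate leaves roughly 9.2-9.5 Gbps.
print(effective_gbps(10, 5))
print(effective_gbps(10, 8))
```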

Here are some numbers to chew on (from Jeff Dean) about speed, where memory is part of the equation.

L1 cache reference                             0.5 ns
Branch mispredict                              5 ns
L2 cache reference                             7 ns
Mutex lock/unlock                            100 ns (25)
Main memory reference                        100 ns
Compress 1K bytes with Zippy              10,000 ns (3,000)
Send 2K bytes over 1 Gbps network         20,000 ns
Read 1 MB sequentially from memory       250,000 ns
Round trip within same datacenter        500,000 ns
Disk seek                             10,000,000 ns
Read 1 MB sequentially from network   10,000,000 ns
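
A couple of ratios derived from the numbers above show why a memory tier matters so much:

```python
# Latency figures from the Jeff Dean table above, in nanoseconds.
LATENCY_NS = {
    "main_memory_ref": 100,
    "read_1mb_memory": 250_000,
    "dc_round_trip": 500_000,
    "disk_seek": 10_000_000,
    "read_1mb_network": 10_000_000,
}

# A single disk seek costs as much as 100,000 main memory references...
print(LATENCY_NS["disk_seek"] // LATENCY_NS["main_memory_ref"])  # 100000

# ...and reading 1 MB over the network is 40x slower than from memory.
print(LATENCY_NS["read_1mb_network"] // LATENCY_NS["read_1mb_memory"])  # 40
```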

Microsoft also introduced something called CSV cache (available since Server 2012), which allows us to allocate system memory as a write-through cache. The CSV cache provides caching of read-only unbuffered I/O, which in essence makes it work well with Hyper-V clusters and Scale-Out File Servers using CSV.

The problem is that CSV cache does not work with:

  • Tiered Storage Space with heat map tracking enabled
  • Deduplicated files using in-box Windows Server Data Deduplication feature (Note:  Data will instead be cached by the dedup cache) 
  • ReFS volume with integrity streams enabled (Note:  NTFS is the recommended file system as the backend for virtual machine VHDs in production deployments)

This means that we cannot get the best of both worlds, where we could combine memory, SSD, and HDD in the same storage pool.

Another thing is that Microsoft does not offer inline dedup for storage traffic; their dedup engine runs as a background task (post-process).
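
To show the difference in model, here is a minimal Python sketch of post-process deduplication: data is written whole first, and a background pass later chunks, hashes, and collapses duplicates. The chunk size and data structures are illustrative assumptions, not how the Windows dedup engine actually stores data; inline dedup would do the same hashing on the write path instead.

```python
# Minimal sketch of post-process (background) deduplication: files land on
# disk whole, then a scheduled task chunks them, hashes the chunks, and
# keeps each unique chunk only once, leaving per-file chunk references.

import hashlib

def dedup_pass(volume, chunk_size=4):
    """Return a unique-chunk store plus per-file chunk reference lists."""
    store, refs = {}, {}
    for name, data in volume.items():
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        ref_list = []
        for chunk in chunks:
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)  # duplicate chunks stored once
            ref_list.append(digest)
        refs[name] = ref_list
    return store, refs

vol = {"a.vhd": b"AAAABBBB", "b.vhd": b"AAAACCCC"}
store, refs = dedup_pass(vol)
print(len(store))  # 3 unique chunks instead of 4 stored
```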

With Windows Server 2016, I'd say Microsoft is moving towards a feature set which gives their customers the basics of what they need in the software-defined storage space:

  • Hyper-converged (Storage Spaces Direct)
  • Tiering capabilities
  • Enhanced deduplication
  • High throughput on SMB
  • Low cost

So for those that require more performance, features, and so on for Hyper-V, what options are there?

For VMware there is already a long list of vendors that deliver storage optimization / SDS / HCI solutions:

  • Atlantis
  • Pernixdata
  • Nexenta
  • Nutanix
  • SimpliVity
  • VSAN
  • DataCore

Both Atlantis and SimpliVity have stated that they will support Hyper-V "soon". Atlantis does have support for Hyper-V in their ILIO product, but not for USX.

As of now, only Nutanix and DataCore have full support for Hyper-V and SMB 3.0. Both offer more flexibility in terms of features and better performance with the use of memory as a tier, and that is just the basic stuff. Tune in as I explore these features throughout the next blog posts and show how they differ from Microsoft's built-in features.

NOTE: The vendors in the list are the ones I know about; I didn't do a very thorough check, so if you know of others, please let me know.

New award – Veeam Vanguard

Received some good news today (which I have known for quite some time), but it is only now that I am allowed to talk about it :)

I have been quite active regarding Veeam on my blog and in much of my work, since I am a Veeam instructor and a general evangelist for their products. Therefore I was quite thrilled when Veeam announced a new community award called Veeam Vanguard, and that I was one of the awardees!

And now I join the ranks of other skilled IT pros in the community, such as Thomas Maurer, Rasmus Haslund, and fellow Norwegian Christian Mohn.

Thanks to Veeam!

More info on the Vanguard page here — http://www.veeam.com/vanguard.html

Getting started with Microsoft Advanced Threat Analytics

This is something I have been meaning to try out for a while, since the preview release at Ignite. Advanced Threat Analytics is new software from Microsoft (which came from an acquisition Microsoft made a while back) that focuses on some of the more common security problems in Windows environments, such as golden tickets, pass-the-hash, abnormal user behavior, and so on.

Microsoft ATA has a pretty simple architecture: it consists of two components plus a MongoDB database where the data is stored. The two components:

The ATA Center performs the following functions:

  • Manages ATA Gateway configuration settings

  • Receives data from ATA Gateways

  • Detects suspicious activities and abnormal behavior using machine learning engines

  • Supports multiple ATA Gateways

  • Runs the ATA Management console

  • Optional: The ATA Center can be configured to send emails or send events to your Security Information and Event Management (SIEM) system when a suspicious activity is detected.

The ATA Gateway performs the following functions:

  • Captures and inspects domain controller network traffic via port mirroring

  • Receives events from a SIEM or Syslog server

  • Retrieves data about users and computers from the domain

  • Performs resolution of network entities (users and computers)

  • Transfers relevant data to the ATA Center

  • Monitors multiple domain controllers from a single ATA Gateway

These roles can be deployed on two different virtual machines or on the same VM. If you install both components on the same server, it is really important during setup of the ATA Center to define that communication happens using the external IP for both the Center communication IP and the management IP.

ATA Center Configuration

Now, the Gateway needs to be able to see the DC (or Global Catalog) traffic using port mirroring, which can either be done in a physical environment with SPAN or RSPAN, or we can set up port mirroring in a virtualized fashion.

I have my demo environment running on Hyper-V, which allows me to easily set up port mirroring. The first thing I need to do is configure the NIC on my DC as a port mirroring source.


Then I need to add another NIC to my Gateway VM and configure it with mirroring mode set to destination.


I also need to enable the NDIS monitoring filter on the vSwitch


Before the initial setup note that there are some limitations in the preview…

Make sure that KB2919355 has been installed!

Only enter domain controllers from the domain that is being monitored. If you enter a domain controller from another domain, this will cause database corruption and you will need to redeploy the ATA Center and Gateways from scratch!

After you have deployed both components, all you need to do is define the domain controller and the NIC in the management console.


Now, after this is done, we can verify that it has connectivity by checking the dashboard and searching for a user.


Now, by default ATA takes about two weeks to establish a baseline for what regular activity looks like, but it has some default alerts which we can trigger to make sure that it works as it should. For instance, we can simulate a DNS reconnaissance attack.


A simple nslookup with the ls parameter. This will then trigger an alert in the console.


Since this is still a preview it has some limitations; as of right now it cannot detect PtH, so stay tuned for more about this when the full release comes.