Advanced backup options for Hyper-V 2012 R2 on Veeam

Some questions that come up again and again concern the advanced backup features for Hyper-V when using Veeam. How does Veeam actually take a backup from a Hyper-V host?

In simple day-to-day virtual machine life, reads and writes consist of I/O traffic from a virtual machine to a VHD/VHDX file, which resides on some form of storage such as a SAN LUN or an SMB network share.


When we set up Veeam to back up a virtual machine, the following happens. First, Veeam triggers a snapshot using the Hyper-V Integration Services Shadow Copy Provider on the particular Hyper-V host that the virtual machine resides on. This produces an AVHDX differencing disk that the VM's new writes are redirected to. The snapshot can be taken using either a hardware VSS provider or a software VSS provider.


A hardware provider manages shadow copies at the hardware level by working in conjunction with a hardware storage adapter or controller. A software provider manages shadow copies by intercepting I/O requests at the software level between the file system and the volume manager. The number of VMs in a snapshot group is limited depending on the VSS provider: 4 VMs for a software VSS provider, 8 VMs for a hardware VSS provider.

NOTE: Using an off-host proxy requires a storage solution which supports transportable hardware shadow copies against a SAN. If we for instance use SMB-based storage for Hyper-V, we do not require this –>

Using on-host backup means that the transport role will be running on a Hyper-V host which has access to the running virtual machines.

Make sure that the integration services are running and up to date before doing an online backup. You can check this from Hyper-V PowerShell –> Get-VM | FT Name, IntegrationServicesVersion
More troubleshooting on integration services here –>

So what happens in an online backup is the following (if all the requirements are met):

1: Veeam will interact with the VSS service on the Hyper-V host and request a backup of the specific VM

2: The VSS writer on the Hyper-V host will then forward the request to the Hyper-V Integration Components inside the VM guest OS

3: The integration components will then communicate with the VSS framework inside the guest OS and request a backup of all VSS-aware applications inside the VM

4: The VSS writers of the VSS-aware applications will then prepare application data suitable for backup

5: After the applications are quiesced, the VSS framework inside the virtual machine takes an internal snapshot using the software-based VSS provider

6: The integration services component notifies the hypervisor that the VM is ready for backup, and Hyper-V then takes a snapshot of the volume the virtual machine is located on. An AVHDX file is generated, and all writes are redirected to it.

7: The volume snapshot is presented to Veeam using either off-host or on-host backup. (If the off-host proxy is not available, it will fall back to an on-host proxy on a designated host)

8: Data will then be processed on the proxy server and moved to the repository
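Before troubleshooting a failed job, it can help to verify that the VSS writers on the Hyper-V host are actually healthy. An illustrative check, run on the host itself (the Hyper-V writer should report a state of "Stable" with no last error):

```powershell
# List the VSS writers on the Hyper-V host and pull out the lines
# showing writer names, their state and the last reported error.
vssadmin list writers | Select-String -Pattern "Writer name", "State:", "Last error:"
```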


NOTE: An off-host setup requires a dedicated Hyper-V host (it requires Hyper-V to have access to the VSS providers), and when using off-host it cannot be part of the Hyper-V cluster. Make sure it has read-only access to the LUN and that your storage vendor supports readable (transportable) shadow copies.

On-host backup will use the Veeam transport service on the Hyper-V host itself. If the volume is a CSV volume, the CSV Software Shadow Copy Provider will be used for the snapshot creation process.

NOTE: During the backup process, Veeam will try to use its own CBT driver on the Hyper-V host to make sure that it backs up only the changed blocks. (Hyper-V does not natively provide CBT; this will change in Windows Server 2016.)

NOTE: If CBT is not working in Veeam, run the Reset-HvVmChangeTracking PowerShell cmdlet; or, if the virtual machines are being shut down during the backup process, try disabling ODX.
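As a sketch, resetting change tracking from the Veeam PowerShell snap-in could look like the following. Treat the snap-in name, the exact parameters and the host name "hv-host01" as assumptions; check the Veeam PowerShell reference for your release:

```powershell
# Load the Veeam snap-in (assumed name) and reset CBT data
# for the VMs on a given Hyper-V host (placeholder name).
Add-PSSnapin VeeamPSSnapin
Get-VBRServer -Name "hv-host01" | Reset-HvVmChangeTracking
```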

If changed block tracking is not enabled or not working as it should, the backup proxy will copy the entire virtual machine and use Veeam’s proprietary filtering mechanism: instead of tracking changed blocks of data, Veeam Backup & Replication filters out unchanged data blocks. During backup, Veeam Backup & Replication consolidates the virtual disk content, scans through the VM image and calculates a checksum for every data block. The checksums are stored as metadata in the backup files next to the VM data.
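Veeam’s filter itself is proprietary, but the general idea of checksum-based change filtering can be sketched in a few lines of Python. The block size and data below are toy values for illustration, not Veeam’s actual format:

```python
import hashlib

BLOCK_SIZE = 4  # tiny block size for illustration; real backup blocks are far larger


def block_checksums(data: bytes) -> list:
    """Return one checksum per fixed-size block of the image."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]


def changed_blocks(old_checksums: list, new_data: bytes) -> list:
    """Compare the current image against stored checksums; keep only blocks that differ."""
    changed = []
    for i, checksum in enumerate(block_checksums(new_data)):
        if i >= len(old_checksums) or old_checksums[i] != checksum:
            changed.append((i, new_data[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]))
    return changed


# Previous backup: checksums are stored as metadata next to the backup file
metadata = block_checksums(b"AAAABBBBCCCCDDDD")
# Current VM image: only the second block has changed, so only it is re-copied
print(changed_blocks(metadata, b"AAAAXXXXCCCCDDDD"))  # [(1, b'XXXX')]
```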

So what about the more advanced features for Hyper-V?

Hyper-V Settings

  • Enable Hyper-V guest quiescence

With this option, the VM OS is suspended and the content of system memory and CPU state is written to a dump file, in order to preserve the data integrity of files used by, for instance, transactional applications (this is known as offline backup).

Note that when using this feature, Veeam will not be able to perform application-aware tasks like:

    • Applying application-specific settings to prepare applications for VSS-aware restore at the next VM startup
    • Truncating transaction logs after successful backup or replication.
  • Take crash-consistent backup instead of suspending the VM

If you do not want to suspend the virtual machine during backup, you can take a crash-consistent backup instead. This is the equivalent of a hard reset of a virtual machine: it does not involve any downtime, but it does not preserve the data integrity of open files and may result in data loss.

  • Use changed block tracking data

Use the Veeam CBT filter driver to identify changed blocks before data is copied via the off-host or on-host proxy to the repository.

  • Allow processing of multiple VMs with a single volume snapshot

If you have multiple virtual machines within the same job, this feature will help reduce the load on the Hyper-V hosts, as it triggers one volume snapshot for multiple machines instead of one per virtual machine.

NOTE: The virtual machines must be located on the same host and must reside on a file share which uses the same VSS provider.

This is the first post of a series on Veeam and Hyper-V processing.

Hiding and publishing applications using XenDesktop 7.7 and PowerShell

When creating a delivery group in Studio, you have limited control over who gets access to a certain delivery group or application. NOTE: This is not using SmartAccess on the NetScaler; this is purely a Citrix Studio feature.

We have, for instance, filtering on users.


And after we have created the delivery group, we also have the option to define access rules; by default, two rules are created per delivery group.


One rule allows access using Access Gateway, and one is for direct connections using StoreFront. So what if we need more customization options? Enter PowerShell for Citrix…

First, before doing anything, we need to import the Citrix module in PowerShell:

asnp citrix.*

Then we use the command Get-BrokerAccessPolicyRule. (By default there are two rules for each delivery group: one called NAME_AG and one called NAME_Direct. The AG rule is used for access via NetScaler Gateway, the other for direct access to StoreFront.)

From the OS_AG policy we can see that it is enabled, that allowed connections are configured to be via NetScaler Gateway, and that it is filtered on Domain Users.


We can see from the other policy, OS_Direct, that it is enabled and that it applies to connections NotViaAG.


So how do we hide the delivery group from external users? The simplest way is to disable the access policy rule for AG connections:

Set-BrokerAccessPolicyRule -name OS_AG -Enabled $false

Via NetScaler


Via Storefront


Or what if we want to exclude a certain Active Directory user group? For instance, some users may be members of many Active Directory groups but not be allowed access to external sessions.

Set-BrokerAccessPolicyRule -Name OS_AG -ExcludedUserFilterEnabled $True -ExcludedUsers "TEST\Domain Admins"

This will disable external access to the delivery group for all members of Domain Admins, even if they are allowed access by another group membership.
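To verify the result, or to reverse it later, the rules can be inspected and re-enabled with the same cmdlets. A sketch, using the "OS" delivery group name from this post:

```powershell
asnp Citrix.*
# Show the state of all access policy rules for the delivery group
Get-BrokerAccessPolicyRule -DesktopGroupName "OS" |
    Format-Table Name, Enabled, AllowedConnections, ExcludedUsers
# Re-enable external access via NetScaler Gateway and clear the exclusion filter
Set-BrokerAccessPolicyRule -Name "OS_AG" -Enabled $true -ExcludedUserFilterEnabled $false
```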

Azure Stack and the rest of the story

Now for part two of my Azure Stack and infrastructure story. Microsoft is making a big leap with the Azure Stack release. With this release, the setup moves toward greater use of the software-defined solutions included in Windows Server 2016. This includes features like micro-segmentation, load balancing, VXLAN, and Storage Spaces Direct (which is a hyper-converged configuration of Storage Spaces).

We also have ARM, which provides the provisioning feature, using APIs and DSC for custom setup of virtual machine instances.

More details on the PaaS features will come during the upcoming weeks, and in the first release only Web Apps will be added.

So Microsoft now provides features which were often delivered by a third party before, and unlike Azure Pack this does not require any System Center components and runs natively on Hyper-V.


Now what else is missing in this picture? If we want to run this at larger scale, we need to think about the bigger picture in the datacenter; using VXLAN, for instance, will also require some custom setup.

Also, using Storage Spaces Direct in an Azure Stack fabric will require an RDMA networking infrastructure.

(NOTE: Storage Spaces Direct has a limit in terms of max nodes)

(“Networking hardware: Storage Spaces Direct relies on a network to communicate between hosts. For production deployments, it is required to have an RDMA-capable NIC (or a pair of NIC ports).”) ref

This will also allow use of the latest networking capability, SET (Switch Embedded Teaming).
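As a sketch, a SET-enabled vSwitch is created directly from PowerShell in Windows Server 2016; the adapter names below are placeholders for your RDMA-capable NICs:

```powershell
# Create a virtual switch with Switch Embedded Teaming across two physical NICs
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true
# RDMA can then be enabled on the resulting host vNIC
Enable-NetAdapterRDMA -Name "vEthernet (SETswitch)"
```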

So in both cases you need an RDMA-based infrastructure. Remember that! You need to rethink the physical networking. Another piece of the puzzle is backup. Since Azure Stack delivers the management/provisioning layer and some fundamental services, we need backup of our tenants’ data; Storage Spaces Direct delivers resiliency, but not backup. Arista, for instance, has some good options in terms of RDMA, and they also support OMI, which allows for automation.

We need a backup solution which can integrate with Hyper-V and has a REST API, which would allow us to build custom solutions into Azure Stack.

Also, a monitoring solution needs to be in place. Azure Stack adds a lot of extra complexity in terms of infrastructure and a lot of new services which are crucial, especially in the networking/storage provider space. As of now, I’m guessing that System Center will be the first monitoring solution to support Azure Stack monitoring.

Another thing is load balancing: since we have more web-based services for the different purposes, and not MMC-based consoles like we have in System Center, we need load balancing to deliver high availability (for instance for the portal web, the ARM interface and so on).

So in my ideal world, the Azure Stack drawing should look like this:


Running Azure Stack nested on VMware Workstation

Well, since the release of the Azure Stack preview earlier today, I’ve been quite the busy bee… The only problem is that I didn’t have adequate hardware to play around with it… Or so I thought… I set up a virtual Windows Server 2016 machine in VMware Workstation.

Added some SATA-based disks, since I know this is the “recommended hardware” as part of the PoC.


Also remember to set the guest OS type to Hyper-V (Unsupported).


After that I had to change some parameters in some of the scripts, since there is a PowerShell script which basically checks whether the host has enough memory installed. I changed this in Invoke-AzureStackDeploymentPreCheck.ps1.


Now, when you run the first AzureDeploy script, it will mount the PoC install as a read-only VHD, and since Invoke-AzureStackDeploymentPreCheck.ps1 is stored on that read-only VHD, you cannot make any changes to it. So you first need to change the DeployAzureStack script to mount the disk as read/write.
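As an illustration of the kind of change involved (the path is a placeholder, and the actual script may use Mount-VHD or Mount-DiskImage; check the script itself):

```powershell
# Original behavior: the PoC VHD is attached read-only
# Mount-DiskImage -ImagePath "C:\PoC\AzureStackPOC.vhdx" -Access ReadOnly
# Changed to read/write so Invoke-AzureStackDeploymentPreCheck.ps1 can be edited:
Mount-DiskImage -ImagePath "C:\PoC\AzureStackPOC.vhdx" -Access ReadWrite
```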


You should also edit PoCFabric.xml, which is located under AzureStackInstaller\PoCFabricInstaller, and change the CPU and memory settings, or else you won’t be able to complete the setup.


After that, just look at it go!


Windows Azure Stack – what about the infrastructure story?

There is no denying that Microsoft Azure is a success story: from being the lame Silverlight portal with limited capabilities to becoming a global force to be reckoned with in the cloud marketplace.

Later today Microsoft is releasing the first tech preview of Azure Stack, which allows us to bring the power of the Azure platform to our own datacenters. It brings the same consistent UI and feature set as Azure Resource Manager, which allows us to use the same tools and resources we have used in Azure against our own local cloud.

This of course will allow large customers and hosting providers to deliver the Azure platform from their own datacenters. The idea seems pretty good. But what actually is Azure Stack? It only delivers half of the promise of a cloud-like infrastructure, so I would place Azure Stack in the category of cloud management platforms, since it gives us the framework and portal experience.

Now when we eventually have this set up and configured, we are given some of the benefits of the cloud, which are:

  • Automation
  • Self-Service
  • A common framework and platform to work with

Now if we look at the picture above, there are some important things we need to think about in terms of fitting within the cloud aspect, namely the compute fabric, network fabric and storage fabric, which are missing from the Microsoft story. Of course Microsoft is a software company, but they are moving forward with their CPS solution with Dell, moving a bit toward the hardware space, though nowhere close yet.

When I think about Azure, I also think about the resources underneath: they are always available, non-siloed, and can scale up and down as I need. If we look at the way Microsoft has built their own datacenters, there is no SAN architecture at all, just a bunch of single machines with local storage, using software to connect all this storage and compute into a large pool of resources. That is the way it should be, since the SAN architecture just cannot fit into a full cloud solution, and it is also the way it should be for an on-premises solution: if we were to deploy Azure Stack to deliver the benefits of a cloud solution, the infrastructure should reflect that. As of right now, Microsoft cannot give a good enough storage/compute solution with Storage Spaces in 2012 R2, since there are limits to the scale, and points of failure which a public cloud does not have.

Nutanix is one of the few providers which delivers support for Hyper-V and SMB 3.0, does not have the same scale limits, and has the same properties as a public cloud solution. It aggregates all storage on local drives within each node into a pool of storage, with redundancy in all layers, and includes a REST API which can easily integrate into Azure Stack. I can easily see that as the best way to deliver an on-premises cloud solution, and a killer combination.

Setting up Veeam Managed backup portal in Azure

Veeam now has a new managed backup portal available in the Azure Marketplace, which will make it easier to do on-boarding, monitoring and multi-tenancy.

Integrated with Veeam Cloud Connect for Service Providers and available in the Microsoft Azure Marketplace, Veeam Managed Backup Portal for Service Providers makes it easy to acquire new customers and build new revenue streams through the following capabilities:

  • Simplified customer on-boarding: With a service provider administration portal, creating new customer accounts, provisioning services, and even managing customer billing and invoicing is easier than ever 
  • Streamlined remote monitoring and remote management: Daily monitoring and management of customers’ jobs is made simple and convenient, and can be done securely through a single port over SSL/TLS (no VPN required)
  • Multi-tenant customer portal: Clients remain engaged with a customer portal where they can set up users and locations, easily monitor backup health, review cloud repository consumption and manage monthly billing statements.

As of now, this is available as a tech preview from the Azure Marketplace.


It can be deployed using either Resource Manager or classic mode. After the deployment is done, you should do one last configuration, which is to add a custom endpoint so the setup can be managed externally over HTTPS. This can be done under the security group endpoint settings.
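If the VM was deployed in classic mode, the endpoint can also be added from the Azure Service Management PowerShell module. A sketch, with placeholder service and VM names:

```powershell
# Add an HTTPS endpoint to the Veeam portal VM (classic deployment model)
Get-AzureVM -ServiceName "veeamportal" -Name "veeamportal" |
    Add-AzureEndpoint -Name "HTTPS" -Protocol tcp -LocalPort 443 -PublicPort 443 |
    Update-AzureVM
```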


NOTE: Before managing anything from the portal you need to add a license to the Veeam console; you can get a trial license here –> (Then connect to the virtual machine using RDP)

NOTE: The Cloud Connect setup is already enabled, and the ports are also set up.

After adding the firewall rule (destination port 443, source any), we can configure the portal using the public IP address and port 443. From there we log in with our machine username and password, which was provisioned using the Azure portal.


After logging in to the portal, I am greeted with the configuration wizard.


So we can start by creating a new customer


We then go through the settings like a regular setup and choose a subscription plan.


The next time I log out and log in again, I have a new portal dashboard which gives me a quick overview.


We can also see that there is a new user created with the description “Veeam portal”.


Now, after we add a cloud gateway on the Azure machine, we can connect to it using an existing Veeam infrastructure.


We can then configure a backup copy job and start copying to Azure. The end customer has their own portal (website) that they can access to see their status. They need to log in using companyname\username and password on the same portal.


This is just a small post on what is to come!

Speaking at NIC 2015

In a couple of weeks I am lucky enough to be presenting two sessions at NIC 2015, here in Norway.
The first session I have is about application virtualization vs. application layering, where I will go a bit in-depth on the differences between the two technologies and discuss their respective strengths and weaknesses. For instance, I will cover App-V, ThinApp, and layering technologies such as VMware App Volumes, UniDesk and Citrix AppDisks.

The second session is about delivering Office 365 in a terminal server environment, where I will cover things like delivery options for Office, optimizing Skype/Outlook, and bandwidth and IP requirements. I will also cover the Citrix Optimization Pack for Skype, which now has much better support for Office 365, and lastly troubleshooting slow Office, which is a common issue…

Besides that I will be standing in the Nutanix booth, so if you have time come and say hi!