Veeam Cloud Connect

So today Veeam announced the latest feature in version 8, something I've been wanting for some time now. The feature is known as Cloud Connect –> http://go.veeam.com/v8-cloud-connect

The purpose of this feature is to allow service providers to offer BaaS (Backup as a Service) to customers, letting them connect their Veeam B&R installation to a service provider and use the provider as a remote repository.

This requires that the service provider adds a Cloud Gateway to their infrastructure; customers can then add the provider directly from the console in v8.

 

And since this uses SSL, it allows for true multi-tenancy without the use of VPN appliances between the provider and the customers.

Really looking forward to this feature! I will be writing more about it when it is released.

NetScaler VPX and VMware latency issue

Many releases of NetScaler VPX (builds after 9.2) have had some minor issues with additional latency when running on VMware.

This has been a known issue for quite some time, and of course there has been a workaround available as well.

NetScaler VPX Appliance

  • Issue ID 0326388: In sparse traffic conditions on a NetScaler VPX virtual appliance installed on VMware ESX, some latency might be observed in releases after 9.3 as compared to release 9.2. If this latency is not acceptable, you can change a setting on the appliance. At the shell prompt, type:
    sysctl netscaler.ns_vpx_halt_method=2

    Perform a warm reboot for the above change to take effect. To have the new setting automatically applied every time the virtual appliance starts, add the following command to the /nsconfig/nsbefore.sh file:

    sysctl netscaler.ns_vpx_halt_method=2
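If you prefer to do this from the shell, appending that line could look like this (a minimal sketch; it assumes /nsconfig/nsbefore.sh already exists on your appliance):

    echo 'sysctl netscaler.ns_vpx_halt_method=2' >> /nsconfig/nsbefore.sh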

But! I am happy to say that this has been fixed in the latest build (126.12), so we no longer need to run that command to fix the latency issue.

NetScaler and routing

Something I've been planning to write about for a while, but with all the stuff happening lately it's hard to keep track. This is a question that comes up now and then: how does NetScaler handle route entries?

Now a NetScaler often sits between many different networks, with one leg in the DMZ, one in the internal zone and others in additional zones. Some deployments might be two-armed with more networks attached to the NetScaler, and some require it to use only one VLAN because of security requirements.

image

Now, what decides which network the NetScaler uses to communicate with the backend servers? Since the NetScaler is an L3 device, it uses IP addresses and routing tables to determine where to go.

When you are deploying a NetScaler, one of the requirements is to set up a default gateway and a subnet IP. When you add a default gateway, a route entry is added automatically. This route entry looks like this

image

Which essentially says: all traffic I have no information about will be sent to my default gateway, which is 192.168.88.1.
So if my NetScaler sits on the IP 192.168.88.2 with a prefix of /24 and needs to get in touch with 192.168.89.2, the NetScaler will go through the default gateway.
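On the CLI this shows up in the routing table as a 0.0.0.0/0 entry. A minimal sketch of how the same default route can be viewed and created, using the gateway from my lab above:

    show route
    add route 0.0.0.0 0.0.0.0 192.168.88.1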

Also, when you add a subnet IP (SNIP), another route entry is added automatically, where the SNIP itself is listed as the gateway for reaching its directly connected subnet. This NetScaler has two SNIPs: one in the 192.168.88.0/24 network and another in the 192.168.31.0/24 network.

image

So all traffic destined for the 192.168.31.0 network is sent via the 192.168.31.127 SNIP. Note also that these route entries have a prefix of /24, meaning the NetScaler uses 192.168.31.127 whenever it needs to get in touch with an IP within that range.

Does this mean that the NetScaler might have multiple paths to other subnets? Yes, since my default gateway might also have access to the 31 and the 88 networks. Other layer 3 devices, such as Cisco, look at the prefix and then decide which route is closest to the target; the NetScaler operates only on the cost to get to the remote location. (Thanks to Andrew for that.)

image

Now the default gateway route has a cost of 0

image

But the SNIP routes have no cost value at all

image

Meaning that they are the preferred paths. If I were to have multiple SNIPs with access to a back-end service, there might also be a conflict. This can be resolved using net profiles, which allow you to define which source IP address should be used to connect to the back-end services.

Create: Net-Profile

image

Attach Net-Profile to a service

image
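For reference, the same thing can be done from the CLI. A minimal sketch, where the profile name, source IP and service name are placeholders for my lab:

    add netProfile np_backend -srcIP 192.168.31.127
    set service svc_web1 -netProfile np_backend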

But what if you are required to use a one-armed deployment and need access to several backend networks for the services/probes to work properly?

Then you need to add a new static route, which might look like this. This static route entry says the following: “If you need to access the 192.168.89.0/24 network, go via 192.168.88.1.”

image

This new route will be listed as a static route and will have the same cost as the default gateway, but since it is a more specific match for the targets in the 89 network (a /24 prefix rather than the 0.0.0.0/0 default route), it will be preferred over the default gateway.
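A minimal CLI sketch of adding that static route, using the networks from the example above:

    add route 192.168.89.0 255.255.255.0 192.168.88.1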

So hopefully this clears up some confusion for people out there! :)

Azure RemoteApp

So during TechEd 2014 a couple of weeks ago, Microsoft announced Azure RemoteApp, which for my part was the most exciting thing announced at TechEd. The idea behind it is to be able to publish “regular Windows applications” from Microsoft Azure directly to end-users using RDP.

Now, with the recent release of RDP clients for Android and iOS, this allows customers to access their applications in Microsoft Azure from any device. (Note that the RDP clients were recently updated for Android and iOS, so check for an update.)

Now there isn't any pricing info published for the service since it is currently in beta, but some details have been released:

1: Customers do not need to pay for bandwidth (going in and out)

2: Customers do not need to pay for additional licenses (for instance RDS CALs), just for the applications they need published

3: Microsoft Office 2013 will most likely be a part of it

4: Windows Server 2012 R2 is the only OS supported by Azure RemoteApp, meaning that the applications you want published need to work on 2012 R2

5: If customers want to add their own applications they need to set up a VPN connection

6: Each user gets 50 GB of storage in the RemoteApp service

Now we can also upload our own template image. There are some requirements here that need to be in place.

  • The template image must be created using Windows Server 2012 R2 with Remote Desktop Session Host and the Desktop Experience feature installed.
  • Create a VHD template file. VHDX files aren’t supported.
  • Format the VHD as NTFS.
  • Don’t include an unattended xml config file in the sysprep image.
  • Don’t use VM mode to create a sysprep generalized image.
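A minimal sketch of the sysprep step, following the requirements above (no unattend file and no /mode:vm switch; the path is the default sysprep location):

    C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown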

image

Now here is what a RemoteApp service looks like. Users will be able to access the service (during the preview) at https://www.remoteapp.windowsazure.com/ and after I log in with my user I can start the following Office apps (which are included in the service)

image

Now the RemoteApp client is running RDP underneath

 image

But RemoteApp is not leveraging UDP; it just uses RD Gateway to tunnel the connections to a backend VM

image

But this is indeed going to be an interesting feature! It just needs a bit of polish, maybe UDP support as well, and hopefully a published pricing calculator for RemoteApp.

Veeam B&R 7: a list of issues and solutions

Now I've been working with Veeam for a while, and I've seen that in most cases when a backup job (or a SureBackup job) fails, it's usually not Veeam's fault.

Veeam is a powerful product, but it depends on a lot of external features functioning properly in order to do its job right. For instance, in order to back up from a VMware host you need a VMware license in place that allows Veeam to access the VMware VADP APIs.
If not, Veeam can't back up your virtual machines running on VMware.

Also, in order to do incremental backups properly, Veeam depends on CBT working properly on the hypervisor. The real purpose of this blog post is mostly for my own part: keeping a list of problems/errors that I come across in Veeam and what the fix is for each.

Now in most cases, when running jobs, the job indicator will pinpoint what the problem is. If not, look into the Veeam logs, which are located under C:\ProgramData\Veeam\Logs (ProgramData is a hidden folder). It is also possible to generate support logs directly from the Veeam console –> http://www.veeam.com/kb1832

Issue nr 1# Cannot use CBT when running backup jobs
Cannot use CBT: Soap fault. A specified parameter was not correct. deviceKeyDetail: '<InvalidArgumentFault xmlns="urn:internalvim25" xsi:type="InvalidArgument"><invalidProperty>deviceKey</invalidProperty></InvalidArgumentFault>', endpoint: ''

If CBT is for some reason not available and not being used, Veeam has its own changed-block filter which it uses in these cases. Veeam will then process the entire VM, compare the blocks of the VM with those in the backup on its own to see which blocks have changed, and then copy only the changed blocks to the repository. This makes processing time a lot longer. In order to fix this you need to reset CBT on the guest VM. This can be done by following the instructions here –> http://www.veeam.com/kb1113 and, for Hyper-V CBT, http://www.veeam.com/kb1881

Issue nr 2# SureBackup jobs fail with error code 10061 when running application tests. This most likely happens when a firewall is configured on the guest VM which only allows specific VMs. I have also seen this when a guest VM is in a restarting state. If you do not have a guest VM firewall active, restarting the guest VM and then running a new backup should allow the SureBackup job to run successfully.

Issue nr 3# WAN accelerator fails to install. This might happen if a previous Veeam install has failed on the server. When you try to install the WAN accelerator, the setup just stops for no apparent reason. Something sets the install path of the WAN cache folder to the wrong drive. You need to go into the registry of the VM and change the default paths as seen here –> http://www.veeam.com/kb1828

Issue nr 4# Backup of guest VMs running on a Hyper-V server with Windows Server 2012 R2 Update 1. This is a known issue which requires an update from Microsoft –> http://www.veeam.com/kb1863

Issue nr 5# Application-aware image processing skipped on a Microsoft Hyper-V server. This can have many possible causes; in most cases it is the integration services. A list of the different causes and solutions is available here –> http://www.veeam.com/kb1855

Issue nr 6# Logs not getting truncated on Exchange/SQL guest VMs. This requires application-aware image processing and that the backup job is set to truncate logs –> http://www.veeam.com/kb1878

Issue nr 7# Backup of vCenter servers –> http://www.veeam.com/kb1051

Issue nr 8# Backup using Hyper-V and Dell Equallogic VSS –> http://www.veeam.com/kb1844

Issue nr 9# Incredibly slow backup over the network with no load on the servers. Make sure that all network switches are running full-duplex.

Issue nr 10# Win32 error: the network path was not found. When doing application-aware image processing, Veeam needs to access the VM through the admin share using the credentials defined in the backup job. (For VMware, if the VM does not have network access, VMware VIX is used.) It is possible to change the priority of these protocols –> http://www.veeam.com/kb1230
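A quick way to check whether the admin share is reachable with the job credentials is to test it manually from the backup server. A minimal sketch (the VM name and account below are placeholders):

    net use \\VMNAME\admin$ * /user:DOMAIN\backupuser
    net use \\VMNAME\admin$ /delete

The first command prompts for the password and maps the admin share; the second removes the test connection again.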

Software defined storage and delivering performance

I had no idea what kind of title to use for this post, since it is more about talking through different solutions which I find interesting at the moment.

The last couple of years have shown a huge growth in both converged solutions and software defined X solutions (Where the X can stand for different types of hardware layers, such as Storage, Networking etc)

With this huge growth there are a lot of new “players in the field” in this space. This post is meant to show some of these new players, what their capabilities are and, most importantly, where they fit in. Now I work mostly with Citrix/Microsoft products, and as such there is often a discussion of VDI (meaning stateless/persistent/RDSH/RemoteApp functionality),

and a couple of years ago, when deploying a VDI solution, you needed a clustered virtual infrastructure running on a SAN, and the VMs were constrained by the throughput of the SAN.

Now traditional SANs mostly run with spinning drives since they are cheap and offer huge amounts of storage space. For instance, a PS6110E array http://www.dell.com/us/business/p/equallogic-ps6110e/pd can house up to 24x 3.5” 7,200 RPM disks.

That can add up to 96 TB of data. Now if you think about it, regular spinning disks deliver roughly 120 IOPS each (depending on buffers, latency and rotation speed), and we should also have some kind of RAID set running on the array for redundancy across disks. Using 24 drives with RAID 6 and double parity (not really a good example, but just to prove a point) gives us a total of roughly 2,380 IOPS, which is lower than the SSD drive in my laptop (a rough sketch of the arithmetic follows after the summary below). Of course most arrays come with buffers and caches in different forms and flavors, so my calculation is not 100% accurate. Another issue with a regular SAN deployment is that you depend on a solid networking infrastructure, and any latency there also affects the speed of the virtual machines. So in summary:

* Regular SANs are built for storage space and not for speed
* SANs in most cases also need their own backend networking infrastructure
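To show roughly where a number in that range comes from, here is a back-of-the-envelope sketch. The per-disk IOPS figure and the 70/30 read/write mix are assumptions, and changing the mix changes the result, so treat it as an illustration only:

    # 24 spindles at ~120 IOPS each, RAID 6 write penalty of 6, assumed 70/30 read/write mix
    RAW=$((24 * 120))              # 2880 raw IOPS across all spindles
    READS=$((RAW * 70 / 100))      # ~2016 IOPS serviced as reads
    WRITES=$((RAW * 30 / 100 / 6)) # ~144 effective write IOPS after the RAID 6 penalty
    echo $((READS + WRITES))       # ~2160 usable IOPS, the same ballpark as the figure above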

And based upon these two “issues”, many new companies have their starting grounds. One thing I need to cover first is that both Microsoft and VMware have created their own ways to deal with them. Microsoft's solution is Storage Spaces combined with SMB 3.0. Storage Spaces is a kind of software RAID solution running on top of the operating system, with features such as deduplication and storage tiering, which allows data to be moved between fast SSDs and regular HDDs depending on whether the data is hot or not. Storage Spaces can use either JBOD SAS enclosures or internal disks depending on the setup you want. And with SMB 3.0 we get features such as multichannel and RDMA. Together these make it easier for us to build our own “SAN” using our regular networking infrastructure. Note that this still requires a solid networking infrastructure, but it allows us to create a low-cost SAN with solid performance.

VMware has chosen a different approach with its VSAN technology. Instead of having the storage layer on the “other” side of the network, they built the storage layer right into the hypervisor.

This means the storage layer sits on the physical machine running the hypervisor, so we don't have to think about the network for the virtual machines' performance (even though it is still important to have a good networking infrastructure so the VMs can replicate across different hosts for availability).

Now with VSAN you need to fulfill some requirements in order to get started; since this solution runs locally on each server, you need, for instance, an SSD drive just for the caching part of it. You can read more about the requirements here –> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2058424

So it's fun to see the two approaches side by side:
* Microsoft still has the storage layer outside of the host but dramatically improves the networking protocol and adds storage features on the file server.
* VMware moves the storage layer on top of the hypervisor to bring the data closer to the compute roles.

Now based on these ideas, there are multiple vendors which in essence base their solutions on the same principles.

First off we have Atlantis ILIO http://www.atlantiscomputing.com/products/, which is a virtual appliance that runs on top of the hypervisor. I've written about Atlantis before http://msandbu.wordpress.com/2013/05/02/atlantis-ilio-2/ but in essence what it does is create a RAM disk on each host, with the ability to use the SAN for persistent data (after the data has been compressed and deduped, leaving a very small footprint). This solution allows virtual machines to run completely in RAM, meaning that each VM has access to huge amounts of IOPS. Atlantis also runs on top of each hypervisor, so it sits as close to the compute layer as possible and is not dependent on a high-end SAN infrastructure for persistence.

Atlantis has also recently released a new product called USX, which is a broader software-defined storage solution that lets you create pools of storage containing local drives and/or SAN/NAS (and not just a place to dump persistent data for VDI).

Secondly we have Nutanix, which unlike the others is not a pure software-based approach; they deliver a combined hardware + software platform http://www.nutanix.com/the-nutanix-solution/architecture/#nav with a kind of Lego-based approach, where you buy a node that contains compute and storage locally, and you add more nodes to scale upwards. With Nutanix there are controller VMs running on each node which are used for redundancy and availability. So in essence Nutanix has a solution which resembles VSAN quite a lot, since the storage is local to the hypervisor and there is logic on top that handles redundancy/availability.

And we also have PernixData with their FVP product, which caches and accelerates reads and writes to the backend storage. Reads and writes are served from the aggregated cache (which consists of either a flash card such as Fusion-io or local SSD drives on each node), which allows IO traffic to be offloaded from the backend SAN.

image

 

Now there are also a bunch of other vendors, which I will cover in time. Gunnar Berger from Gartner also made a blog post showing the cost of VDI on different storage solutions http://blogs.gartner.com/gunnar-berger/the-real-cost-of-vdi-storage/ But most importantly, this post is meant to give a bit of awareness of some of the different products and vendors out there, which allows you to think differently. You don't always need to invest in a new SAN or buy expensive hardware to get the performance needed. There are a bunch of cool products out there just waiting for a test drive :)

RemoteFX and vGPU 2012R2 requirements

Now there has been a lot of speculation about RemoteFX with the latest 2012 R2 release. RemoteFX is a set of different features; one of these is the so-called vGPU.

vGPU is a feature which allows us to share GPU hardware between virtual machines. One important thing for those who wish to use the vGPU feature in RemoteFX with 2012 R2 is that it is ONLY supported on client operating systems, meaning it only supports Windows 7/8/8.1 Enterprise editions running as guest VMs on a 2012 R2 server. That means you cannot run an RDSH server and use the vGPU feature.

Microsoft has made a list of the different RemoteFX features and published the compatibility matrix here –> http://blogs.msdn.com/b/rds/archive/2012/11/26/remotefx-features-for-windows-8-and-windows-server-2012.aspx

It is also important to remember that you can only use RemoteFX adapters on a Generation 1 virtual machine (they are not available on Generation 2). You can read more about the configuration and setup here –>

http://social.technet.microsoft.com/wiki/contents/articles/16652.remotefx-vgpu-setup-and-configuration-guide-for-windows-server-2012.aspx

Microsoft has also made a list of different GPUs which make good candidates for RemoteFX vGPU

http://blogs.msdn.com/b/rds/archive/2013/11/05/gpu-requirements-for-remotefx-on-windows-server-2012-r2.aspx

RemoteFX only supports DirectX hardware acceleration; OpenGL support is a feature under consideration. If you are interested in learning how much vRAM is added to VMs using RemoteFX, you can read more about it here –> http://blogs.msdn.com/b/rds/archive/2013/12/04/remotefx-vgpu-improvements-in-windows-server-2012-r2.aspx

If you are having some issues with performance make sure that you have the latest drivers from the GPU vendor.

Citrix Synergy 2014 day 1 summary

So Citrix just had the first keynote of their annual Synergy conference, and there are some exciting new features coming this year. For those that haven't seen or read about the updates, here is a quick list.

* Citrix Workspace Suite (a bundle which combines XenDesktop Platinum and XenMobile Enterprise), which is available now. http://www.citrix.com/products/citrix-workspace-suite/overview.html

* Citrix Receiver X1 (a new receiver which combines MDX and HDX technology into one and the same receiver).

* Google Receiver (a new HTML5 receiver is coming for Chromebooks, which will have USB redirection and the like).

* Citrix XenMobile 9 (with support for things like Windows Phone 8 and so on, more here –> http://www.citrix.com/news/announcements/may-2014/xenmobile-synergy-announcement.html) and with new Worx apps such as WorxNotes, WorxEdit and WorxDesktop (which is a GoToMyPC app).

* NetScaler 10.5, which has more features related to mobile traffic and with it MobileStream http://www.citrix.com/news/announcements/may-2014/netscaler-synergy-announcement.html (a blog post on 10.5 is coming later when I am allowed to publish it :) ). The beta is available now.

* Citrix Workspace Services (a cloud platform to deliver DaaS and virtual apps), which is cloud agnostic; Azure is one of the options, but you also have Amazon, SoftLayer and so on, which allows you to create services on any type of cloud provider. You can read more about this from Brad Anderson at Microsoft here –> http://blogs.technet.com/b/in_the_cloud/archive/2014/05/06/collaborating-with-citrix-on-the-future-of-daas.aspx
A tech preview of this is coming in the second half of 2014, but you can sign up for the tech preview when available here –> http://deliver.citrix.com/WWTP0514CCPSMCLOUDWORKSPACESVCS.html and a bit more info here –> http://blogs.citrix.com/2014/05/06/citrix-workspace-services-mobile-workspaces-on-the-best-cloud-yours/

* Updates for ShareFile with new connectors! (http://www.citrix.com/news/announcements/may-2014/sharefile-synergy-announcement.html) This makes it easier to connect to personal file storage providers such as OneDrive, Dropbox etc.

So far a good day 1; looking forward to the day 2 keynote.

Azure Multi-Factor Authentication and NetScaler AAA vServer

Microsoft has done a great job adding features to its cloud platform over the last year, one of which is Azure MFA (Multi-Factor Authentication). It allows a user to log in with his/her username and password plus a second factor, which might be a PIN code, a one-time passcode or something else.

Now, just to show how we can use Azure MFA with non-Windows services, I decided to give it a try with a Citrix NetScaler AAA vServer. Here is an overview of what the setup looks like.

Azure MFA requires a local server component which proxies authentication attempts between the client and the authentication server. In my case I use the MFA component as a RADIUS server, which then proxies RADIUS requests to the AD domain and adds the two-factor component on top.

image

The NetScaler AAA vServer can be used to proxy authentication attempts for backend services such as Exchange, RDWeb and so on. This is the same type of authentication that is used when logging into a NetScaler Gateway session.

For the purpose of this demonstration, I set up a load balanced web service which consists of two web servers. The web servers themselves have no authentication providers, so I needed to create an AAA vServer on the NetScaler which users will be redirected to in order to authenticate before seeing the web content.

image

So a simple load balanced service, and then I added an AAA vServer to it.

image
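For reference, a minimal CLI sketch of that load balanced service; the server IPs and names below are placeholders for my lab:

    add service svc_web1 192.168.88.21 HTTP 80
    add service svc_web2 192.168.88.22 HTTP 80
    add lb vserver lb_web HTTP 192.168.88.20 80
    bind lb vserver lb_web svc_web1
    bind lb vserver lb_web svc_web2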

Note that aaa.test.local is an internal address handled on the NetScaler (make sure that DNS is in place and a name server is added to the NetScaler). In order to create the AAA vServer, go into Security –> AAA –> Virtual Servers and choose to create a new one.

There we need to create a new server, and make sure that the domain name is correct and that a trusted certificate is added

image
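The same AAA vServer, its certificate binding and the redirect from the load balanced vServer can also be created from the CLI. A minimal sketch; the IP, certificate key and names are placeholders for my lab:

    add dns nameServer 192.168.88.5
    add authentication vserver aaa_vs SSL 192.168.88.30 443
    bind ssl vserver aaa_vs -certkeyName cert_test_local
    set lb vserver lb_web -authentication ON -authenticationHost aaa.test.local -authnVsName aaa_vs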

Then under Authentication we need to define an authentication server. This can be set up to forward authentication attempts to RADIUS, LDAP, LOCAL, SAML and so on. Since we want to use Azure MFA here, we use RADIUS.

In my case I created an authentication policy using the expression ns_true, which means that all users going through the NetScaler will receive this policy.

image

My authentication policy looks like this. The authentication server here is the server which is going to have the Azure MFA service installed (I also predefined a secret key). It is also important that the timeout is set to 60 seconds; this is to allow enough time for the authentication to finish.

image
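The equivalent RADIUS policy from the CLI would look roughly like this; the server IP, shared secret and names are placeholders, and the 60 second timeout matches the setting above:

    add authentication radiusAction azure_mfa_radius -serverIP 192.168.88.50 -serverPort 1812 -radKey MySharedSecret -authTimeout 60
    add authentication radiusPolicy azure_mfa_pol ns_true azure_mfa_radius
    bind authentication vserver aaa_vs -policy azure_mfa_pol -priority 100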

Remember, certificates are important here! If the clients do not trust the certificate, you will get HTTP 500 error messages.

Now, after this is done, we can start setting up Azure MFA. First off, make sure that you have some sort of DirSync solution in place so that you can bind a local user to a user in Azure AD. If you do not have this, just google DirSync + Azure and you'll get a ton of blog posts on the subject :)

In my case I didn't have DirSync set up, so I created a new local UPN which matched the username@domain in Azure so that the MFA service could bind a local user to an Azure user.

Firstly you need an Azure AD domain

image

Then choose create new multi-factor auth provider

image

After you have created the provider, mark it and choose Manage. From there you can download the software.

image

Now download the software and make sure that you have a server to install it on. When installing the server components you are asked to enter a username and password for authentication; this user can be generated from the Azure portal.

image

You are also asked to join a group; this is the same group that you created when setting up the multi-factor authentication provider in Azure.

During the installation wizard you are asked whether to use the quick setup; here you can have the wizard configure RADIUS automatically.

image

You are also asked to enter the IP address of the RADIUS client; this is the NetScaler NSIP.

image

After you are done here, finish the wizard and start the MFA application. Firstly make sure that the RADIUS client info is correct

image

Then go into Target. Since we want the MFA server to proxy connections between the RADIUS client and the AD domain, choose Windows Domain as target

image

Then go into Directory Integration and choose either Active Directory or choose specific LDAP config if you need to use another AD username and password.

image

Next go into Users and choose which users are enabled for two-factor authentication. In my case I only want one. Here I can define what type of two-factor method I want to use for my user.
If I choose phone call with PIN, I get an auto-generated phone call where I can enter my PIN code directly.

image

I have also added my phone number so the service can reach me with an OTP. After all this is set up, I can try to log in to my service.

image 

I log in with my username and password and voilà! I get this text message on my phone.

Screenshot_2014-05-06-01-00-32

After I reply with the verification code, I am successfully authenticated to the service.

image

VMCE study guide

Now, for those working with Veeam, a hot topic these days is the VMCE (Veeam Certified Engineer) certification. In order to take the exam you first need to attend a 3-day technical course which covers the syllabus; after that you are allowed to sit the exam.

The exam consists of 50 randomly selected multiple-choice questions, and you need 70% to pass.

As a Veeam instructor I get questions about where to find more info on the different subjects, and a bit more about best practices for each subject.

Therefore I created this study guide, which consists of links for each module in the syllabus. First of all you need to take the course, familiarize yourself with the GUI and where the options are located. Know the different components, where they can be placed and how traffic flows between them, and look at some sample scenarios, for instance those listed in the evaluator's guide.

Sample guides:
Support for Hypervisors:
Hyper-V: http://veeampdf.s3.amazonaws.com/datasheet/product-info-veeam-support-for-windows-server-2012-r2.pdf?AWSAccessKeyId=AKIAJI4MX44AEVG3NBLA&Expires=1398891896&Signature=9TThHxabKQdTtUd7SGpFtqmhPdU%3D
VMware: http://veeampdf.s3.amazonaws.com/datasheet/product-info-veeam-support-for-vsphere-5-5.pdf?AWSAccessKeyId=AKIAJI4MX44AEVG3NBLA&Expires=1398891896&Signature=HSvgsxXXvEk%2B%2BovZDrD%2FMat%2BvMU%3D

Best-practice for backup and replication deployment:
http://veeampdf.s3.amazonaws.com/guide/veeam_backup_7_0_deployment_vmware.pdf?AWSAccessKeyId=AKIAJI4MX44AEVG3NBLA&Expires=1398891903&Signature=F5KVQ5urIkTL%2BFXbRAcp6T5kHgs%3D

Best-practice for HP storage and Veeam:
http://veeampdf.s3.amazonaws.com/guide/wp_veeam_hp_configuration_2.pdf?AWSAccessKeyId=AKIAJI4MX44AEVG3NBLA&Expires=1398891915&Signature=Qm6cHw%2FqTRwZbLaqQB3pZzBlaMk%3D

Evaluators guide for VMware:
http://veeampdf.s3.amazonaws.com/guide/veeam_backup_evaluators_guide_7_vmware.pdf?AWSAccessKeyId=AKIAJI4MX44AEVG3NBLA&Expires=1398891922&Signature=Qj0COAOPG5qq8O8r3HvVRbfLVFU%3D

Syllabus:

Backup Methods
http://helpcenter.veeam.com/backup/70/hyperv/index.html?backup_methods.html

Scheduling
http://helpcenter.veeam.com/backup/70/hyperv/scheduling.html
 
Changed Block Tracking (CBT)
http://helpcenter.veeam.com/backup/70/hyperv/changed_block_tracking.html

Compression and Deduplication
http://helpcenter.veeam.com/backup/70/hyperv/compression_deduplication.html

Retention Policy
http://helpcenter.veeam.com/backup/70/hyperv/retention_policy.html

Auto Discovery of Backup and Virtual Infrastructure
http://helpcenter.veeam.com/one/70/vsphere/configuring_veeam_one_monitor.html

Business Categorization
http://helpcenter.veeam.com/one/70/vsphere/assigning_categorization_value.html

Pre-Defined Alerting
http://helpcenter.veeam.com/one/70/vsphere/appendix_alarm_rules_events.html

http://helpcenter.veeam.com/one/70/vsphere/alarms.html

Agentless data gathering
http://helpcenter.veeam.com/one/70/vsphere/introducing_veeam_one_business_view.html?zoom_highlightsub=agentless

Hyper-V specific features
http://veeampdf.s3.amazonaws.com/guide/veeamone_7_0_deployment_guide.pdf?AWSAccessKeyId=AKIAJI4MX44AEVG3NBLA&Expires=1398887835&Signature=CMfYdlrWg9qEN4kbcOJdWH%2Fidps%3D

Veeam One Deployment
http://veeampdf.s3.amazonaws.com/guide/veeamone_7_0_deployment_guide.pdf?AWSAccessKeyId=AKIAJI4MX44AEVG3NBLA&Expires=1398887835&Signature=CMfYdlrWg9qEN4kbcOJdWH%2Fidps%3D

http://helpcenter.veeam.com/backup/70/vsphere/install_vbr.html

Deployment Scenarios
http://helpcenter.veeam.com/backup/70/vsphere/deployment_scenarios.html
http://helpcenter.veeam.com/backup/70/vsphere/components.html

Prerequisites
http://helpcenter.veeam.com/backup/70/vsphere/planning.html
 
Upgrading Veeam Backup & Replication
http://helpcenter.veeam.com/backup/70/vsphere/upgrade_vbr.html

Adding Servers
http://helpcenter.veeam.com/backup/70/vsphere/setup_addserver.html

Adding a VMware Backup Proxy
http://helpcenter.veeam.com/backup/70/vsphere/add_vmware_proxy.html

Adding a Hyper-V Offhost Backup Proxy
http://helpcenter.veeam.com/backup/70/hyperv/add_hyperv_proxy.html

Adding Backup Repositories
http://helpcenter.veeam.com/backup/70/hyperv/setup_addrepo.html

Performing Configuration Backup and Restore
http://helpcenter.veeam.com/backup/70/hyperv/export_vbr_config.html
http://helpcenter.veeam.com/backup/70/hyperv/restore_vbr.html

Creating Backup Jobs
http://helpcenter.veeam.com/backup/70/hyperv/backup_job.html
http://helpcenter.veeam.com/backup/70/hyperv/options_parallel_processing.html

Creating VM Copy Jobs
http://helpcenter.veeam.com/backup/70/vsphere/index.html?vm_copy.html

Instant VM Recovery
http://helpcenter.veeam.com/backup/70/hyperv/performing_instant_recovery.html

Insight into Replication
http://helpcenter.veeam.com/backup/70/hyperv/index.html?intro.html

Insight into Failover
http://helpcenter.veeam.com/backup/70/hyperv/performing_failover.html

Insight into Failback
http://helpcenter.veeam.com/backup/70/hyperv/performing_failback.html

SureBackup Recovery Verification
http://helpcenter.veeam.com/backup/70/vsphere/recovery_verification.html

SureReplica
http://helpcenter.veeam.com/backup/70/vsphere/recovery_verification_surereplica.html

Restoring Microsoft Exchange and SharePoint objects
http://helpcenter.veeam.com/backup/70/vsphere/vex.html
http://helpcenter.veeam.com/backup/70/vsphere/working_with_vesp.html

Working with Veeam Backup & Replication Utilities
http://helpcenter.veeam.com/backup/70/hyperv/extract_utility_console_restore.html

3-2-1 rule
http://www.veeam.com/blog/how-to-follow-the-3-2-1-backup-rule-with-veeam-backup-replication.html

Working with Tape Media
http://helpcenter.veeam.com/backup/70/vsphere/working_with_tape_media.html

Wan Accelerator
http://helpcenter.veeam.com/backup/70/vsphere/wan_add.html
http://helpcenter.veeam.com/backup/70/vsphere/wan_acceleration.html

Offsite Backup Copy Job
http://helpcenter.veeam.com/backup/70/hyperv/offhost_proxy_advanced.html
http://helpcenter.veeam.com/backup/70/hyperv/backup_copy_job.html

Delegate file and VM restores with Veeam Backup Enterprise Manager
http://helpcenter.veeam.com/backup/70/em/performing_1-click_file_restore.html
http://helpcenter.veeam.com/backup/70/em/1click_vm_restore.html

Veeam Backup Enterprise Manager RESTful API
http://helpcenter.veeam.com/backup/70/em/used_ports.html

HP StoreVirtual VSA
http://helpcenter.veeam.com/backup/70/vsphere/hp_san_support.html
http://helpcenter.veeam.com/backup/70/vsphere/hp_san.html

Product Editions Comparison
http://helpcenter.veeam.com/backup/70/vsphere/editions.html
