Veeam Management Pack for Hyper-V and VMware walkthrough

Yesterday, Veeam released their new management pack, which for the first time includes support for both VMware and Hyper-V. Now, I have gotten a lot of questions along the lines of "Why have Hyper-V monitoring if Microsoft already provides it?" Well, Veeam’s pack has a lot more features included, such as capacity planning, heat maps and so on.

The management pack can be downloaded as a free trial from Veeam’s website here –> http://www.veeam.com/system-center-management-pack-vmware-hyperv.html

Now, as for the architecture, it’s quite simple.

image

First off, there are two components.

* Veeam Virtualization Extensions (Service and UI) – manages connections to VMware systems and the Veeam Collector(s), controlling licensing, load balancing and high availability

* Veeam Collector – gathers data from VMware and injects the information into the Operations Manager agent

It is possible to install all of these components on the management server itself. You can also install the collector service on other servers which have the OpsMgr agent installed, but the Virtualization Extensions service must be installed on the management server.

In my case I wanted to install this on the management server itself, since I have a small environment. Before I started the installation I needed to make sure that the management server was operating in proxy mode.


Next I started the installation on the management server. As with all of Veeam’s setups, it can automatically configure all prerequisites and is pretty straightforward. (Note: it will automatically import all required management packs into SCOM.)

If you have a large environment it is recommended to split the collectors out onto different hosts and create a resource pool (there is an online calculator which can help you find out how many collectors you need): http://www.veeam.com/support/mp_deployment.html
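As a back-of-the-envelope illustration of that kind of sizing, the math is just a ceiling division plus a spare for the resource pool. The per-collector capacity below is a made-up placeholder, not an official Veeam figure – use their calculator for real numbers:

```python
# Rough collector sizing sketch. VMS_PER_COLLECTOR is a HYPOTHETICAL
# placeholder, not an official Veeam figure -- use Veeam's deployment
# calculator (linked above) for real sizing.
import math

VMS_PER_COLLECTOR = 1000  # assumed capacity per collector host

def collectors_needed(total_vms: int, redundancy: int = 1) -> int:
    """Number of collector hosts, plus spares for the resource pool
    so that a collector can fail without losing monitoring data."""
    base = math.ceil(total_vms / VMS_PER_COLLECTOR)
    return base + redundancy

print(collectors_needed(2500))  # 3 collectors + 1 spare = 4
```

The point is simply that collectors scale out horizontally; the resource pool handles failover between them.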

You can also define if collector roles should be automatically deployed


After the installation is complete (using the default ports) you will find the extensions shortcut on the desktop


By default this opens a website on localhost (using port 4430). From here we need to enter the connection information for VMware (Hyper-V hosts are discovered automatically once they have the agent installed), and the same goes for Veeam Backup servers.


After you have entered the connection info you will also get a header showing the recommended number of collector hosts.


After setup is finished you can open the OpsMgr console. From here there is one final task needed, which is to configure the Health Service. This can be done from the tasks under All Active Alerts in the VMware monitoring pane.

image

After this is done you should expect at least 15 minutes before data is populated into your OpsMgr servers, depending on the load. You can also check the event logs on the OpsMgr servers to verify that data is being imported correctly.

image

and after a while, voila!

I can, for instance, view info about storage usage

image

 

VM information

image

Now, I could show graphs and statistics all day, but one of the cool things in this release is the cloud capacity planning reports.

image

They allow me to see, for instance, how many virtual machines I would need in Azure (and of what type) in order to move my workloads there.

image

Setting up the Management Pack for SQL Server 2012 with Operations Manager

Microsoft recently released their new management pack for SQL Server, which includes new dashboard views for SQL administrators. This is nice, and it gives a clear overview of the current configuration and what’s happening at the moment. It supports most SQL Server versions, except 2014 (yet…).

| Version | 32-bit SQL Server on 32-bit OS | 32-bit SQL Server on 64-bit OS | 64-bit SQL Server on 64-bit OS |
| --- | --- | --- | --- |
| SQL Server 2005 | Supported | Supported* | Supported (for SQL Server 2005 SP2 or later only) |
| SQL Server 2008 | Supported | Supported* | Supported |
| SQL Server 2008 R2 | Supported | Supported* | Supported |
| SQL Server 2012 | Supported | Supported* | Supported |

Another heads up: by default, Microsoft recommends that you monitor no more than 50 databases per agent to avoid spikes in CPU usage. Also note that not all features are discovered and monitored by default.

image

So in order to get monitoring of these features we need to create an override on the different objects. This can be done by going into Authoring –> Management Pack Objects –> Object Discoveries –> Scope –> (choose the SQL objects), then right-clicking the discovery for the feature which is not enabled and choosing Override –> choose the class you want the override to apply to. Then store it in a new management pack and set Enabled = True.

 image

Now the installation is quite simple, head to this webpage and download the management packs –> http://www.microsoft.com/en-us/download/details.aspx?id=10631

Download and then go into the Operations Manager Console –> Administration –> Management Packs –> Import

(NOTE: it is also a best practice to import the Windows Server management packs first, in order to properly monitor aspects such as disk, processor, memory and network.)

After we have imported the management packs we get a lot of tasks related to SQL in the dashboard.

image

In order to use these we need to install SQL Server Management Studio on the Operations Manager server. After you have imported the management pack you also need to attach accounts to the SQL Run As profiles in order to have proper access to the SQL servers.

This post from Kevin Holman, which is a couple of years old, still explains in good detail how to configure the profiles and accounts –> http://blogs.technet.com/b/kevinholman/archive/2010/09/08/configuring-run-as-accounts-and-profiles-in-r2-a-sql-management-pack-example.aspx

Now, after you have properly configured the management pack with permissions, it’s time to do some proper tuning. For instance, by default the management pack will not alert on database backups, which I find a bit disturbing since this is something I would really want to monitor.

In order to enable this monitor, go into the Monitoring pane –> Microsoft SQL Server –> SQL Server Database Engines –> Databases –> Databases view, choose any SQL database, right-click and choose Open –> Health Explorer (click Filter Monitors so that healthy objects are shown as well). You can then see that Database Backup Status does not have any info.

image

Now right-click on that monitor and choose Override –> set Enabled to True, and define how old the last backup is allowed to be before the monitor should alert.

image

After you have stored this in a management pack and chosen which objects the monitor should apply to, Operations Manager should start reporting back with the status of whether you have taken a backup or not.

I can see that after enabling the monitor, the alert appears.

image

Now, there are of course loads of different monitors, which will cause overhead on the SQL server and on Operations Manager. Many monitors are enabled by default because Microsoft thinks it’s a good idea, but you or your database admin might not agree. It is important to remember that Microsoft has to create a baseline which applies to most setups, but if you don’t agree you can customize to your heart’s desire!

So I live by a rule with Operations Manager: if you don’t need to monitor it, disable it!

Last but not least, a preview of the new dashboards which come with the new management pack:

image

Dell VRTX and disk management

Dell introduced their new VRTX platform last year. For those who haven’t heard about it, the VRTX was the start of fully-converged hardware solutions from Dell. It’s a nifty piece of hardware which allows up to 4 nodes and 25 hard drives to share a midplane, allowing simultaneous access to the storage.

No compromise on scalable performance

It is also possible to put some PCI-E cards in there as well

  • 8 flexible PCIe slots:
    • 3 full-height/full-length slots (150W) with double-wide card support (225W)
    • 5 low-profile/half-length slots (25W)

All within the same tower, making it an ideal solution for remote sites/offices in need of a small powerhouse.

The VRTX comes with a custom-made disk management solution, which operates differently from the other Dell software. So for instance, if we need to create a Hyper-V cluster we need to have shared storage in place; the disk setup is configured from the CMC.

image

(NOTE: after we have configured the disks, they will not appear in the iDRAC of the individual nodes. This is by design; they will appear when setting up the OS.)

First you need to go into Storage –> Virtual Disks.

(NOTE: make sure you have updated the PERC before creating the virtual disks!)

image

Choose Create Virtual Disk –> from here you need to select the number of disks you want to add to the virtual disk.

image

Important to note here: in my case I want to create a RAID 10 set with one hot spare (since RAID 10 has the best read performance of the different RAID levels and is not very dependent on a good controller). Ref: https://publib.boulder.ibm.com/infocenter/eserver/v1r2/index.jsp?topic=%2Fdiricinfo%2Ffqy0_cselraid.html
First I create the RAID 10 set and define my storage policies. Note that if you have a newer Dell VRTX with dual PERC, you do not have the ability to use write cache, and you will not get the performance you want.
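As a quick sanity check on a layout like this, the usable-capacity arithmetic for RAID 10 with a hot spare is simple. The disk count and size below are illustrative examples, not my actual hardware:

```python
# RAID 10 mirrors every data disk, so usable capacity is half of the
# disks left over after reserving hot spares.

def raid10_usable_tb(total_disks: int, hot_spares: int, disk_tb: float) -> float:
    """Usable capacity in TB for a RAID 10 set carved from total_disks,
    with hot_spares disks held back. Data disk count must be even."""
    data_disks = total_disks - hot_spares
    if data_disks % 2:
        raise ValueError("RAID 10 needs an even number of data disks")
    return (data_disks / 2) * disk_tb

# Example: 9 x 1.2 TB drives -> 8 in the RAID 10 set, 1 hot spare
print(raid10_usable_tb(9, 1, 1.2))  # 4.8
```

So you pay for the read performance with half the raw capacity, plus whatever you set aside as spares.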

After you are done choosing disks and policies, choose Create Virtual Disk. When the operation is complete you can go to Manage and choose Assign Hot Spare.

image

Next we have to do the mapping; go back to the Storage pane. From here we need to set the virtual disk to multiple assignment.

image

It is important to note that a virtual disk can only be assigned to a virtual adapter, and a virtual adapter can only be assigned to ONE server. From here you can see that my server slot mappings are assigned a virtual adapter by default.

Now go back to the Virtual Disks pane and choose Assign; from here we can assign the virtual disk to a virtual adapter. Here we choose Full Access.

image

Then choose Apply. Going back to the Storage pane, you can see an overview of the storage layout.

image

Configuring Front-end optimization with Citrix Netscaler

One of the new features in NetScaler 10.5 is Front-End Optimization (which is part of NetScaler Enterprise edition and up), which allows the NetScaler to optimize HTTP traffic headed back to the client. Let us take a look at some of the different settings.

image

Now, first off we have the JavaScript section.
* Make Inline (makes JavaScript files which are linked from a page inline instead; only affects JS smaller than 2 KB)
* Minify (removes whitespace and comments from JS)
* Move to end of body tag (moves an inline JavaScript to the end of the body tag)

Images
* Shrink to attributes (shrinks an image to the size specified in the HTML tag)
* Make Inline (makes images which are linked from a page inline instead; only affects images smaller than 2 KB)
* Optimize (removes non-image data from JPEGs, such as comments)
* Convert GIF to PNG (converts images from GIF to PNG)
* Lazy Load (downloads images as the user scrolls down to them)

CSS
* Make Inline (makes CSS files which are linked from a page inline instead; only affects CSS files smaller than 2 KB)
* Combine (combines multiple CSS files into one)
* Move to head tag (moves CSS defined in the body tag to the head tag)
* Image Inline (makes images referenced in the CSS file, such as backgrounds, inline)
* Convert Imports to Links (converts CSS import statements to HTML link tags)
* Minify (removes whitespace and comments from CSS)

HTML
* Remove Comments from HTML (removes comments within the HTML files)

There are also two additional options here:
* Extend Page Cache
* Enable Client Side Measurements
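To get a feel for what the Minify options do, here is a deliberately naive sketch of comment and whitespace stripping. The NetScaler’s real minifier is of course far more careful (string literals, regex literals and so on); this is only to show the idea:

```python
import re

def naive_minify(source: str) -> str:
    """Very rough illustration of JS/CSS minification: strip block and
    line comments, then collapse runs of whitespace. NOT safe for real
    code (it ignores strings and regex literals)."""
    source = re.sub(r"/\*.*?\*/", "", source, flags=re.DOTALL)  # /* ... */
    source = re.sub(r"//[^\n]*", "", source)                    # // ...
    return re.sub(r"\s+", " ", source).strip()                  # whitespace

js = """
/* banner comment */
var x = 1;   // counter
var y = 2;
"""
print(naive_minify(js))  # var x = 1; var y = 2;
```

Fewer bytes on the wire, same behavior in the browser – that is the whole trick.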

Now you can take a look at how HTML will look after it has been parsed through this feature here –> http://support.citrix.com/proddocs/topic/ns-optimization-10-5-map/ns-feo-working-use-case.html

Now that you have some understanding of what it does, let’s go ahead and configure it. First we need to enable the feature, along with Integrated Caching (since this is a prerequisite).

Enable both features

image

Now, by default there are some premade actions which define which options are enabled. For instance, the aggressive policy has most of the optimizations enabled.

image

Now, let’s say we have a predefined load-balanced server (which in my case is hosting a WordPress site); the vServer is called WEB-IIS. Go into Front-End Optimization –> Policy Manager.

Here choose bind point, and virtual server

image

Next we need to bind a policy to the bind point. Remember that here we need to create a policy using an expression and attach it to the bind point.

image

I used the HTTP.REQ.HOSTNAME expression here, so in my case when a user accesses demo-webopt, the user will be affected by the policy.

After you have added the policy, press OK then DONE and you are good to go.

So try to access the page and watch the statistics.

Now we can see that it has already managed to do some optimization after I tried to access the page a couple of times.

image

So this feature allows web developers to comment inline code without affecting the users, and to keep a solid structure in their CSS and JS without affecting performance. Note that this feature is not suitable for all web applications, so be sure to test it properly first.

Netscaler 10.5 review

Since the release of 10.5 I have been able to test a lot of the new features in the latest release. Citrix has also released new versions of Insight and of the Endpoint clients for Windows & Mac to match the new release.

The upgrades have so far, for my part, been non-problematic from both 9.3 and 10.1 builds (if you have a custom GUI you may need to recreate it). For those planning a migration, please refer to the migration document from Citrix: http://support.citrix.com/proddocs/topic/ns-faq-map-10-5/ns-faq-migration.html

I have also seen a performance increase in some scenarios.

There has also been an update to the clustering features which didn’t catch my eye at first: http://support.citrix.com/proddocs/topic/ns-system-10-map/ns-cluster-feat-supp-ref.html It allows us to have a NetScaler Gateway vServer running on a local NetScaler node.

Now, the new build is 99% pure HTML, which is great! There are still some features which require JRE, but this is going to be fixed in a future release.

The following features or nodes still require JRE:

  • System
    • Upgrade Wizard
    • Diagnostics
    • User Administration
      • Command Policies
      • Command Policy RegEx Editor
  • Visualizers
    • Network > Network Visualizer
    • Network > TCP/IP connections
    • Traffic Management > Load Balancing > Visualizer
    • Traffic Management > Content Switching > Visualizer
    • Traffic Management > GSLB > Visualizer
  • Security
    • Application Firewall
      • Application Firewall wizard
      • Add/ Edit/ Import profiles
      • Signatures
        • Add
        • Update Version
        • Auto Update Settings

Citrix has also made integrations with their own products easier, such as XenDesktop/XenMobile/ShareFile and so on, which makes it easier for consultants to deploy a NetScaler solution to provide availability for those products.

Now all of the new features are listed here –> http://support.citrix.com/proddocs/topic/ns-rn-main-release-10-5-map/netscaler-10-5-rn.html

The thing I find to be the most important feature in the latest build (besides the new GUI) is Front-End Optimization, which allows the NetScaler to reduce load and render times for web pages rendered in a client browser. After some initial tests with this feature I was able to cut 60% of the load time. In most cases a web site is not optimized for speed, and the NetScaler might therefore be an important piece there.

But to sum it up so far: I’m really impressed with the latest release and how Citrix has made the NetScaler even more powerful, with more than 100 new features, making it even more of a key component in most datacenters. Looking forward to the later releases to see what Citrix has up their sleeve!

Task sequence best practices for Configuration Manager 2012

Microsoft has recently released a document series containing best practices for task sequences in Configuration Manager 2012 R2. The series goes step by step through all the different steps in a task sequence.

For all IT-pros working with task sequences I suggest taking a look at the document series here –> http://www.microsoft.com/en-us/download/details.aspx?id=43412

In other related news, my fellow MVP Kent Agerlund has released a new, updated book for Configuration Manager 2012 R2, Mastering the Fundamentals, which you should also take a look at if you want to learn more –> http://www.amazon.co.uk/System-Center-2012-Configuration-Manager/dp/9187445085/ref=sr_1_1?ie=UTF8&qid=1404503771&sr=8-1&keywords=kent+agerlund

Netscaler Insight with Appflow and Session reliability

A couple of days ago I was involved in a case where ICA sessions were suddenly disconnected and the users were unable to reconnect. The setup was a simple ICA proxy Access Gateway using the latest build (126), and there were no error messages on the StoreFront server.

After involving Citrix support they recommended that we disable AppFlow for the access gateway (since this deployment used Netscaler Insight to monitor ICA sessions) then suddenly things started to work again.

Now, I knew that I had seen this issue somewhere on Twitter before, and a quick tweet revealed that someone else had seen the issue as well.

image

So apparently using Appflow with session reliability is a NO-GO!

If someone has managed to test this with 10.5 please give me some feedback if this has been fixed!

Netscaler 10.5 what’s in it

So there has been some buzz regarding the latest release, NetScaler 10.5 (codename Tagma), which was supposed to be the death of the Java GUI within the NetScaler. Not quite there yet…

So what has Citrix improved or added in this release? Well, quite a lot. Citrix states that they have added over 100 new features. Beta 1 has just been released to partners, and beta 2 is on its way, coming mid-May.

http://www.citrix.com/tv/#videos/10995

In the betas that are coming there will be more templates for apps and load balancing. But let us focus on the news that has arrived now.

  • HTML5 based GUI
  • NITRO SDK for Python
  • NITRO for File Operations
  • NITRO for ZebOS system
  • GSLB Static proximity sync
  • SSL configuration Profiles
  • CNAME record caching
  • Multiple Port CS
  • AAA Session Stickiness
  • Kerberos Performance
  • Jumbo Frames
  • Link Redundancy
  • TCP BIC and CUBIC
  • SPDYv3 Gateway
  • SDX Manageability
  • Front End Optimization
  • Insight Center Enhancements
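Speaking of the NITRO SDK for Python: NITRO is ultimately a REST API, so as a rough idea of what a call looks like on the wire, here is a stdlib-only sketch. The address and credentials are placeholders, and you should verify resource paths and headers against the NITRO documentation for your build:

```python
import urllib.request

def nitro_request(nsip: str, resource: str, user: str, password: str):
    """Build a NITRO REST request for a config resource such as
    'lbvserver'. NITRO exposes configuration under
    /nitro/v1/config/<resource> on the NetScaler management IP."""
    url = "http://%s/nitro/v1/config/%s" % (nsip, resource)
    req = urllib.request.Request(url)
    req.add_header("X-NITRO-USER", user)  # NITRO credential headers
    req.add_header("X-NITRO-PASS", password)
    req.add_header("Content-Type", "application/json")
    return req

# Placeholder address and credentials -- not a real appliance.
req = nitro_request("192.0.2.10", "lbvserver", "nsroot", "nsroot")
print(req.full_url)  # http://192.0.2.10/nitro/v1/config/lbvserver
# urllib.request.urlopen(req) would then return JSON describing the vservers
```

The official SDK wraps these calls in Python classes, but the underlying request shape is the same.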

First off is the GUI, which is now mostly pure HTML5 and quite snappy! I would say that about 80% of the GUI is now HTML5; some features such as running a trace still use Java (I’m guessing this is something that will get fixed in a later release).

image

So what is new under the licensing part? We can see some new features appearing here, such as Integrated Disk Caching and RISE (which is part of the Cisco platform).

image

There are also two new “features” within Traffic Optimization:

image

* Front End Optimization (which transforms data being sent back to the user, such as converting image files)

image

And we have Content Accelerator (which is used for integration with Citrix ByteMobile).

Setting up a new NetScaler Gateway is also a lot easier, since we don’t need the Java part anymore.

image

There is also support for LLDP, an information exchange protocol much like CDP (from Cisco). Here is a comparison between the old GUI and the new GUI:


There is also a list of new built-in monitors:

image

There is also support for LACP on interfaces, which allows you to team NICs.



Citrix has also added some basic wizards which allow for easier setup against XenDesktop / ShareFile and the like.


Also SSL profiles and DTLS profiles


We also have support for Jumbo Frames which allows for up to 9000 bytes of payload instead of 1500.
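The reason jumbo frames help is that the fixed per-frame overhead gets amortized over six times as much payload. A quick calculation, counting the usual 18 bytes of Ethernet framing and 40 bytes of IPv4/TCP headers (preamble and inter-frame gap ignored for simplicity):

```python
# Share of each Ethernet frame that is actual TCP payload, for a given MTU.
ETH_OVERHEAD = 18     # Ethernet header (14) + FCS (4)
IP_TCP_OVERHEAD = 40  # IPv4 (20) + TCP (20), no options

def tcp_efficiency(mtu: int) -> float:
    """Fraction of the on-wire frame carrying TCP payload."""
    payload = mtu - IP_TCP_OVERHEAD
    return payload / (mtu + ETH_OVERHEAD)

print(round(tcp_efficiency(1500), 3))  # 0.962
print(round(tcp_efficiency(9000), 3))  # 0.994
```

The per-frame gain looks small, but the bigger win is that hosts and switches process six times fewer frames for the same amount of data.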

One thing that is missing is EdgeSight monitoring from the NetScaler, which looks like it has been removed for good. One thing I didn’t find in the beta, but which is mentioned in the video, is support for Oracle (most likely coming in a later beta). So this is just my findings in the latest beta; I’ll update when the next beta arrives. Looks like we have much to look forward to!

Dell and Nutanix, the biggest news since sliced bread

Well, the title might be a bit misleading, but it’s true! For those who haven’t seen the news yet, you can read about it here –> http://www.dell.com/learn/us/en/uscorp1/press-releases/2014-06-24-dell-software-defined-storage-portfolio

Now, a little background story: Nutanix has been in the market with its hardware-based appliances for a year or two now. The appliances run the Nutanix OS, which is where the magic happens. I’ve talked to some customers who swear by hardware from the regular hardware providers, since it is familiar and they know how to handle warranty/support/upgrades and so on, and therefore Nutanix might not be a good match for them.

As of today, Dell and Nutanix have announced a strategic partnership which will allow Dell to ship their Dell XC Series of Web-scale Converged Appliances, which combine compute, storage and networking with the Nutanix software on top, allowing for the best of both worlds.

For those that have read the recently published Gartner report “Magic Quadrant for Integrated Systems”: both Dell and Nutanix are listed in the quadrant, and with this new partnership they will cover each other’s “weak” spots and both come out stronger.

So make sure to follow this space going forward!

Azure Active Directory Premium preview

So as of today, Azure Active Directory Premium is available as a trial for all users. For those that aren’t aware of what Azure Active Directory Premium is: in short, it is Identity and Access Management for the cloud, an extension of the previous feature set, which includes:

* custom domains

* users and groups

* directory integration with local Active Directory

* MFA (which I have blogged about previously http://bit.ly/1lkQ0NO)

The premium part allows for single sign-on and multi-factor authentication to any cloud application.

Azure Active Directory Premium is a paid offering of Azure AD and includes the following features:

  • Company branding – To make the end user experience even better, you can add your company logo and color schemes to your organization’s Sign In and Access Panel pages. Once you’ve added your logo, you also have the option to add localized versions of the logo for different languages and locales. For more information, see Add company branding to your Sign In and Access Panel pages.
  • Group-based application access – Use groups to provision users and assign user access in bulk to over 1800 SaaS applications. These groups can either be created solely in the cloud or you can leverage existing groups that have been synced in from your on-premises Active Directory. For more information, see Assign access for a group to a SaaS application.
  • Self-service password reset – Azure has always provided self-service password reset for directory administrators. With Azure AD Premium, you can now further reduce helpdesk calls whenever your users forget their password by giving all users in your directory the capability to reset their password using the same sign in experience they have for Office 365. For more information, see Self-service password reset for users.
  • Self-service group management – Azure AD Premium simplifies day-to-day administration of groups by enabling users to create groups, request access to other groups, delegate group ownership so others can approve requests and maintain their group’s memberships. For more information, see Self-service group management for users.
  • Advanced security reports and alerts – Monitor and protect access to your cloud applications by viewing detailed logs showing more advanced anomalies and inconsistent access pattern reports. Advanced reports are machine learning-based and can help you gain new insights to improve access security and respond to potential threats. For more information, see View your access and usage reports.
  • Multi-Factor Authentication – Multi-Factor Authentication is now included with Premium and can help you to secure access to on-premises applications (VPN, RADIUS, etc.), Azure, Microsoft Online Services like Office 365 and Dynamics CRM Online, and over 1200 Non-MS Cloud services preintegrated with Azure AD. Simply enable Multi-Factor Authentication for Azure AD identities, and users will be prompted to set up additional verification the next time they sign in. For more information, see Adding Multi-Factor Authentication to Azure Active Directory.
  • Forefront Identity Manager (FIM) – Premium comes with the option to grant rights to use a FIM server (and CALs) in your on-premises network to support any combination of Hybrid Identity solutions. This is a great option if you have a variation of on-premises directories and databases that you want to sync directly to Azure AD. There is no limit on the number of FIM servers you can use, however, FIM CALs are granted based on the allocation of an Azure AD premium user license. For more information, see Deploy FIM 2010 R2.
  • Enterprise SLA of 99.9% – We guarantee at least 99.9% availability of the Azure Active Directory Premium service. For more information, see Active Directory Premium SLA
  • More features coming soon – The following premium features are currently in public preview and will be added soon:
    • Password reset with write-back to on-premises directories
    • Azure AD Sync bi-directional synchronization
    • Azure AD Application Proxy

Now, in order to activate Premium in your Azure account you need to have an existing directory in place; then you can go into the directory and create a premium trial.

image

Then you have to activate the trial.

image

After Premium is enabled you have to assign licenses to the users who will use the features. In the trial we are given 100 licenses to use.

image

Note that we now have additional panes here as well, which we can use to configure the single sign-on experience. In an ideal scenario we would have an Active Directory catalog synced and a verified public domain, but I’m in vacation mode, so I’m going to show how to use a cloud-only user and set up SSO to different cloud applications.

If we go into Users we can see all the users located in the cloud directory, whether they are synced from a local AD or are Microsoft accounts.

image

So we have some users in place. If we go into the Configure pane we have the option to customize the access page users go to for SSO to web applications. We also have the option to enable users to do password reset (NOTE: this requires that users have either a phone number or an alternative email address defined); this can also be combined with password write-back to on-premises AD. http://msdn.microsoft.com/en-us/library/azure/dn688249.aspx

Now we want to add some SaaS applications for the test; go into Applications and choose Add.
There are three ways to add an application: add a regular web application or a native client application; choose an application from the gallery (which at the moment consists of over 1000 different SaaS applications); or publish an internal application outside of our network (this uses Microsoft Azure AD Application Proxy).

image

So in our case we are going to choose an application from the gallery. I have already added some applications to the list here, and some applications have different capabilities than others. For instance, the Salesforce application has the capability to provision users automatically after a dirsync, while Twitter and Yammer do not.

image 

There are also two types of SSO for each application: we can either use ADFS (federation-based SSO) or password-based SSO.

Important to note that password-based SSO works like this: when a user clicks on an application in the access portal, a browser plug-in populates the username and password fields of the application on entry. It also has some requirements.

Configuring password-based single sign-on enables the users in your organization to be automatically signed in to a third-party SaaS application by Windows Azure AD using the user account information from the third-party SaaS application. When you enable this feature, Windows Azure AD collects and securely stores the user account information and the related password.

Password-based SSO relies on a browser extension to securely retrieve the application and user specific information from Windows Azure AD and apply it to the service. Most third-party SaaS applications that are supported by Windows Azure AD support this feature.

For password-based SSO, the end user’s browsers can be:

  • IE 8, IE9 and IE10 on Windows 7 or later
  • Chrome on Windows 7 or later or MacOS X or later

Now, if I go back to the application list and click on an application, I usually have two options: defining SSO options, and choosing who has access.

image

NOTE: for Salesforce I also have the ability to configure automatic user provisioning.

image

Now go into Assign Users and choose a user in the directory. When using password-based SSO you get the option of entering the credentials on behalf of the users (they are also able to enter this information themselves in the access portal).

image

After this is done and you have assigned users to the different applications, they can open the access portal (which can be found here –> http://myapps.microsoft.com ). After I log in here with my username I am able to SSO into the applications I click on from the portal (NOTE: this requires a browser plug-in to be installed). Microsoft has also created a wiki containing best practices for accessing SSO applications.

image

And voila, I have my own little password manager. From a user perspective I have the option to change credentials from this portal, and I can also change the password for my main user (which is an Outlook user in this scenario). This is a huge step in how to manage access for users and applications, with a little touch of the cloud.
