Earlier this week, Microsoft released a preview of the Azure DNS service (finally!), which allows us to manage DNS zones from within Azure, something that their competitor Amazon has offered for quite some time. Since this is in preview, it can only be managed from PowerShell. After speaking with the PM for the product, I also heard that some of the capabilities that will come are:
- Integration with Traffic Manager (Think of the GSLB capabilities!)
- DNSSEC support
- Management via the Azure portal
- Merging with the Office 365 capabilities as well (since you can add your own domain there)
Now everyone can sign up for the preview via http://azure.microsoft.com/en-us/services/dns/
It is important to remember that using this service means the Azure name servers become authoritative for your domain. But before we do that, we need to register our domain at a registrar and then delegate the NS records to Azure (I'll show you how to do that later).
To get started with Azure DNS, you need to be using the Resource Manager cmdlets:
Switch-AzureMode -Name AzureResourceManager
Now that we have done this, we have access to the DNS cmdlets. The DNS service requires a resource group, so we need to create one first:
New-AzureResourceGroup -Name Something -Location "West US"
(The location might be "West US", for instance.)
Then we have to register the network provider by running the command Register-AzureProvider -ProviderNamespace Microsoft.Network
Next we register the DNS preview feature: Register-AzureProviderFeature -ProviderNamespace Microsoft.Network -FeatureName azurednspreview
Now that we have registered, we can create a DNS zone within the resource group by running:
New-AzureDnsZone -Name nameofdomain.com -ResourceGroupName something
If we now get information about the zone, we also get the name server information we need to be able to move the NS records at our registrar. By default, creating a zone always creates the SOA and NS records. Next, we add a record to the zone:
Get-AzureDnsRecordSet -ZoneName domainname -Name www -RecordType A -ResourceGroupName myazureresourcegroup | Add-AzureDnsRecordConfig -Ipv4Address "18.104.22.168" | Set-AzureDnsRecordSet
I can now see that my A record has been added to my zone.
Since I haven't moved my DNS zone yet, I can only verify this by doing an nslookup directly against the Azure DNS name servers. And we are good to go!
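Putting the steps above together, here is a rough end-to-end sketch. The zone, resource group names and IP are placeholders, and since the module is in preview the exact parameter names may differ slightly from what I show here:

```powershell
# Switch to the Resource Manager cmdlets (Azure PowerShell 0.9.x era)
Switch-AzureMode -Name AzureResourceManager

# Resource group to hold the DNS zone
New-AzureResourceGroup -Name "DnsDemo" -Location "West US"

# Register the network provider and the DNS preview feature
Register-AzureProvider -ProviderNamespace Microsoft.Network
Register-AzureProviderFeature -ProviderNamespace Microsoft.Network -FeatureName azurednspreview

# Create the zone (this also creates the SOA and NS records)
New-AzureDnsZone -Name "nameofdomain.com" -ResourceGroupName "DnsDemo"

# Create a record set, add an A record to it and commit the change
$rs = New-AzureDnsRecordSet -Name "www" -RecordType A -ZoneName "nameofdomain.com" `
        -ResourceGroupName "DnsDemo" -Ttl 300
Add-AzureDnsRecordConfig -RecordSet $rs -Ipv4Address "18.104.22.168"
Set-AzureDnsRecordSet -RecordSet $rs

# List the NS records so we can delegate the domain at the registrar
Get-AzureDnsRecordSet -ZoneName "nameofdomain.com" -ResourceGroupName "DnsDemo" -RecordType NS
```

Before actually delegating at the registrar, you can point nslookup directly at one of the listed Azure name servers to confirm the record resolves.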
So this is my recap of what happened at Ignite, sorted by subject of course, but the focus and strategy at Microsoft is clear: "MOVE TO OUR CLOUD". Of course, they did not leave out the guys on the floor either.
Microsoft announced numerous changes to their Azure platform, including more of an architectural change to their IaaS platform (which is about time). So, to sum up the Azure changes happening over the last two weeks:
- User defined routes (which finally allow us to define a routing table for each subnet)
- Reserved IP addresses (Allow us to move reserved IP addresses between services now!)
- Instance level public IP
- Multiple VIPs per Cloud Service
- Azure DNS (which allows us to manage our DNS zones from Azure, and which will eventually support DNSSEC and integrate with Traffic Manager)
- Networking support for resource manager
- Bring in BGP routes if you are using ExpressRoute
- 16 vNICs per virtual machine
- Azure Automation with support for Graphical Authoring and integration with on-premises
- Azure Resource Manager, which will allow us to build entire services based upon JSON files; this will also play a huge role in Azure Stack
- IP forwarding on virtual appliances
- Announced a bunch of different virtual appliance partners which will arrive in the marketplace soon (for instance Citrix NetScaler, Check Point and so on)
- Role Based Access
- Exchange supported on Premium Storage in Azure
So as you can see there is much happening on Azure, specifically on networking, which has been lacking for quite some time. So what about Office 365 and EMS?
- Sway (Will be available to all later this month)
- New Office2016 Public Preview
- Skype for Business Broadcast meetings
- Announced one sync client for OneDrive
- Offline files in the OneDrive mobile apps for iOS and Android
- Save to OneDrive from OWA
- The 20,000 file limit and 10 GB max file size will be gone
- You can see more about the OneDrive roadmap here: http://www.zdnet.com/article/microsoft-fills-in-onedrive-roadmap-dates-details/
- Intune announced support for Mac OS X
- Intune app wrapping for Android
- Support for the Apple Volume Purchase Program
- Support for MAM in Outlook app
- Restrict Access to Outlook based upon compliance of device
- Windows 10 support for Intune
- Document Tracking with Azure RMS
- Cloud App Discovery GA
- Privileged Identity Management
- Also heard that eventually Intune will merge into Azure Active Directory
Other than this news, Microsoft also announced a new bundle called OMS (Operations Management Suite), which consists of:
- Azure Automation
- Azure Backup
- Azure Site Recovery
- Azure Operational Insights (which will later get support for components like network logging, syslog tracking and CMDB options)
This suite can be tried now! Microsoft also announced that they will open up for partners to add their own intelligence packs for their own monitoring solutions, which means more data moving to the cloud.
So what did Microsoft announce for the guys on the floor? Well, a lot! For instance, a lot of new capabilities in Server 2016:
- Microsoft Advanced Threat Analytics (currently in preview), a combination of network and log based monitoring able to detect attacks like Pass-the-Hash, accounts that have been compromised, and so on. This will become more advanced, with capabilities like network monitoring and the ability to take action if there is an attack.
- PowerShell DSC support for Linux (Which just came out of nowhere!)
- Nano Server (a new flavor of Windows Server, designed for delivering the next generation of cloud services with a very low footprint in terms of RAM, disk and CPU, where Microsoft stripped most of the traditional components away. I'll be writing more about Nano Server, but in essence it now looks more like ESXi.)
- Containers, Containers, Containers! (Also something I will be writing more about)
- Storage Spaces Direct (shared-nothing file cluster, which can also be combined with Hyper-V to deliver HCI)
- Storage Replica, which is not like DFS-R: it allows us to replicate any volume asynchronously or synchronously
- Storage QoS on a scale out file server
- Windows Defender now installed and enabled by default (even in Nano)
- Rolling Cluster Upgrades
- RDS support for OpenGL 4.4 and OpenCL 1.1, plus support for Gen 2 VMs and RemoteFX
- Web Application Proxy, preauth for HTTP Basic, HTTP to HTTPS redirect
- Windows Server 2016 will support VXLAN
- Software loadbalancing capabilities
- Production Checkpoints and integration with VSS
- Linux SecureBoot
- Connected Standby
- Hyper-V Manager and alternate credentials
- ReFS more used in centralized SOFS
- Binary virtual machine configuration VMCX
- Hot Add and remove of memory and network adapters
- SMB 3.1.1 (pre-authentication integrity check, encryption improvements)
- The Network Controller which will allow central management of virtual and physical network devices
- Shielded VMs and Host Guardian Service
- JEA (Just Enough Administration)
- Converged NIC across tenant and RDMA traffic
- Server-side support for HTTP/2, including header compression and connection multiplexing, on IIS
- Online resizing support for shared VHDX
- PowerShell Direct to a virtual machine.
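As a small taste of one item on that list, PowerShell Direct lets you open a session to a VM straight from the Hyper-V host, even when the guest has no network connectivity. A minimal sketch (the VM name and credentials are placeholders):

```powershell
# Run on a Server 2016 Hyper-V host; works even if the guest has no network configured
$cred = Get-Credential   # local administrator credentials inside the guest

# Interactive session into the VM, over the VMBus rather than the network
Enter-PSSession -VMName "VM01" -Credential $cred

# Or run a one-off command non-interactively
Invoke-Command -VMName "VM01" -Credential $cred -ScriptBlock { Get-Service WinRM }
```

Note the -VMName parameter: the same cmdlets you already use for remoting, just targeted at a VM name instead of a computer name.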
Now with all these capabilities in place in the fabric, only one thing is missing, which is something they announced in the keynote: Azure Stack. Now Microsoft means business; they are moving in and competing with the likes of OpenStack, CloudPlatform and so on. Many wondered if this was the new version of Azure Pack (and it is! It's the evolution of Azure Pack). Microsoft will continue to support Azure Pack for a while, but the main development will go into Azure Stack. Unlike Azure Pack, Stack is not so deeply dependent on System Center. Of course, you could still use System Center to manage the infrastructure, but the fabric connection from the Azure Stack providers would be directly against Hyper-V hosts or clusters.
Azure Stack will consist of an Azure-like fabric controller and will also have the option to communicate with the Network Controller to manage the physical and virtual network layers. Stack will also look and feel like the new portal which is currently in preview, and will come with a set of different providers to deliver specific services.
With the support for VXLAN in the fabric, and some support for VMware with DPM, maybe Microsoft is moving towards supporting VMware with Azure Stack as well?
Time will tell, and stay tuned for more.
Even though the wireless isn't completely reliable, I will try to maintain the flow as much as I can, even though posts might get published later. (I have to be honest, the WiFi is horrible; they haven't planned it properly (Cisco based)…)
The keynote hall opened around 8 AM, and on the stage Microsoft even had an in-house DJ playing, @joeysnow.
The keynote starts at 9 AM, where a lot of new stuff is expected to be released. Some of the news will just be a recap of what happened at @MSbuild, plus some other things.
Just got confirmation that there are 23,000 attendees present at MS Ignite, and they are live streaming all of the sessions! (The keynote hall has 15,000 seats.)
First announcement from Satya:
Windows Update for Business, which he didn't say much about. (TechNet blog on it here: http://t.co/daQ6lLBng4 )
Office 2016 new public preview: http://blogs.office.com/2015/05/04/office-2016-public-preview-now-available/
Skype for Business broadcasting
Office Delve Organizational analytics.
Windows Server and System Center 2016 https://technet.microsoft.com/en-us/subscriptions/downloads/?FileId=63651&utm_source=dlvr.it&utm_medium=twitter
What's new in System Center Configuration Manager: https://technet.microsoft.com/library/dn965439.aspx (on-prem MDM, yay!)
SQL Server 2016 (preview later today), with the ability to stretch tables to Azure: http://blogs.technet.com/b/dataplatforminsider/archive/2015/05/04/sql-server-2016-public-preview-coming-this-summer.aspx
Azure Stack (A new release of Azure Pack) http://blogs.technet.com/b/server-cloud/archive/2015/05/04/microsoft-brings-azure-to-the-datacenter-for-the-next-generation-of-hybrid-cloud.aspx Public Preview coming this summer.
Operations Management Suite (one consistent IT control plane, along the same lines as Azure EMS): http://www.microsoft.com/en-us/server-cloud/operations-management-suite/default.aspx?WT.mc_id=Blog_ServerCloud_Announce_TTD
Advanced Threat Analytics (Microsoft entering the security field again), which is going to integrate with AD to see authentication logs (guessing it's going to be like Audit Collection Services in System Center). More info about the EMS part here: http://blogs.technet.com/b/enterprisemobility/archive/2015/05/04/ignite-microsofts-next-chapter-in-enterprise-mobility.aspx
Windows 10 and Device Guard, which is a more advanced and better integrated AppLocker.
Outlook is MAM enabled, and MAM-enabled Skype for Business is coming in Q3.
Data leakage protection in Windows 10 with integrated file encryption.
Document tracking site for Azure RMS, which gives us the ability to see who has opened specific documents.
Azure AD leaked-credential detection rolling out over the next couple of weeks, along with the on-premises part, which I will be trying out later today.
Microsoft also announced Azure DNS http://azure.microsoft.com/en-in/services/dns/
So, a lot of stuff was announced today; looking forward to trying it out.
The big day is here, which I know many have been waiting for. Since the release of VMware ESXi 6, many have been waiting for a Veeam update so they can upgrade.
The update is now available and can be downloaded here: http://www.veeam.com/blog/veeam-availability-suite-update-2-vsphere-6-support-endpoint-and-more.html
This update also includes integration with Endpoint Backup and support for a bunch of the vSphere 6 features, such as:
- Support for VMware Virtual Volumes (VVols) and VMware Virtual SAN 2.0
- Storage Policy-Based Management (SPBM) policy backup and restore
- Backup and replication of Fault Tolerant (FT) VMs
- vSphere 6 tags integration
- Cross-vCenter vMotion awareness
- Quick Migration to VVols
- Hot-Add transport mode of SATA virtual disks
And now we can monitor VVols with Veeam ONE as well. I have more Veeam news on the horizon; stay tuned!
So recently I purchased the Razer Seiren for my home environment, to be used for e-learning purposes. But one problem was that it was not working on the Windows 10 Technical Preview; if I looked in Device Manager I saw it as an unknown device.
Razer Synapse didn't even try to install the device. But you can find the device drivers under the folder C:\Program Files (x86)\Razer, and from there you can install the drivers from the 8.1 folder.
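If you would rather script it than click through Device Manager, pnputil can install INF files directly. A sketch, where the subfolder under C:\Program Files (x86)\Razer is a hypothetical example; the actual folder name depends on your device and driver version:

```powershell
# Stage and install all INF files from the 8.1 driver folder (run elevated).
# NOTE: "Seiren\Driver\8.1" is a placeholder path; check your own Razer folder layout.
pnputil -i -a "C:\Program Files (x86)\Razer\Seiren\Driver\8.1\*.inf"
```

After that, the device should show up properly in Device Manager instead of as an unknown device.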
Veeam Endpoint Backup FREE, which allows for backup of physical computers and physical servers, and which I have blogged about earlier (https://msandbu.wordpress.com/2015/03/06/veeam-endpoint-backup-and-integration-with-veeam-br/), has now become generally available from Veeam!
It can also be integrated with existing Veeam repositories, which enables us to do physical backups from Endpoint Backup to a Veeam infrastructure!
This post is based upon a session I held for a partner in Norway: how can we use NetScaler to optimize web content?
Let's face it, the trends are changing:
* Users are becoming less patient, meaning that they demand that applications/services respond quicker (more than 40% of users drop out if a website takes more than 5 to 10 seconds to load). Think about how that can affect a web shop or eCommerce site.
* More and more mobile traffic (mobile phones, iPads, laptops, communicating over 3G/4G or WLAN), and on top of that there is more data being sent across the network as well, since web applications become more and more complex, with more code and more components.
* More demands on availability. Users demand that services are available at almost every hour. If we think back 5 to 10 years, if something was down for 10 minutes we didn't think that much about it; but now?
* More demands for secure communication. It wasn't that long ago that Facebook and Google switched to SSL by default in their services. With more and more hacking attempts happening online, a certain level of security is required.
So what can NetScaler do in this equation?
* Optimizing content with front-end optimization, caching and compression
With the latest 10.5 release, Citrix has made a good jump into web content optimization, with features like lazy loading of images, HTML comment removal, and minifying JS and inline CSS. On top of that, after the content has been optimized it can be compressed using GZIP or DEFLATE and sent across the wire. (Note that most web servers, like Apache and IIS, support GZIP and DEFLATE, but it is much more efficient to do this on a dedicated ADC.)
And by using caching to store frequently accessed data, the NetScaler makes a good web optimization platform.
* Optimizing based upon endpoints.
With the current trend of more users connecting from mobile devices over wireless internet connections, we need a better way to communicate as well. A good example here is TCP congestion: on wireless you have a higher amount of packet loss, and this calls for using, for instance, TCP Westwood congestion control, which is much better suited to wireless connections. Also, features like MPTCP (on supported devices) allow for higher throughput. And the fact that we can apply different TCP settings to different services makes it much more agile.
* High availability
Using features like load balancing and GSLB allows us to deliver a highly available and scalable solution. And using features like AppQoE to prioritize traffic might be a valuable asset in an eCommerce setting. Think of the scenario where we have a web shop: most of our buying customers come from a regular PC, while most of the mobile users connecting are mostly checking the latest offers. If we were ever to reach our traffic peak, it is useful to prioritize traffic based upon the connecting endpoint.
* Secure content
With NetScaler we can create specific SSL profiles which we can attach to different services. For instance, older applications which are used by everyone might not have high security requirements, while on the other hand PCI-DSS requires a high level of security. Add to the mix that we can handle many common DDoS attacks at the TCP and HTTP levels. We can also use Application Firewall, which handles many application-based attacks; with its learning feature it can block users who do not follow the common user pattern on a website, and we can specify URLs which users are not allowed to access.
So to summarize, the NetScaler can be a good component for optimizing and securing traffic, with a lot of exciting stuff happening in the next year! Stay tuned.
So this is part two of my securing XenApp environment series; this time I've moved my focus to StoreFront. Now, how does StoreFront need to be secured?
In most cases, StoreFront is the aggregator that allows clients to connect to a Citrix infrastructure. In most cases StoreFront is located on the internal network and the NetScaler is placed in the DMZ. Even if StoreFront is located on the internal network and the firewall and NetScaler do a lot of the security work, there are still things that need to be taken care of on the StoreFront server.
In many cases users also connect to StoreFront directly when they are on the internal network, bypassing the NetScaler entirely. And since StoreFront runs on a Windows Server, there are a lot of things to think about.
So where to begin.
1: Set up a base URL with an HTTPS certificate. If you are using an internally signed certificate, make sure that you have a properly set up root CA, which in most cases should be offline; or use a certificate from a public third-party CA, which in many cases is preferable, because if users connect directly to StoreFront their computers might not recognize the internally signed CA.
2: Remove the HTTP binding on the IIS site, to avoid plain HTTP requests.
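Removing the HTTP binding can be scripted with the built-in WebAdministration module; a quick sketch, assuming StoreFront sits on the default site (adjust the site name if yours differs):

```powershell
# Run elevated on the StoreFront/IIS server
Import-Module WebAdministration

# Remove the plain HTTP binding from the site (assumes the default site name)
Remove-WebBinding -Name "Default Web Site" -Protocol http -Port 80

# Verify that only the HTTPS binding remains
Get-WebBinding -Name "Default Web Site"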
Use a tool like IIS Crypto to disable the use of older SSL protocols and older RC4 ciphers on the IIS server.
You can also define ICA file signing. This allows Citrix Receiver clients which support signed ICA files to verify that the ICA files they get come from a verified source: http://support.citrix.com/proddocs/topic/dws-storefront-25/dws-configure-conf-ica.html
3: We can also set things up so that Citrix Receiver is unable to cache passwords; this can be done by changing authenticate.aspx under C:\inetpub\wwwroot\Citrix\Authentication\Views\ExplicitForms\
and you change the following parameter
4: Force ICA connections to go through NetScaler using the Optimal Gateway feature of StoreFront: http://support.citrix.com/article/CTX200129. Using this option will also allow you to use Insight to monitor client connections to Citrix and, depending on the NetScaler version, give you some historical data.
And by using Windows pass-through you can have Kerberos authentication to StoreFront and then have the ICA sessions go through the NetScaler: http://support.citrix.com/article/CTX133982
5: Use SSL in communication with the delivery controllers –> http://support.citrix.com/proddocs/topic/xendesktop-7/cds-mng-cntrlr-ssl.html
6: Install Dynamic IP Restrictions on the IIS server; this stops denial-of-service attempts against StoreFront from a single IP address.
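On Server 2012/2012 R2, Dynamic IP Restrictions is part of the "IP and Domain Restrictions" role service and can be installed and switched on from PowerShell. A sketch, assuming the default site name; the thresholds are left at their defaults here and should be tuned for your environment:

```powershell
# Install the IP and Domain Restrictions role service (includes dynamic IP security)
Install-WindowsFeature Web-IP-Security

# Deny clients that open too many concurrent requests or request too fast
Set-WebConfigurationProperty -Filter 'system.webServer/security/dynamicIpSecurity/denyByConcurrentRequests' `
    -Name enabled -Value $true -PSPath 'IIS:\' -Location 'Default Web Site'
Set-WebConfigurationProperty -Filter 'system.webServer/security/dynamicIpSecurity/denyByRequestRate' `
    -Name enabled -Value $true -PSPath 'IIS:\' -Location 'Default Web Site'
```

This keeps the rest of the IIS configuration untouched and can be rolled into the same hardening script as the binding change above.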
7: Keep Windows updated and keep antivirus software running, with limited access to the server. Also let the Windows Firewall keep running and only open the necessary ports to allow communication with AD, the Delivery Controllers and the NetScaler.
8: Define audit policies to log credential validation, Remote Desktop connections, terminal logons and so on: https://technet.microsoft.com/en-us/library/dn319056.aspx
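Those audit policies can also be switched on from an elevated prompt with auditpol; for example (the subcategory names shown are the English ones):

```powershell
# Enable success/failure auditing for the subcategories mentioned above
auditpol /set /subcategory:"Credential Validation" /success:enable /failure:enable
auditpol /set /subcategory:"Logon" /success:enable /failure:enable
auditpol /set /subcategory:"Other Logon/Logoff Events" /success:enable /failure:enable

# Review the effective audit policy
auditpol /get /category:"Logon/Logoff"
```

The resulting events land in the Security event log, where your monitoring tool can pick them up.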
9: Use the StoreFront web config GUI from Citrix to define lockout and session timeout values.
10: Use a tool like Operations Manager with, for instance, the ComTrade management pack to monitor the StoreFront instances, or just the IIS management pack; this gives some good insight into how the IIS server is operating.
11: Make sure that full logging is enabled on the IIS server site.
Stay tuned for more; the next part covers the Delivery Controllers and the VDA agents.
With NetScaler 10.5, Citrix announced support for SAML Identity Provider on the NetScaler. That basically meant that we could, in theory, use the NetScaler as an identity provider for Office 365 / Azure AD. I had been trying to reverse engineer the setup, since Citrix hadn't created any documentation for it.
But now Citrix has published the NetScaler iDP setup for Office 365: http://support.citrix.com/article/CTX200818
In other news, Citrix also released a new build of NetScaler VPX (build 56.12) which fixes the CPU utilization bug on VMware; you can see more in the release notes here: http://support.citrix.com/article/CTX200818
And there is also a new PCI DSS report which shows compliance for version 3.
So a couple of days ago, Microsoft announced the preview of Site Recovery for physical and VMware servers. Luckily, I was able to get access to the preview pretty early. For those who don't know, this Site Recovery feature is built upon the InMage Scout suite that Microsoft purchased a while back. About six months ago Microsoft announced the Migration Accelerator, which was the first Microsoft branding of InMage, but now they have built it into the Azure portal; the architecture is still the same. So this blog will explain how the different components operate, how it all works and how to set it up.
There are three different components in an on-premises to Azure replication of virtual machines:
* Configuration Server (which in this case is an Azure VM used for centralized management)
* Master Target (used as a repository and for retention; receives the replicated data)
* Process Server (this is the on-premises server which actually does the data moving; it caches, compresses and encrypts the data using a passphrase we create, and moves it to the Master Target, which in turn writes it to Azure)
When connecting this to an on-premises site, the Process Server will push-install the InMage agent on every virtual machine that we want to protect. The agent will then do a VSS snapshot and move the data to the Process Server, which in turn replicates the data to the Master Target.
So, when you get access to the preview, create a new Site Recovery vault.
In the dashboard you now have the option to choose replication between an on-premises site, with VMware or physical computers, and Azure.
First we have to deploy the Configuration Server, which is the management plane in Azure. If we click Deploy Configuration Server, a wizard appears which uses a custom image to deploy the Configuration Server.
This will automatically create an A3 instance running a custom image (note it might take some time before it appears in the virtual machines pane in Azure). You can check the status in the jobs pane of the recovery vault.
When it is done, you can go into the virtual machines pane and connect to the Configuration Server using RDP. Once in the virtual machine, run the setup which is located on the desktop.
Setting up the Configuration Server component requires the vault registration key (which is downloadable from the Site Recovery dashboard).
Note: when the Configuration Server component has finished installing, it will present you with a passphrase. COPY IT! You will use it to connect the other components.
When this is done, the server should appear in Site Recovery under Servers as a Configuration Server.
Next we need to deploy a Master Target server. This will also be deployed in Azure, as an A4 machine with a lot of disk capacity.
(The virtual machine will have an R: drive, about 1 TB in size, where it stores retention data.)
The same goes here: it will generate a virtual machine which will eventually appear in the virtual machines pane in Azure. When it is done, connect to it using RDP; it will start a pre-setup which generates a certificate that allows the Process Server to connect to it using HTTPS.
When running the wizard, it will ask for the IP address (internal, on the same vNet) of the Configuration Server and the passphrase. In my case I had the Configuration Server on 10.0.0.10 and the Master Target on 10.0.0.15. After the Master Target is deployed, take note of the VIP and the endpoints attached to it.
Now that we are done with the Azure part, we need to install a Process Server. Download the bits from the Azure dashboard and install it on a Windows Server which has access to vCenter.
Enter the VIP of the cloud service and don't change the port. We also need to enter the passphrase which was generated on the Configuration Server.
After the installation is complete, it will ask you to download the VMware vSphere CLI binaries from VMware.
This is for 5.1 (but I tested it against a vSphere 5.5 vCenter and it worked fine); the only thing it uses the CLI binaries for is to discover virtual machines in vCenter. The rest of the job is done by the agents on the virtual machines.
Now that we are done with the separate components, they should appear in the Azure portal. Go into the recovery vault, then Servers, then click on the Configuration Server and open its properties.
Next we need to add a vCenter server from the server dashboard.
Add the credentials and IP address, and choose which Process Server is to be used to connect to the on-premises vCenter server.
After that is done and the vCenter appears under Servers as connected, you can create a protection group (to which we then add virtual machines).
Specify the thresholds and retention time for the virtual machines that are going to be in the protection group.
Next we need to add virtual machines to the group.
Then choose from vCenter which virtual machines you want to protect.
Then you need to specify which resources are going to be used to replicate the target VM to Azure.
And of course administrator credentials to remote push the InMage mobility agent to the VM
After that the replication will begin
And you can see on the virtual machine that the InMage agent is being installed.
And note that the replication might take some time depending on the type of bandwidth available.