So you are trying to load balance a feature which requires clients to be redirected to the same backend host across multiple protocols. For instance, an ecommerce site might allow you to add items to the shopping cart over HTTP, and when you want to sign in to purchase you have to switch over to HTTPS. During this process you want persistence maintained, since the session data is stored locally on the web server.
Another example is RDP. Even though RDP works fine with just TCP 3389, it also uses UDP 3389 for delivering bitmap transports. Vmware View uses a similar TCP/UDP combination with PCoIP. To accommodate this we have Persistency Groups in Netscaler, since by default we can only load balance a single port at a time.
Now in this scenario we have two load-balanced vServers: one server which responds on port 80 and another on port 8080 (each responding on its own VIP), but underneath these are services hosted on the same server.
After I’ve created these I have to set up a persistency group, which is under the same load balancing tab. I give it a name, choose a persistency type (there are only two options here, either source IP or cookie insert) and then choose which vServers are to be placed in the same group.
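Conceptually, what a persistency group does can be sketched in a few lines: every vServer in the group uses the same client-to-backend mapping, so a client that hits port 80 and then port 8080 lands on the same server. This is only an illustrative sketch of source-IP persistence (the backend IPs are made up), not Netscaler's actual implementation:

```python
import hashlib

# Hypothetical backends shared by both vServers (port 80 and 8080)
BACKENDS = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]

def pick_backend(source_ip: str) -> str:
    """Source-IP persistence: hash the client IP so every vServer
    in the group maps the same client to the same backend."""
    digest = hashlib.sha256(source_ip.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

# Both vServers call the same function, so the client keeps its
# backend when it switches from one port/protocol to the other.
print(pick_backend("192.0.2.50") == pick_backend("192.0.2.50"))  # → True
```

The point is that the mapping lives at the group level rather than per vServer, which is exactly what the persistency group gives you.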
Now there aren’t many ways to show whether a Persistency Group is actually working, but if you go to the traffic management pane and click on “virtual server persistence sessions” it will show which sessions are attached to the persistency group.
ecommerce here is not a virtual server but represents the persistency group I created earlier.
Microsoft released Azure RemoteApp in December last year, and there has been a lot of speculation about what Azure RemoteApp actually is. It has been in preview for a while and I have been able to test drive it for a long time, so I’m going to tell you what Azure RemoteApp is and what it is not.
First off, Azure RemoteApp is RDS RemoteApp as a service from Azure, meaning that you get access to your applications using RDP (sorry, no full desktop access)…
Most modern platforms have an RDP client which can be used to access Azure RemoteApp. It is simply RDP underneath, but Microsoft added some extra bits to handle Azure AD authentication, among other things.
All users who access applications via Azure RemoteApp are given a 50 GB user profile disk, which MUST be used to store data. Unlike a regular RDS deployment, Azure RemoteApp servers are stateless and might be deleted/removed, for instance during patching. Therefore it is important to use this user profile disk or other storage options like OneDrive/Dropbox, etc.
Now the problem with deployments being stateless is that you cannot set up solutions like ERP/CRM applications which require SQL databases on the backend. Another issue is that you cannot yet integrate an existing IaaS vNet in Azure with Azure RemoteApp; the only way is to set up a S2S VPN between the two deployments.
In order to deploy our custom LOB applications to Azure RemoteApp I would need to create a custom VHD containing 2012 R2 with the RDS session host role installed along with my apps. Then I need to upload this VHD to Azure, and Microsoft will use it as a golden image to provision virtual machines.
So it seems a bit difficult to use Azure RemoteApp for all LOB applications, so what are its use cases? After a lot of talking with partners and other techies I have a couple of pointers:
* Applications which are rarely used (given the pay-as-you-go nature of Azure) and are self-contained (this could save a lot of money)
* Access to Office ProPlus (given that you have customers who have ProPlus licenses in their subscription)
* Web-based applications which require Internet Explorer (there are a lot of Mac users out there who need access to corporate applications that run only on IE)
* Applications where the usage fluctuates (given the scale-up ability of Azure RemoteApp)
What Azure RemoteApp is not so good at:
* Running GPU-enhanced workloads (since Azure RemoteApp only uses TCP you get low performance on GPU workloads)
* When you want stateful RDSH deployments (and other ways to manage profiles)
* Single instances of ERP/CRM systems on an RDSH server (many want this to replace their current server, but this is hard because of the stateless nature of RemoteApp)
* Applications that require a backend database (since Azure RemoteApp does not integrate with a regular IaaS platform in Azure, you need to set up a S2S VPN, which generates a higher bill)
Even though this is a first release, Microsoft has taken a good step in the right direction, but they need to make it easy for admins to add custom images directly from Azure, integrate with existing IaaS in Azure, and of course add simple things like shadowing and controlling policies directly from the management portal.
Over the last couple of months I’ve again been involved with a Netscaler book project with Packt. This is a more advanced book than my previous one, which was more of an introduction to Citrix Netscaler.
This new book is called Mastering Netscaler and contains more in-depth information about load balancing, AppFirewall and related topics.
But… I kinda feel that this book covers only a fragment of what users want to read about when they buy a book about Netscaler.
Therefore, in order to get things right, I am thinking about creating a third book about Netscaler which covers all the subjects and the stuff you actually want to read about. This post is merely for you to give me feedback.
Could you please give me a few sentences about what YOU would want included in a Netscaler book? Please drop a comment below this post.
And if you are willing to help me form the outline, contribute to it, or possibly help me write the book as well, that would be great; just send me an email at firstname.lastname@example.org
The last couple of days have been busy for Vmware, with the release of vSphere 6.0 and its 200+ features/enhancements, in addition to the vGPU announcement, which is a serious improvement to the end-user computing stack in Vmware. But… for my part, with limited hours in the day, I’ve been busy giving vCloud a good run. Coming from Microsoft land and being familiar with Amazon AWS, it’s quite a jump to vCloud Air. Let’s do some initial comparisons first. Both Amazon and Azure come from a PaaS point of view, wrapping a lot of different predefined services and then moving into IaaS. For instance, in Amazon and Azure I am bound to creating a virtual machine instance of a predefined size (of course I can scale out later), but in vCloud I have a set of resources which I can mold into any shape I like: I can define CPU and memory, and hot-add disks, for instance.
Vmware, on the other hand, is fully dedicated to IaaS, and the sole purpose is to deliver virtual infrastructure, either as an extension of your own infrastructure or as a dedicated on-demand cloud. Now I’ve run a lot of different performance benchmarks on Azure, which can be seen here –> https://msandbu.wordpress.com/2015/01/08/azure-g-series-released-and-tested/ https://msandbu.wordpress.com/2014/12/17/windows-azure-and-storage-performance/
Today I decided to dig a bit deeper into vCloud and some of its capabilities. Vmware has a $300 free trial for on-demand services, which can be used by those who don’t want to commit a credit card to give it a test run.
Now after setting up an initial account we have to add a service to it.
The user interface is pretty easy to use. I signed up for a virtual private cloud on-demand, which is one of the “plans” Vmware offers, and therefore I can only create a private cloud in one of these datacenters.
So after choosing a location for the private cloud, it takes some time before all configuration is done, since it is creating a virtual datacenter, gateway and routed network. After this is done I can go ahead and create resources.
What I noticed here is that before I choose hardware for the virtual machine, I can choose different operating systems from a gallery (I can also create a custom image from scratch).
Here I can customize the hardware: how much memory to attach, CPU (there is also a link between CPU and memory), and whether to use SSD-based or regular HDD-based storage. For the purpose of this post I started out with a regular HDD and hot-added an SSD afterwards to show the difference. Note that the provisioning stage here took about 10 minutes! After it was done I was able to log in to my desktop. One thing I like about this is that I can start a console connection to the VM directly from the web console (yes, it requires a plugin installation).
I can also go in and edit the resources directly, and I have the option to manage all the resources in vCloud Director, which vCloud Air runs on top of. I can also create a snapshot directly from the console, which is something I miss in Azure and Amazon. It took some time before the VM could communicate with the outside world, but this was because I needed a public IP to attach to the virtual network; after this was done I could get to my tools.
I ran a quick CPU chart and HD Tach to get some more info about the hardware underneath. My vServer was running on top of an Intel Xeon (Ivy Bridge) CPU,
which was new info to me, but the most interesting part is the disk performance. As I mentioned, I added a regular 40 GB HDD disk (on which the OS is installed) and ran a simple read test (similar to the one on the Azure disks) where I got some interesting results.
This test was consistent across the 5 times I ran it, with about a ±5% difference in results. Then I ran the test on an SSD-based disk, which gave me a lot better performance.
These initial tests were just to give me a simple overview of the performance, but I have to say that based on my initial testing in vCloud Air, Vmware is setting the standard for how an IaaS cloud should perform.
UPDATE 1: I ran a new test using my locally installed SSD drive, a Samsung 840-series, and a test against a VM running in Azure on a Premium Storage data disk of the same size. I set up the Azure premium VM with the largest Premium Storage size, which gives me 5000 IOPS instead of the regular 500 (which is a hard cap).
When running against my local SSD drive I get the performance that is promised: about 520 MB/s read and 420 MB/s write, and a pretty decent amount of IOPS. Next is the Azure Premium Storage data disk.
Here I see a pretty decent amount of read and write throughput, about 200 MB/s, which is the maximum cap (and I get about 5000 IOPS).
vCloud, on the other hand, has no limits on disk IO and therefore no “restraints”; it has better throughput but lower IOPS than my local disk. This might be because of caching, latency or block sizes, which I didn’t take a closer look at.
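To put throughput and IOPS in relation: throughput is simply IOPS multiplied by the average block size, so the two measured numbers imply an average request size. A quick back-of-the-envelope calculation, using the rough figures above as assumptions:

```python
def implied_block_size_kb(throughput_mb_s: float, iops: float) -> float:
    """Throughput = IOPS x block size, so the average block size
    falls out of the two measured numbers."""
    return throughput_mb_s * 1024.0 / iops

# Approximate Azure premium disk figures from the test above:
# ~200 MB/s at ~5000 IOPS implies roughly a 41 KB average request.
print(round(implied_block_size_kb(200, 5000), 1))  # → 41.0
```

This is why a benchmark using large blocks can show high MB/s at modest IOPS, which fits the block-size caveat above.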
Still, I’m guessing that Vmware will at some point have to add some restraints to their cloud as well, since it cannot scale out that much without being able to measure the capabilities of each customer.
Since the release of vWorkspace 8.5 I’ve been wanting to try out the HTML5 connector properly! We have a lab environment where it is deployed, and it works amazingly fast inside the local network.
But… I also want it available from outside our local network, so I decided to publish it using our Netscaler. The HTML5 connector from Dell is like the one in StoreFront: it runs on top of the web access server, and we can use that as a proxy to access applications and desktops.
Initially I wanted to publish the connector using SSL offloading, meaning that users could access the HTML5 connector on an SSL-enabled vServer, the Netscaler would do the SSL processing, and the web access server would get unencrypted traffic on port 80. But… when I got this up and running, all I got was error messages.
I didn’t see a lot of useful info in the logs either that could lead me to the error.
2015-01-20 08:59:45.078 – 844 – RdpProxy – ERROR – Server exception.
System.Net.Sockets.SocketException (0x80004005): An existing connection was forcibly closed by the remote host
at Freezer.Common.Utils.readAll(Socket socket, Byte& data) in d:\Build\349\vWorkspace\Elbling\Sources\SRC\Freezer\IIS\Freezer\Common\Utils.cs:line 121
at Freezer.Common.SocketStateObject.handleSocket(Object o) in d:\Build\349\vWorkspace\Elbling\Sources\SRC\Freezer\IIS\Freezer\Common\RdpServer.cs:line 160
2015-01-20 08:59:45.078 – 4780 – UserStatecbf3bb31-bd6e-7cdf-5e50-f21fccda8e4 – DEBUG –
2015-01-20 08:59:45.078 – 1000 – UserStatecbf3bb31-bd6e-7cdf-5e50-f21fccda8e4 – DEBUG –
2015-01-20 08:59:45.078 – 1692 – UserState – DEBUG – RDP ProcessExited for: [id_1421740273901]
2015-01-20 08:59:45.078 – 1692 – UserState – DEBUG – RDP ProcessExited: Cleaning up for [id_1421740273901]
2015-01-20 09:00:14.828 – 144 – UserStatecbf3bb31-bd6e-7cdf-5e50-f21fccda8e4 – DEBUG – Message received: AS00000704: handle_print_cache( 00DEE778 )
2015-01-20 09:00:14.828 – 144 – UserStatecbf3bb31-bd6e-7cdf-5e50-f21fccda8e4 – DEBUG – 00000704: ignoring an UPDATE PRINTER event
What I did see, on the other hand, was that my browser, which was running the JS client, tried to open a connection directly to port 443:
clientSide: wss://demossoproxy.dsg-iam.com/vWorkspace/Freezer/api/Image?sessionId=id_1421175921207 (wss is an SSL-based websocket connection)
But since my web access server was listening only on port 80, it didn’t work. Therefore I changed the setup a bit: instead of SSL offloading I tried SSL bridging, moving the encryption back to the web access server and just using the Netscaler for SSL multiplexing, which actually worked!
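The browser derives the websocket port from the page URL, which explains why offloading broke: externally the page was HTTPS on 443, so the JS built a wss:// URL and dialed 443, while the backend only listened on 80. A small sketch of that default-port logic (using the URL from the log above):

```python
from urllib.parse import urlsplit

def websocket_port(url: str) -> int:
    """Port a browser dials for a websocket URL: wss:// defaults
    to 443 and ws:// to 80 unless an explicit port is given."""
    parts = urlsplit(url)
    if parts.port is not None:
        return parts.port
    return 443 if parts.scheme == "wss" else 80

# The wss:// URL from the log has no explicit port, so the client
# dials 443 -- but with SSL offloading the web access server only
# listened on port 80, hence the dropped connections.
print(websocket_port("wss://demossoproxy.dsg-iam.com/vWorkspace/Freezer/api/Image"))  # → 443
```

SSL bridging keeps 443 end-to-end, which is consistent with it being the setup that worked.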
I’m guessing that the websocket connection requires the same port externally and internally; I didn’t troubleshoot it any further. So here is a little clip of how fast the HTML5 connector for Dell vWorkspace is.
The independent R&D project ‘Virtual Reality Check’ (VRC) (www.projectvrc.com) was started in early 2009 by Ruben Spruijt (@rspruijt) and Jeroen van de Kamp (@thejeroen) and focuses on research in the desktop and application virtualization market. Several white papers with Login VSI (www.loginvsi.com) test results were published about the performance and best practices of different hypervisors, Microsoft Office versions, application virtualization solutions, Windows Operating Systems in server hosted desktop solutions and the impact of antivirus.
In 2013 and early 2014, Project VRC released the annual ‘State of the VDI and SBC union’ community survey (download for free at http://www.projectvrc.com/white-papers). Over 1300 people participated. The results of this independent and truly unique survey have provided many new insights into the usage of desktop virtualization around the world.
This year Project VRC would like to repeat this survey to see how our industry has changed and to take a look at the future of Virtual Desktop Infrastructures and Server Based Computing in 2015. To do this they need your help again. Everyone who is involved in building or maintaining VDI or SBC environments is invited to participate in this survey, even if you participated in the previous two editions.
The questions in this survey are both functional and technical and range from “What are the most important design goals set for this environment” to “Which storage is used” to “How are the VMs configured”. The 2015 VRC survey will only take 10 minutes of your time.
The success of the survey will be determined by the number of responses, but also by their quality. This led Project VRC to the conclusion that they should stay away from giving away iPads or other prize draws for survey participants. Instead, they opted for the following strategy: only survey participants will receive the exclusive overview report with all results immediately after the survey closes.
The survey closes February 15th this year. I really hope you will participate in and enjoy the official Project VRC “State of the VDI and SBC union 2015” survey!
Visit http://www.projectvrc.com/blog/23-project-vrc-state-of-the-vdi-and-sbc-union-2015-survey to fill out the Project Virtual Reality Check “State of the VDI and SBC Union 2015” survey.
Having heard the buzz about CloudPhysics, I decided to take it for a test drive, since they have a free edition which gives limited options but allows me to see how the software works. CloudPhysics is an almost pure SaaS solution: it requires that we first download a virtual appliance that communicates with vCenter 4.5+, but it reports all the data back to CloudPhysics, which runs all the diagnostics and reporting.
CloudPhysics has features like:
* Capacity planning
* Performance troubleshooting
* Health checks
* Alerting (and so on..)
So how do you get started? Sign up for a free edition here –> http://www.cloudphysics.com/get-cloudphysics/
Then download either the OVA or point the vCenter to fetch the OVF files from the portal.
On the vCenter side, you just have to import the machine and enter network information during setup.
After that you just wait until it has finished installing, then start it and configure the last parts: we need to enter the vCenter information and the user ID that is tied to the CloudPhysics account.
After that is done, it takes about 30 seconds before information is pulled into the CloudPhysics service. CloudPhysics has a concept called “Cards”, which are different reports, features and so on. For instance, one of the cards is “Snapshots gone wild”.
There are a bunch more of these reports as well, but you get the picture.
This is a golden example of how we can use SaaS for reporting and monitoring purposes. CloudPhysics also has cost calculators for Amazon, Azure and Vmware vCloud Air which allow us to see how much it would cost to move our VMs to one of these providers, but this is only available for premium customers.
Today Microsoft released their G-series instances in Azure. This new instance uses a newer Intel Xeon-based CPU and also comes with a local SSD disk.
“G-Series VM Sizes availability
Today, we’re announcing the General Availability release of a new series of VM sizes for Azure Virtual Machines called the G-series. G-series sizes provide the most memory, the highest processing power and the largest amount of local SSD of any Virtual Machine size currently available in the public cloud. This extraordinary performance will allow customers to deploy very large and demanding enterprise applications.” and we can have up to 64 data disks as well. http://azure.microsoft.com/blog/2015/01/08/azure-is-now-bigger-faster-more-open-and-more-secure/
So, given a local SSD drive and an Intel Xeon CPU, how does it perform compared to a regular A-series instance?
I did some basic disk benchmarking with HD Tach.
Read benchmarking test on G-series SSD based instance
Read benchmarking test on A-series HDD based instance
After comparison we can see that CPU usage is lower on the Intel-based instance because it is much more efficient than the AMD-based instance. We can also see that it has better performance than a regular HDD-based instance. Next, a similar test on an attached data disk on both instances:
G-series instance data disk READ
A-series instance data disk READ
We can see that the results are almost the same, but CPU usage is again lower. Even though this instance can have up to 64 data disks, don’t think of using it with Storage Spaces yet; wait until Premium Storage is available.
For those working with Netscaler: I often come across deployments where the packet engines on Netscaler VPXs are not sized properly.
By default, a Netscaler VPX is deployed with 2 vCPUs and 2 GB of memory. Of those 2 vCPUs, one is used for management purposes and the second is used for packet flow: it handles load balancing, compression, content switching and so on. (CPU 0 is the management core.)
So how can we check the utilization of these CPUs? (And no, we cannot use regular Unix tools like top; they will not display it properly, since the packet engine core is always polling for work and will be reported as fully utilized even though there isn’t any work for it. That’s why we need to use stat system.)
We can use the commands stat cpu and stat system.
On a regular VPX we can only see one packet engine CPU because of the two vCPUs.
For a regular VPX 1000 we can have a maximum of 3 packet engines, meaning a total of 4 vCPUs (which also means we need to add more memory to the VM). You can see the chart from Citrix here –> http://support.citrix.com/article/CTX139485
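The mapping behind the Citrix chart is simple: CPU 0 is always management, so the packet engine count is the vCPU count minus one, capped by the license. A tiny helper to make the arithmetic explicit (the cap of 3 assumes a VPX 1000 per the chart above):

```python
def packet_engines(vcpus: int, max_pes: int = 3) -> int:
    """Packet engines on a VPX: CPU 0 is management, the rest are
    packet engines, capped by the license (3 on a VPX 1000)."""
    if vcpus < 2:
        raise ValueError("a VPX needs at least 2 vCPUs")
    return min(vcpus - 1, max_pes)

print(packet_engines(2))  # default VPX deployment → 1
print(packet_engines(4))  # maxed-out VPX 1000 → 3
```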
So let’s do a quick comparison to see whether these changes improve our performance. The first result below is from a VPX 1000 with 2 vCPUs and 2 GB memory; the second, further down, is from a VPX 1000 with 4 vCPUs and 8 GB memory.
(NOTE: Multiple packet engines are not available on Hyper-V, only Vmware and Xen. Also note that this is CPU dependent: the better the CPU, the better the SSL performance.)
In order to test this I used a benchmarking tool from Apache called ab (short for ApacheBench).
It fires multiple requests against a load-balanced vServer; in this case the benchmark runs against a regular HTTP vServer.
ab -n 50000 -c 1000 http://192.168.10.32/index.html (this runs a benchmark using HTTP GET, with 50000 requests and 1000 concurrent requests against the web address)
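When comparing runs, the key line in ab's summary output is "Requests per second". If you collect several runs, a small script can pull that figure out of the output (the number in the sample line below is made up for illustration):

```python
import re

# A summary line in the format ab prints (the figure is made up):
sample = "Requests per second:    4321.10 [#/sec] (mean)"

def requests_per_second(ab_output: str) -> float:
    """Extract the mean requests/sec figure from ab's output."""
    match = re.search(r"Requests per second:\s+([\d.]+)", ab_output)
    if match is None:
        raise ValueError("no 'Requests per second' line found")
    return float(match.group(1))

print(requests_per_second(sample))  # → 4321.1
```

Comparing this number between the 2 vCPU and 4 vCPU runs makes the packet engine difference easy to quantify.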
Notice that on this first run the packet engine CPU is over 90%; with a bit more traffic my Netscaler would be unable to process the packets.
When I ran the same test against 4 vCPUs (where 3 are packet engines) the load was more evenly distributed (here I just used the stat cpu command to see the load on each individual PE).
So remember: scale packet engines accordingly! If you are unsure whether you need to scale out, take a look at your current environment with stat cpu during the busiest part of the day.
I’m getting a lot of search hits on my blog for “Lync and Netscaler setup”, “load balancing Lync”, “Lync and HA Netscaler” and “Lync and reverse proxy”, probably because I have a lot of content around Netscaler. But anyway, to answer the question: can we use Netscaler to do all these things? Load balancing, high availability and reverse proxy for Lync 2013?
Sure we can, I even recommend it.
Citrix Netscaler is supported by Microsoft as a load balancer for Lync http://technet.microsoft.com/en-us/office/dn788945, both as a hardware appliance and as a pure virtual software appliance. Citrix has also made a deployment guide which shows how to deploy Lync using Netscaler http://www.citrix.com/content/dam/citrix/en_us/documents/products-solutions/microsoft-lync-2013-and-citrix-netscaler-deployment-guide.pdf
You can also read more about it, in this datasheet here –> https://www.citrix.com/content/dam/citrix/en_us/documents/products-solutions/citrix-netscaler-datasheet-microsoft-lync-2013.pdf
So why should you use Netscaler for Lync? According to Gartner, it is one of the few ADCs recognized as a leading product.
Remember to use different TCP profiles for outside and inside traffic, since this will drastically improve network performance (if you are using TCP for SIP traffic, and for other TCP-based connections).