Summary of today's announcements at Citrix Synergy

So like many people I have been watching today's keynote from the comfort of my own chair, taking notes throughout. So that people don't have to rewatch the keynote or dig through every minor announcement, here is a summary of today's news:

XenDesktop 7.9

XenDesktop 7.9 was announced, so what are the features which are included as part of this release?

  • Federated Authentication Service (which will now finally enable full SAML-based authentication from an endpoint to a Citrix session). This is something I have written about before (https://msandbu.wordpress.com/2016/03/04/setting-up-saml-authetication-for-netscaler-and-storefront-with-sso/), so I'm guessing it is going to be an extension of that feature. Welcome back!
  • Citrix MCS and Nutanix integration (this is not a new feature, it was announced a while back, but now we finally know that it will be available for XenDesktop 7.9 customers, allowing direct connection to the Acropolis hypervisor)
  • Intel Iris Pro graphics technology (This is for customers who want to leverage Intel GPU in conjunction with XenServer 7 which I will discuss a bit later)
  • MCS with RAM-based caching (this will allow us to specify a RAM-based caching mechanism, I'm thinking similar to PVS)
  • Provisioning Services (BDM configurations and updates for simplified deployment, as well as support for modern firmware including UEFI)
  • New releases to Universal Print Server and Universal Print Driver
  • Remote PC Access for Windows 10 machines
  • CentOS support for Linux server-based and VDI desktops
  • New StoreFront version with support for Windows Server 2016 TP5
  • Citrix Receiver for Android, Chrome and HTML5
  • New System Center Operations Manager bundle, Citrix Connector for System Center Configuration Manager, and an updated version of AppDNA.

Some other important updates will most likely come with XenDesktop 7.9, and they also show the strong partnership between Microsoft and Citrix:

  • Support for Azure Resource Manager deployments, which is the de facto standard when deploying stuff in Azure these days.
  • Windows 10 VDI deployment from Citrix? Soooo many were surprised by this announcement, but from a licensing perspective this has been available for quite some time (http://www.zdnet.com/article/microsoft-to-enable-users-to-run-windows-10-on-azure/). This requires that we are running the latest Current Branch for Business.

Will this finally allow DaaS from Azure? It will be interesting to see.

Also there are some new updates which came from the session afterwards.

  • Zone Preference and failover
  • Local Host Cache

image

  • AppDisks for Hyper-V and Acropolis (Coming soon…)

XenServer 7

Citrix also announced a new version of XenServer, which comes with a lot of new features. For instance:

  • GPU support for Intel Iris Pro
  • Support for NVIDIA vGPU with Linux virtual machines
  • Support for up to 128 NVIDIA GRID vGPU-enabled VMs
  • Direct Inspect APIs (these APIs allow third-party security vendors to partner with Citrix in providing the next generation of virtual infrastructure protection, leaving malware, viruses and rootkit zero-day attacks no place to hide within the VM, and unable to compromise the security software). BitDefender is one of the vendors which already supports this –> http://www.bitdefender.com/business/hypervisor-introspection.html
  • Support for the SMB protocol for virtual machine storage in XenServer (note that XenServer does not yet support all SMB protocol features, such as failover and multichannel)
  • The largest and, to me, most important update: XenServer is now the first hypervisor to offer integrated management of Docker containers on Linux and Windows. With Microsoft's investment in Docker/containers, and with Citrix moving towards container support for NetScaler as well, this makes XenServer really interesting for environments where containers make sense.
  • Automated Microsoft Windows VM driver management
  • New Microsoft System Center Operations Manager (SCOM) support
  • Microsoft Active Directory integration
  • Templates for Windows 10 and Windows Server 2016
  • XenServer Health Check – provides proactive, regular and automated health checks and reporting
  • XenServer Conversion Manager – now supports batch conversion of all versions of Windows and simplifies migration from VMware
  • Significant scalability improvements:
    • 5x increase in supported host RAM (up to 5TB)
    • 2x increase in support for CPU cores (up to 288)
    • 8x increase in VM RAM (up to 1.5TB)
    • Support for Citrix AppDisks (up to 255 virtual disks per VM)

Nutanix and Citrix

I already mentioned some of the integration options with MCS and Nutanix, but they also announced a Citrix + Nutanix appliance called InstantON VDI –> https://www.citrix.com/blogs/2016/05/24/introducing-nutanix-instanton-vdi-for-citrix/

NetScaler

My favorite topic! There are some announcements that have been made:

  • Containerized NetScaler with CPX (You can read more about it here –> http://bit.ly/1rMh6Ug)
  • NetScaler management and analytics system (Which is an integrated Command Center and Insight, you can read more about it here –> http://bit.ly/1VVPhW5)
  • CloudBridge renamed to NetScaler SD-WAN
  • Microsoft will embed NetScaler capabilities into Intune App SDK which will enable apps to securely access on-premises assets without having to launch a VPN

So the future for NetScaler is looking bright. I will be writing more about what the future holds for NetScaler, so stay tuned!

XenMobile ♥ EMS?

I saw on Twitter that there was a lot of debate about Microsoft and Citrix partnering on XenMobile and EMS. It is important to remember that EMS is NOT ONLY Intune, it's a whole lot more:

  • Azure MFA (Multi factor authentication)
  • Azure Active Directory
  • Azure Rights Management
  • Microsoft ATA
  • and of course Intune

But a lot of the confusion boils down to: is XenMobile dead? No… It is important to remember that Intune has MAM capabilities for most of the Microsoft-based applications, and XenMobile will be able to leverage these capabilities, which basically means that XenMobile will be using the Intune SDK to do this.

So a lot of the integration will of course be with NetScaler and Azure AD for identity purposes, being able to do SAML-based authentication across to Azure AD as a SAML IdP. Citrix will also embed a number of EMS capabilities into XenMobile, such as self-service password reset and multi-factor authentication (MFA).

A fun detail is this quote, which can be found on Brad Anderson's blog (http://bit.ly/1TxIT1S): "Future collaboration will also include Citrix building a new EMM service on Azure that will integrate with and add value to EMS."

Finally, if you are an EMS customer, start getting educated on NetScaler. You'll be able to define conditional access policies in EMS/Intune in 2H 2016 that NetScaler will enforce on a per-device, per-app and micro-VPN basis.

New award: Citrix Technology Advocate (CTA)!

There is no denying that I have been working a lot with Citrix (books, blogging, speaking, community work). Yesterday I was notified by Citrix that I have been awarded a new community title called Citrix Technology Advocate (or CTA).

I'm grateful and honored that Citrix is recognizing the work I do for the community around Citrix, and I will continue doing it! I'm joined by other talented people on the list as well, many of whom I know from the community.

I look forward to contributing more, and to working on and evolving the Networking SIG!

The entire list can be found here –> https://www.citrix.com/blogs/2016/05/23/expanding-recognition-for-community-contributors-citrix-technology-advocates/

Microservices and Containers: how does Windows Server 2016 fit into the mix? With drawings!

After spending a lot of time trying to understand and grasp the concepts of containers and microservices, I decided I wanted to share my understanding; hopefully it will enlighten others as well. This is going to be from a Microsoft perspective: how the new features in Windows Server 2016 fit into this ecosystem of microservices and containers.

Traditional systems

So before we dive into the new stuff, we have to talk about how the old stuff works and how the new stuff fits into this way of delivering services. If we look at traditional services, they are often monolithic, which means that most components are interwoven.

image

So in terms of a simple eCommerce website, we have all the different components which make up the site installed on the same server. It is simple to manage, and if we need to scale we have to provision another virtual machine and configure the load balancer as well. This approach has some downsides:

  • Troubleshooting: even though we can easily isolate a server (it's basically removing it from the load balancing pool), a monolithic system might have quite complex code, which can make it hard to troubleshoot and debug.
  • Difficult to scale: even though it is simple to provision a virtual machine, it takes time to configure it, and another issue is that components cannot scale independently. For instance, we might see a surge in users who are browsing our catalog but not actually buying anything, so even if we just need more resources for the inventory component, all our services get scaled when we set up a new virtual machine.
  • Security risk: since all services run on the same server, the code may be less secure, and it might expose more information to attackers who can compromise the server.

So to handle these issues, more and more people are moving towards a microservices architecture, which basically means that we split all these components into their own entities, which can mean a virtual machine each, for instance. This gives us the flexibility to scale out components individually, makes it easier to troubleshoot and debug, and also simplifies updates: instead of updating the entire system, we can update each component separately.

Microservices to a certain degree

image

Now, the downside to this approach is the added overhead of having each service run inside its own virtual machine (since each of the services needs its own guest operating system installed), and since Windows Server is a multi-purpose operating system, there are a lot of services/components which are not required in order to deliver the service. So Microsoft has a new deployment option here called Nano Server!

Nano Server

So let's compare it to a traditional server (Standard or Datacenter):

image

It's a "headless" OS: 64-bit only, no UI, and it is going to be the foundation for web services moving forward for Microsoft. Nano Server can be used at the hypervisor level and in the virtual machine layer. Nano Server provides a smaller footprint and less overhead, and with less required patching it will also limit the attack surface and make services more robust. This fixes some of the issues with microservices delivery in Windows Server 2016, since each service can, for instance, be deployed in its own Nano Server VM.

image

Even with Nano Server on the Hyper-V host, I can still set up a virtual machine using the full Windows Server deployment. Using Nano Server in the entire stack removes a lot of the overhead, but I still need to provision virtual machines in order to scale out my architecture. So what else can we do? Enter containers!

Now, the simplest way to describe containers is "operating system virtualization". Yes, operating system virtualization, which is different from traditional virtualization, which is "machine virtualization".

image

So we have a set of hardware components which are virtualized and pre-defined with a set of resources like CPU, Memory and disk, and in there we have to install a guest OS.

Containers

A container, on the other hand, is a way to split up an operating system into separate entities on the SAME hardware: each container runs on the same underlying operating system, but each container has its own file system, registry and networking stack.

image

Each container can run its own service, and since it's NOT a virtual machine, we have even LESS overhead than regular virtual machines. Containers allow us to easily scale out each component separately. Need more resources for a specific component? Spin up a new container!
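To make the scale-out idea concrete, here is a small, hypothetical sketch (using the public nginx image as a stand-in for a "catalog" service; the names and host ports are made up):

```shell
# Run two instances of the same service component, each mapped to its own
# host port, so a load balancer can spread traffic across them.
docker run -d --name catalog-1 -p 8081:80 nginx
docker run -d --name catalog-2 -p 8082:80 nginx

# List the running instances of this component
docker ps --filter "name=catalog"
```

Scaling back down is just as simple: stop and remove one of the containers.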

Containers can be provisioned on top of a physical host or within a virtual machine, so if we now look back at the Visio drawing, we can see that we have addressed the issues.

image

We can now have virtual machines which host many containers; for instance, one virtual machine can hold several containers of the same service, or containers of several different services. Either way, we have now addressed the issue of being able to scale up a dedicated instance of a service with a limited number of virtual machines.

Now, what's missing to make this picture more complete? Right… some network features which would make it easier. Since we have multiple instances of each service, we are going to have more east/west traffic, which makes us more dependent on a load balancing feature; more east/west traffic also makes us a bit more "blind" in terms of malicious traffic.

So Microsoft has done a lot in terms of networking. First off, since more and more networking features are becoming "virtualized", Microsoft needed to revamp their existing networking stack, NDIS (which is pretty general-purpose). They therefore introduced PacketDirect, which allows for higher throughput, lower resource overhead and lower latency, and is aimed at pure datacenter networking (40/100 GbE).

Microsoft also introduced a software load balancing feature in Windows Server 2016 which operates at layers 3 and 4, as well as a distributed firewall feature which works at the vSwitch level and allows us to specify ACLs on the 5-tuple (source/destination IP, source/destination port and protocol), even within the same subnet. By implementing these features, we can more easily leverage containers and a microservices architecture, and implement security and load balancing capabilities between the different tiers.

image

So hopefully this post gave you a bit more of an understanding of Nano Server, containers, the approach to microservices, and some of the new networking capabilities available in Windows Server 2016.

So why choose Citrix over Microsoft RDS?

So this question came up a couple of days ago in my inbox: what actually makes customers choose Citrix over plain RDS?
Isn't RDS good enough in many circumstances? Has Citrix outplayed its role in the application/desktop delivery market? Not likely… This question has also appeared in my head many times over the last year: what is an RDS customer missing out on compared to XenDesktop? So I decided to write this blog post showing the different features which are NOT included in RDS, and an architectural overview of the different solutions and the strengths of both. NOTE: I'm not interested in discussing pricing here; I'm a technologist, and therefore this is mostly going to be a feature matrix show-off.

Architecture Overview

Microsoft RDS has become a lot better over the years, especially with the 2012 release, which brought central management in Server Manager, but a lot of the architecture is still the same. We can now have the Connection Broker in an Active/Active deployment as long as we have a SQL Server (note: 2016 TP5 now supports Azure SQL Database for that part). External access is handled by the Remote Desktop Gateway (which is a web service that proxies TCP and UDP traffic to the actual servers/VDI sessions), and we also have the Web Access role, where users can get applications and desktops and start remote connections.

image

But the Remote Desktop application which is built into the operating system still does not have good integration with an RDS deployment for showing "business applications", and with Microsoft pushing a lot towards Azure, they should have better integration there to show business applications and web applications from the same kind of portal.

From a management perspective, as I mentioned, everything is still done using Server Manager (which is a GUI add-on to PowerShell, where a lot can also be done), but Server Manager is still kind of clunky for larger deployments, and it does not give any good insight into how a session is being handled; you would need System Center, digging into event logs, or third-party tools to get more information. But we can now centrally provision the different roles directly from Server Manager, and the same goes for application publishing, which makes things a lot easier!

Citrix has carried the FMA architecture over from the previous XenDesktop versions, and the architecture still resembles RDS. NOTE: the overview is quite simplified, because I will dig into the features later in the post. Citrix has more moving parts. With RDS I would need a load balancer for my Gateways and Web Access servers; with Citrix, in larger deployments, you have NetScaler, which can serve as a proxy server and load balance the required Citrix services as well. Also, with Citrix we have a better management solution in Desktop Studio, which allows for easy integration with other platforms and simple image management using MCS (which is another topic as well), plus we have Director, which can be used for troubleshooting and monitoring the Citrix infrastructure.

image

The Protocol

So in most cases, the question I often see is: HOW GOOD IS THE PROTOCOL? Again and again I've seen many people state that RDP is as good as Citrix ICA, but rather than just posting a picture and letting it state the obvious: you need facts!

Luckily I’ve done my research on this part.

RDP is mostly a one-trick pony: we can make some adjustments in Group Policy to tune bandwidth usage, but it is still quite limited to the TCP stack of the Windows NDIS architecture, which is not really adjustable.

(ThinWire vs Framehawk vs RDP) https://msandbu.wordpress.com/2015/11/06/putting-thinwire-and-framehawk-to-the-test/
With Citrix we can use different protocols depending on the use case. For instance, a good friend of mine and I ran a Citrix session over a connection with 1,800 ms of latency using ThinWire+ and it worked pretty well, while RDP didn't. On the other hand, we tried Framehawk on a connection with 20% packet loss, where it worked fine and RDP didn't work at ALL.

But again this shows that we have different protocols that we can use for different use-cases, or different flavours if you will. 

clip_image002

Another trick is that in most cases Citrix is deployed behind a NetScaler Gateway, which has loads of options to customize TCP settings at a more granular level than we could ever do in Windows without, in some cases, messing with the registry. So is RDP a good enough protocol for end users? Sure it is! But remember a couple of things:

  • Mobile users access using a crappy Hotel Wifi (Latency, packet loss)
  • Roaming users on 3G/4G connection (TCP retransmissions, packet loss)
  • Users with HIGH requirements in terms of performance (Consuming alot of bandwidth)
  • Connections without using UDP (Firewall requirements)
  • Multimedia requirements (3D, CAD applications)

With these types of end-users, Citrix has the better options.
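To illustrate what this TCP tuning can look like (a sketch only; the profile and vserver names are invented, and the exact parameters should be verified against the CLI reference for your NetScaler firmware), a custom TCP profile is created and then bound to a virtual server:

```
add ns tcpProfile tcp_mobile_users -WS ENABLED -SACK ENABLED
set lb vserver my-gateway-vip -tcpProfileName tcp_mobile_users
```

Window scaling and selective acknowledgements are exactly the kind of knobs that help the high-latency and lossy-connection scenarios listed above.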

Image management

Image management is the crown jewel: being able to easily update images and roll out changes in a timely fashion when updates are needed, without causing too much downtime/maintenance.

With RDS there is no straightforward way to do image management. Yes, RDS has single-image management, but this is mainly for VDI setups running on Hyper-V, which is the supported platform for it. A downside is that it requires Hyper-V in order to do this using Server Manager.

Citrix, on the other hand, has many more options for OS image management. For instance, Citrix has Machine Creation Services, which is a storage-based way to handle OS provisioning and changes to virtual machines; I described it in my other post on MCS and Shadow Clones (https://msandbu.wordpress.com/2016/05/13/nutanix-citrix-better-together-with-shadow-clones/).

image 

Citrix also has Provisioning Services, which allows images to be distributed/streamed over the network. Virtual and physical machines can be configured with PXE boot to stream an operating system down and store it in RAM. Updating the image just requires a reboot.

Another thing to think about here is hypervisor support: PVS, being network/PXE based, supports both physical and virtual machines in most cases, while MCS is dependent on making API calls to the hypervisor layer. MCS already has support for:

  • VMware
  • XenServer
  • Hyper-V with SCVMM
  • Azure
  • Amazon EC2
  • CloudPlatform

Other features that Citrix has

  • Remote PC Access (this golden gem allows a physical computer to be accessed remotely using the same Citrix infrastructure): just install a VDA agent, publish it, and it can then be accessed using Citrix Receiver. Even though Microsoft has RDP built into each OS, there is no central management of it and no built-in support for adding these machines to the gateway; each user has to remember the IP or FQDN.
  • App-V and Configuration Manager integration and management (Citrix actually has App-V management capabilities directly from Studio; they also have an integration pack for Configuration Manager which, for instance, allows the use of Wake-on-LAN for Remote PC Access. It can also leverage the Configuration Manager integration to do application distribution and direct publishing for customers that use Configuration Manager heavily.)
  • Personal vDisk and AppDisks. Note that RDS has something called User Profile Disks, but that is a primitive VHDX user-profile mapping. Personal vDisk and AppDisks are layering capabilities which allow us to store personalization and applications in their own layers. For instance, AppDisks make application distribution easier, since all we have to do is attach a layer to the virtual machine (note that AppDisks supports XenServer and VMware as of now).
  • VM-hosted applications (allows us to publish applications which, in some scenarios, can only be installed on a client OS)
  • Linux support (Citrix can also deliver virtual desktops or dedicated virtual desktops from Linux using the same infrastructure)
  • Full 3D support (Microsoft still has a lot of limitations here using RemoteFX vGPU, but Citrix has multiple solutions, for instance NVIDIA vGPU or GPU passthrough directly from XenServer; note this is also supported on VMware)
  • Full VPN and endpoint analysis using NetScaler Gateway (NetScaler Gateway with SmartAccess has a lot of different options to do endpoint analysis using OPSWAT before clients are allowed access to a Citrix environment)
  • Skype for Business HDX optimization pack (allows Skype audio and video to be offloaded from the servers directly to the endpoint)
  • Universal Print Services (Allows for easier management of print drivers)
  • System Center Operations Manager management packs (part of the ComTrade deal, which allows Platinum customers to use management packs from ComTrade to get a full overview of the Citrix infrastructure)
  • More granular control using Citrix policies (which allow us to define more settings for Flash redirection, sound quality, bandwidth QoS and much more)
  • HTML5-based access (StoreFront supports HTML5-based access, which opens up Chromebook access; Microsoft is still developing their HTML5 web front-end)
  • Application compatibility analysis (AppDNA)
  • A hell of a lot better management and insight using Director!
  • Local App Access (Allows us to “present” locally installed applications into a remote session)
  • Better Group policy filtering (based upon where resources are connecting from and using Smart Access filters from NetScaler)
  • Performance optimization (using, for instance, PVS with Write Cache in RAM with Overflow to Disk, you are not constrained by the resources of the backend infrastructure, which allows for a better user experience)
  • Zone based deployment which allows users to be redirected to their closest datacenter based upon RTT
  • Mix of different OS versions: with Citrix we have a VDA agent that can be used on different OS versions and managed from the same infrastructure, while Microsoft has limited management across OS versions.

NOTE: Did I forget a crucial feature or something in particular? Please let me know!

Summary

So why choose Citrix over Microsoft RDS? Well, to be honest, Citrix has a lot of features which make it more enterprise-friendly:

  • Easier management and monitoring capabilities
  • Better image-management and broad hypervisor/cloud support + Performance Optimization
  • Better, multi-purpose protocol options (ThinWire, Framehawk etc.)
  • Broader ecosystem support (Linux, HTML5, Chromebooks)
  • NetScaler (Optimized TCP, Smart Access, Load balancing)
  • GPU support for different workloads
  • Remote PC support
  • Collaboration support with Skype for Business
  • Zone based deployment
  • Layering capabilities (personalization and applications)

So to sum it up: you can have a Toyota Yaris which gets you from A to B just fine, or you can have a garage filled with different cars depending on requirements, with a bunch of different features which make the driving experience better, because that is what matters in the end… end-user experience!

Setting up NetScaler CPX load balancing on an Ubuntu Docker host with Nginx

After being a unicorn for some time, Citrix finally released the Docker-based NetScaler, called CPX!
NOTE: CPX can be downloaded from here –> https://www.citrix.com/downloads/netscaler-adc/betas-and-tech-previews/cpx-111-405.html if you have the proper Citrix Partner access.

As of now, CPX can be used in two ways: either deployed on an Ubuntu host using Docker, or provisioned from the NetScaler Management and Analytics System through its Mesos integration.

So as of now the requirements are

  • 1 CPU
  • 2 GB RAM
  • Ubuntu 14.04 or later

So the easiest way is to download Ubuntu Server from http://www.ubuntu.com/download/server (it needs to be 64-bit!)

(I'm not going to cover how to install an Ubuntu server, but I will show the steps needed to set it up as a Docker host. Note that I'm using Ubuntu 14.04, and it must have internet access in some way, either via a proxy or a direct connection, in order to download the required files.)

The simplest way is to set up an SSH server, which makes it easier to work with the host from a remote session:
sudo apt-get install openssh-server

Then we need to add a couple of requirements to the Ubuntu host in order to install Docker

sudo apt-get update

sudo apt-get install apt-transport-https ca-certificates

sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

Add a new repository

sudo vi /etc/apt/sources.list.d/docker.list

Add the following

deb https://apt.dockerproject.org/repo ubuntu-trusty main

Then make sure AppArmor is installed:

sudo apt-get install apparmor

Then update the package index again, install the docker-engine package, and start the Docker service:

sudo apt-get update

sudo apt-get install docker-engine

sudo service docker start

Then to verify that docker is running use the command

sudo docker run hello-world

image

Now we can run the sudo docker command to see which subcommands it supports, and sudo docker images to see which container images are available on the host.

image

After that we have to extract the CPX from the tar file

tar -xvzf cpx-11.1.40.5.tar.gz

Then change directory into the CPX folder.

Then run the make command from within the directory (NOTE: you need to have make installed, which can be done using sudo apt-get install make). This builds a Docker image based upon the instructions in the makefile.

NOTE: This might take some time and pulls in additional components; it will also download the ubuntu base image from Docker Hub. After it is complete, you can view the CPX image by using the command:

sudo docker images

image

Now let’s create a container from the CPX image

sudo docker run -dt -p 22 -p 80 -p 161/udp --ulimit core=-1 --privileged=true cpx:11.1.40.5

If you run sudo docker ps, you will see the container running:

image

Now that we can see that the CPX is running as it should, we can enter it using SSH. Notice the 0.0.0.0:32769 port mapping (which maps to the container's SSH server); this port is used to open an SSH session to that particular container.

ssh -p 32769 root@127.0.0.1 (The default administrator credentials to log on to a NetScaler CPX instance are root/linux.)

Since the CPX is not an ordinary NetScaler, we have to wrap NetScaler commands in a shell script. For instance, if we want to run show ns config, we have to do it like this:

cli_script.sh "show ns config"

image

And note: CPX can only be configured using the CLI, the NITRO API, or the NetScaler Management and Analytics System virtual appliance.

To set up a sample load balancing scenario, we will have nginx running in the backend in its own separate container. For that we need the nginx Docker image, which can be downloaded with this command from the Ubuntu host –>

sudo docker pull nginx

image

Then we are going to set up a Docker container from the nginx image:

sudo docker run --name docker-nginx -p 80:80 nginx (this exposes port 80 on the Ubuntu host and maps it to port 80 in the container)

Open up a web browser to verify that nginx is running. (Note: we started the process in the foreground, so the console will be occupied by the container.)

image

But by using the -d flag you can run it in the background (detached):

sudo docker run --name docker-nginx -p 80:80 -d nginx

Okay, so now we have the container running externally on port 80. Let us set up a load balancing vserver which will be mapped externally on the Ubuntu host to port 81. In order to set up the load balancer, we first need the IP address of the nginx container; the nginx image does not have SSH, so the simplest way is to use this command:

sudo docker exec -it containerid ip addr
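Alternatively (assuming the container was started with the name docker-nginx, as above), docker inspect with a Go template prints just the bridge IP without entering the container:

```shell
# Print only the container's IP address on the default bridge network
sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' docker-nginx
```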

image

Now that we know the IP address of the container (which is 172.17.0.3 here), we can configure the CPX load balancing parameters:

cli_script.sh "add service db1 172.17.0.3 HTTP 80"

cli_script.sh "add lb vserver cpx-vip HTTP 172.17.0.4 81"

cli_script.sh "bind lb vserver cpx-vip db1"

image

Eureka!

Notice also that this vserver is now exposed on port 81, but only on the network the Docker bridge is on. So the simplest way is to add a NAT rule to iptables which will redirect external traffic to that vserver port:

sudo iptables -t nat -A PREROUTING -p tcp -m addrtype --dst-type LOCAL -m tcp --dport 50000 -j DNAT --to-destination 172.17.0.4:81
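Before relying on the NAT rule, the vserver itself can be tested directly from the Ubuntu host, since the Docker bridge network is reachable from the host (addresses as used above); from any other machine, the test would instead go against the host's external IP on port 50000:

```shell
# Talk to the CPX load balancing vserver directly on the bridge network;
# this should return the nginx welcome page via the CPX.
curl -s http://172.17.0.4:81/ | head -n 5
```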

And eureka!

image

Did you lose the overview? The simplest way is to show it in a Visio drawing:

image

I spun up a container from the nginx image which I mapped externally on port 80. Then I set up a CPX and added a load balancing vserver which responds on port 81. Since the CPX did not have port 81 mapped in the Docker setup, I needed to add an iptables rule which mapped the virtual server's port 81 externally to port 50000. So when I open a browser against the external IP on port 50000, I get the web frontend from the nginx server via the NetScaler CPX.

NetScaler Management and Analytics System

So earlier today, actually two hours after I was done at work and was walking my dog, Citrix released a tech preview of their next-generation Insight and Command Center (yes, they merged them!): a new product called NetScaler Management and Analytics System, which is essentially a combination of NetScaler Insight Center and Command Center.

NOTE: The NetScaler CPX documentation can be found here –> http://docs.citrix.com/en-us/netscaler-cpx/11-1.html

So from an architectural overview, it now looks something like this:

image

We can now pull metrics from each instance using the NITRO API. Like Command Center, it also uses SNMP traps for monitoring, and it does AppFlow monitoring as well. One thing that is missing is that CPX does not have AppFlow support as of now, but we can monitor it using the NITRO API from the management console.
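Those NITRO pulls are plain HTTP under the hood. A minimal sketch of fetching LB vserver statistics; the NSIP `192.0.2.10` and the default `nsroot` credentials are placeholders, and the curl call is echoed rather than executed:

```shell
NSIP=192.0.2.10   # placeholder management IP; replace with your instance's NSIP
# The stat tree returns JSON counters (hits, throughput, state) per LB vserver.
STAT_URL="http://$NSIP/nitro/v1/stat/lbvserver"
echo "curl -s -u nsroot:nsroot $STAT_URL"
```

Swapping `stat` for `config` in the URL reads configuration objects instead of counters, which is the same split MAS uses between monitoring and inventory.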

Also, the Orchestration module now supports Mesos, Nuage and Infoblox in addition to the existing OpenStack support.

So what does it look like?

Well, first off, the documentation is still missing, so there are some loose threads that need a bit more digging into. For instance, during deployment we have a lot of options besides those that were part of Insight Center.

image

But after adding an appliance to the MAS server, it creates some nice charts for each appliance that is added, based upon CPU, disk, memory and the status of the LB virtual servers on the appliance.

image

UMS2

We also get an “instant” overview of the traffic going through the appliance, and a bit more info regarding certificate monitoring.

UMS3

The Insight part is still the same, but it contains the latest modules, which are Gateway and Security Insight –> image

And as I mentioned, under Orchestration I can add a connection to Apache Mesos, and from there I can provision CPX instances.

image

Citrix also added a better overview for viewing the different applications.

image

Now, a cool thing here is that we can also make our own Application Groups, for instance grouping by a specific tenant, or having an application group consist of different applications.

image

Then we can view the state of that particular application group

image

Now lastly, Citrix introduced something called StyleBooks, which is a template-based provisioning solution where we can add our own StyleBooks based upon an XML template.

image

Nutanix & Citrix better together with Shadow Clones

So a lot of the problems with using MCS versus PVS have been the load it puts on the storage fabric. The way MCS works, we have a golden image which is our master template; when we create or update a machine catalog we have the option to choose a specific snapshot, or if we don't, MCS will create a light snapshot for us. Based upon that snapshot, it will copy the snapshot to the storage repository which we have defined in Citrix Studio, under a folder called MCS-basedisk.

Then for each virtual machine that is created we have two additional disks: one identity disk and one differential disk where all writes will be stored. So how is this going to look from a storage perspective?

image

That MCS base disk will be marked as read-only and linked together with the differential disk, because a differential disk is simply a virtual disk used to isolate changes to a virtual hard disk or the guest operating system by storing them in a separate file.

So every virtual machine that boots up in that machine catalog is going to be dependent on that MCS-basedisk. And with Nutanix there could be an issue, because Nutanix uses data locality, meaning that virtual machines will be served locally on the host they reside on (where that is possible, of course). This is to ensure scalability and avoid bottlenecks for the virtual machines.

Also, on a Nutanix platform we have so-called Containers, which map up as datastores (in VMware) for virtual machines. So this will still work pretty well if everything is hosted on the same physical host.

image

Since the MCS-basedisk is located on the same host as the machine catalog clones, the read operations happen locally on that particular host. But this is not realistic; in most cases we have multiple hosts, with the clones scattered across different hosts.

image

Now, because of the data locality rule, the vDisk (which is the MCS-basedisk) will be stored on one of the hosts. All the other clones, with their differential disks linked to the MCS-basedisk, will generate a lot of read traffic across the network because of this. From a Citrix perspective it is, in theory, still the same datastore.

But as I mentioned earlier, this COULD be a problem; Nutanix, however, already has a solution in place for it, called Shadow Clones. This allows the storage fabric to cache a vDisk or VM data which is in a multi-reader scenario, meaning in this case all the clones which need to read from the MCS-basedisk. In this scenario the CVMs will monitor read activity across the whole fabric. Once the disk has been marked as immutable, the vDisk can then be cached locally by each CVM making read requests to its cached version of the MCS base disk (aka Shadow Clones of the base vDisk).

image

NOTE: One thing to be aware of is that the data will only be migrated on a read request, so as not to flood the network and to allow for efficient cache utilization.

In the case where the base MCS virtual machine is modified/updated, the Shadow Clones will be dropped and the process will start over.
But this feature allows us to still leverage data locality and removes those constraints from the network. The feature is enabled by default (you can see this by running this command against the CVM):

ncli cluster get-params|grep -i shadow

image
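If you ever need to toggle the feature, `ncli` also has an edit-params counterpart; a sketch to be run against a CVM (the flag name is from Nutanix's documentation, so verify it against your AOS version before relying on it):

```shell
# Echoed rather than executed; run the printed command on a CVM.
# Shadow Clones are on by default, so this is mainly for troubleshooting.
TOGGLE_CMD="ncli cluster edit-params enable-shadow-clones=false"
echo "$TOGGLE_CMD"
```

Re-enable with `enable-shadow-clones=true`; the get-params command above confirms the current state either way.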

So this is one of the cool features which makes MCS better to use on Nutanix, and note that there might be more to come here as well!