A Closer Look at AVI Networks

Now, since I work a lot with NetScaler and spend too much time on social media these days, I am bound to see another product that sparks my interest.

This is where AVI Networks popped up into my view. (Well, it was kind of hard not to notice it.)

So what do they do? They deliver an ADC (or rather, a Cloud Delivery Platform) that is software-only and aimed at next-generation services (containers, microservices), which look to be their main focus.

Their architecture is pretty simple: an AVI Controller handles the monitoring, analytics and management of the different Service Engines, which in turn deliver all the load-balancing features. Using the controller we define a load-balanced service, and the AVI Controller (if it has access) will deploy a Service Engine to serve that service to the end-users. Note that using the connectors or the CLI/API it is easy to automate deployment of new services, for instance from a development standpoint (a rough sketch follows below).
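
As a rough illustration (not taken from the post itself), object creation on the controller is driven over its REST API. The controller hostname, credentials and pool layout below are invented, and the endpoint and field names are assumptions based on Avi's published API, so they may differ between releases (newer builds may also want an X-Avi-Version header):

curl -k -u admin:password -H "Content-Type: application/json" \
     -X POST https://avi-controller/api/pool \
     -d '{"name": "iis-pool", "servers": [{"ip": {"addr": "10.0.0.11", "type": "V4"}}]}'

A virtual service object referencing the pool can be created the same way against /api/virtualservice, which is what makes it easy to wire into a deployment pipeline.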

As of now they say "any cloud", but it is limited to VMware ESX, OpenStack and Amazon Web Services. Their product seemed interesting, so I decided to give it a try in our ESXi environment.

The setup is a simple OVA template which deploys the AVI Controller (it can be downloaded from here –> http://kb.avinetworks.com/try/).

After the deployment is done you get to the main dashboard

image

So let's set up a new virtual service: a simple IIS load-balancing VIP using the default port, HTTP profile and TCP profile.
image

Note that I can create a custom TCP profile with custom TCP parameters, and under the application profile I can enable front-end optimization, caching and X-Forwarded-For rules.

Now I need to create a server pool, which defines the port, load-balancing rules, persistence, and optionally AutoScale rules.

image

After I have added my servers and defined the virtual network it should attach to, I can go ahead with the service creation. From here I can add HTTP rules.

image

Under Rules I can define different HTTP request policies to modify headers and so on.

image

Next I define the analytics part and activate real-time metrics. This is something that I think separates an ADC from a plain load balancer: the insight!

image

Then the advanced part, where I can define performance limits, weights and so on.

image

When I am done with the configuration I click Save, and then I get to this dashboard. Hello gorgeous!

image

What is happening in the background now is that the AVI Controller is deploying a Service Engine OVA template to my ESX hosts.

image

It is connected to my internal VM network, and when the Service Engine is done deploying the health score is set to 100.

image

Now when I start to generate some traffic against the VIP I can see in real time what is going on, and how long the application itself takes to respond.
Now this is valuable insight! I can see that my internal network is not the bottleneck, and neither is the client; the application itself is spending too much time. I can also see how many connections and how much throughput is being generated.

image

If I go into Security I can see whether there are any ongoing attacks and what level of security I have in my network. I need to dig up some more details on what kinds of attacks are detected in this overview.

image

Just for the fun of it, I used LOIC to spam the VIP with HTTP GET requests to see if I could trigger something. It didn't, but when I looked into the log I could see that I get all the information I want from within the dashboard.

image

I can basically filter on anything I want. Now if I go back to the dashboard, I can see the flow between the Service Engine, the VIP and the server pool it is attached to.

image

Another cool feature is the ability to scale out or scale in if needed. Let us say that we can see the Service Engine is becoming a bottleneck; then we can just go into the service and choose Scale Out.

image

When we go back to the dashboard, we can see that we now have two Service Engines servicing this VIP.

image

Now the cool thing is that we can set AVI to autoscale if needed. Let's say one of the Service Engines is becoming a bottleneck; this will trigger a CPU alert, which in turn creates another Service Engine (if the AVI Controller has write access to the virtual environment).

In terms of load balancing between multiple Service Engines, it uses GARP on the primary Service Engine, where most of the traffic will be processed. Excess traffic is then forwarded at layer 2 to the MAC of the second SE; the second SE changes the source IP address of the connection and bypasses the primary SE on the way back to the client.

So far I like what I see. This is another approach to the traditional ADC delivery method where everything is in a single appliance, so stay tuned for more!

#adc, #avi-networks, #load-balancing

Netscaler and DDoS

A part of many network admins' day-to-day tasks involves mitigating DDoS attacks. They come in many shapes and sizes,
but they all share a common goal: disrupting the service for the users. These attacks make the service unresponsive, so it cannot serve the regular users who actually need to access it. Throughout the years there have been many DDoS attacks on many of the HUGE online services.

For instance PayPal, Visa and many online banks (such as DNB in Norway) have suffered from these kinds of attacks, and if you think about it, what happens if an online bank is offline? The business loses a lot of income and the regular users cannot access their online bank accounts.

Now back to the kinds of DDoS attacks. The most common ones are:

SYN Flood:
This happens when a host sends a flood of TCP/SYN packets, often from a forged address. Each of these packets is handled like a connection request, causing the server to spawn a half-open connection. It is really just a simple exploit of how TCP connections are established.
I like to think of it as an old lady (who is in disguise) who gives a bag to the server and says "can you hold this bag for me?", and of course the server is happy to oblige and holds the bag. Then the old lady runs off, and the server is standing there with the bag yelling "Old lady?", and then along comes another old lady (in disguise) with another bag, which again the server is happy to hold, and again it is stuck, now with two bags.
And as you can see, it is only a matter of time before the server cannot hold any more bags.

ICMP Flood:
This one is again split up into several different types.
The common factor in these attacks is that they use ICMP. The ping command is pretty simple: when run, it asks a server "Are you alive?" and the server says "yes". If you have thousands upon thousands of these requests, they can quickly use up much of the network bandwidth at the server.

Smurf attack (which is a type of ICMP flood):
This is another type of attack (usually possible where the network isn't configured correctly). An attacker sends a ping with a spoofed source IP address to a broadcast address on a network, with the reply address set to the victim server's address. All the clients in that subnet which are alive and receive the ping request will then send their ICMP replies to the server.
These kinds of attacks are usually easily mitigated in the network.
For instance, with Cisco you can set a simple ACL to rate-limit the ICMP traffic:

configure terminal
access-list 100 permit icmp any {your network} {your wildcard mask} echo-reply
access-list 100 permit icmp any {your network} {your wildcard mask} echo
interface e1
rate-limit input access-group 100 512000 8000 8000 conform-action transmit exceed-action drop

Or, even better, you should disable directed broadcasts on the interface with the command no ip directed-broadcast.
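
Applied under the interface configuration (using the same example interface as above), that looks like:

interface e1
 no ip directed-broadcast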

Now these two are the most common types of low-layer attacks. There are a bunch of layer 7 attacks which I will discuss in a later post.
So how does NetScaler come into the picture?

SYN Flood:

A NetScaler appliance defends against SYN flood attacks by using SYN cookies instead of maintaining half-open connections on the system memory stack. The appliance sends a cookie to each client that requests a TCP connection, but it does not maintain the states of half-open connections. Instead, the appliance allocates system memory for a connection only upon receiving the final ACK packet, or, for HTTP traffic, upon receiving an HTTP request. This prevents SYN attacks and allows normal TCP communications with legitimate clients to continue uninterrupted.

SYN DoS protection on a NetScaler appliance requires no external configuration; it is enabled by default.
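
If you want to inspect or toggle this behavior, it is exposed as a SYN cookie setting on the TCP profiles; treat the exact parameter name below as an assumption and check the command reference for your firmware:

show ns tcpProfile nstcp_default_profile
set ns tcpProfile nstcp_default_profile -synCookie ENABLED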

ICMP Flood:

The NetScaler also protects network resources from ICMP based attacks by using ICMP rate limiting and aggressive ICMP packet inspection. It performs strong IP reassembly, drops a variety of suspicious and malformed packets, and applies Access Control Lists (ACLs) to site traffic for further protection.
Now if you type sh ns ratecontrol

image

You can see the allowed number of ICMP packets per millisecond; 100 is the default value. By default there is no rate control set on UDP and TCP.
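
If you need to adjust these thresholds, a minimal sketch of the relevant command would be the following (the parameter names are from memory of the command reference and the values are just examples, so verify them on your build):

set ns rateControl -icmpThreshold 300 -udpThreshold 1000 -tcpThreshold 1000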

#adc, #ddos, #icmp, #netscaler, #tcp-syn-flood

Netscaler 101

The last couple of days I've seen a lot of traffic on my blog regarding the posts on Netscaler (and I don't have that many of them!). And with the recent events regarding Cisco ACE and Microsoft Forefront TMG, I'm guessing that a lot of people are looking into the option of switching over to Citrix.
Cisco has always been huge in the networking market, but in the ADC (Application Delivery Controller) market they never got the huge market share they were hoping for, so a couple of weeks ago they decided to stop further development of their ACE product. In a similar move, Microsoft decided to stop further development of their TMG product. TMG is not the same kind of product as Netscaler/ACE/BIG-IP, though it has a lot of the same functions and features.

So back to Netscaler: what can it offer?
* Advanced load balancing
* Content and app caching
* Database load balancing
* Application Firewall
* Secure Remote Access
* Advanced server offload
* Application acceleration
* Integration with Citrix
      * Access Gateway features
      * Web interface
* Scale up and Scale Out features

You can read more about the different features here –>
http://www.citrix.com/products/netscaler-application-delivery-controller/features.html

Now the Netscaler product comes in three different platforms.

MPX: This is the hardware appliance, which is again split up into different models:
http://www.citrix.com/products/netscaler-application-delivery-controller/features/platforms/mpx.html
As you can see, most of the models here have a "pay-as-you-grow" option, so for instance if you buy an MPX 7500 and your company grows and you need more throughput, you can upgrade your 7500 to a 9500. It's the same hardware as before; you just "unlock" more features.
You can see all the different models and features here –> http://www.citrix.com/content/dam/citrix/en_us/documents/products/netscalerdatasheetaugust2012.pdf

VPX: A software-based virtual appliance, available for Hyper-V, VMware and XenServer.
http://www.citrix.com/products/netscaler-application-delivery-controller/features/platforms/vpx.html
Here as well you have a "pay-as-you-grow" option, so you can upgrade it if you need more throughput. The downside to a VPX is that it does not have the hardware-based SSL acceleration the MPX has, which means it can handle far fewer SSL connections.

SDX: The best of both worlds. It is a hardware appliance like the MPX, but it also has the capability of running VPX instances. It is a piece of hardware which basically runs a stripped-down XenServer, allowing it to run multiple VPX instances inside. And since this hardware has SSL acceleration capabilities, it does not have the downside of a regular VPX. It allows for up to 40 VPX instances, which enables true multi-tenancy.
You also have the “pay-as-you-grow” option here.
http://www.citrix.com/products/netscaler-application-delivery-controller/features/platforms/sdx.html

Netscaler also comes in three different editions (like most Citrix products).
You can see the different editions and their limitations in this datasheet
http://www.citrix.com/content/dam/citrix/en_us/documents/products/netscalerdatasheetaugust2012.pdf

A summary,
Standard = Use for load balancing (web and DB); also has Citrix Web Interface and TCP optimization.
Enterprise = More advanced features: CloudBridge, EdgeSight for Netscaler, Branch Repeater client.
Platinum = Includes all the features.

So what do I need for my organization?
Well, first off you need to figure out what your needs are.
1: Do I need just the load balancing for my Web-servers?
2: SSL VPN solution and/or SSL offloading?
3: Advanced Web load-balancing and caching and optimization?
4: Multi-tenancy solution?
5: DDoS defenses? Or do I have a firewall in front which is fully capable?
6: Just for my Citrix pieces (Access Gateway and Web interface)?
7: SQL load-balancing?
8: How many users do I have?

You also need to calculate the bandwidth usage of the service you are going to load-balance; most products (for instance Lync) have well-documented traffic usage for each feature.
Let's take an example: if I am a small business that just needs to load-balance my 2 web servers for my internal users (and I have 100 of them), the smallest VPX would suffice.
If I am an enterprise service provider and I offer a full multi-tenancy solution where customers can set up LB for all their services, I would recommend an SDX. (The best approach regarding sizing is to start with the lowest model you think you need and upgrade when you need to grow.)

So after you have chosen the model (remember that you always need two of them, since with only one you have a single point of failure), the next part is setting up the device.
Remember that the Netscaler operating system consists of two parts:
1: FreeBSD (the appliance uses this part for booting and for logging)
2: The core OS (NSOS, the Netscaler OS), which controls the traffic in and out of the appliance.

When an appliance boots, it gets the system image from flash, decompresses it and puts it into RAM. The config file is also fetched from flash and put into RAM (this is known as the running config).
(You can show the running config from the CLI by running the command show ns runningconfig; if you want to see the saved config you can run the command show ns.conf.)
You can access it either via a console (serial cable) or via a console through the hypervisor.

And remember that you can save at any time by running the command save ns config; if you screwed something up (and didn't save your config) you can just restart the Netscaler.

When you start the NS appliance, the first thing you see is that it asks for an IP address (known as the NSIP, the Netscaler IP), which is used for management purposes and clustering. You also enter a subnet mask and a gateway.

image

After that you can save and quit the config menu, and you can now access the appliance via the web console. You can also see more info regarding the address by running the command show ns ip 10.0.0.2.

image

As you can see here, it says that "Management Access is enabled" and that FTP, Telnet, SSH and GUI are enabled.
We should disable the insecure access methods before we continue, by running the command set ns ip 10.0.0.2 -telnet DISABLED and the same for FTP.
image
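
For reference, the commands I ran were along these lines (the NSIP is of course the one from my lab):

set ns ip 10.0.0.2 -telnet DISABLED
set ns ip 10.0.0.2 -ftp DISABLED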

There are other things we should configure as well, such as changing the default password for the user "nsroot".
You can do this by running the command set system user nsroot PASSWORD (something very, very safe).
image

You also SHOULD enable NTP sync with an authorized NTP server:
add ntp server IP -minpoll integer -maxpoll integer
enable ntp sync
image
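
A concrete example (the server IP and poll intervals here are just illustrative):

add ntp server 10.0.0.100 -minpoll 6 -maxpoll 10
enable ntp sync
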
Now we can log onto the web GUI. (I'm using version 10 of the Netscaler VPX; you can get a free trial for your hypervisor from citrix.com, and I might add that the web GUI is much improved in v10.)
image

The default username and password for the local system user on a Netscaler are nsroot and nsroot.
So after you have logged in you will come to main menu.
image

It's split up into 3 panes (Dashboard, Configuration and Reporting), and what you see here is the Configuration pane.
If I go to the Dashboard, I see a lot of real-time information regarding, well... everything you want to see.
I can choose whether I wish to view SSL connections, TCP handshakes, HTTP traffic etc.

image

The Reporting pane is just that: you can create reports, and there are a bunch out of the box that we can view as well.
But most of the time we are going to be in the Configuration pane.
Now, what other things do we need to do in order to load-balance a service?
First off we have to decide how the Netscaler should be placed in our infrastructure; most designs are based on
one-arm mode or two-arm mode.

In one-arm mode the Netscaler has ONE interface, and on that interface the external traffic comes in and the internal traffic goes out (the traffic is split by using VLANs).
In two-arm mode the Netscaler has TWO interfaces: one where the external traffic comes in and goes out, and one for the internal traffic. This is the much more common deployment.

Now in both scenarios the traffic to the back-end servers are flowing as the following.

image
When the client connects to the web service on the virtual IP (90.90.90.90), the Netscaler (depending on the LB rules) makes a connection to one of the servers bound to that virtual service, using the Netscaler SNIP (Subnet IP).
The Subnet IP is an address that connects the Netscaler to the servers in the backend, so you should have a SNIP address for each subnet you want to have services in.
So SOURCE IP —> VIRTUAL IP (NS) SNIP —-> WEB SERVER 10.0.0.4 (BASED ON LB); for the web servers it will appear that the connections come from the same IP. And the same goes on the way back to the clients:
WEB SERVER –> SNIP (NS) VIRTUAL IP —> SOURCE IP, so all the clients see is that one IP address, which may front loads of web servers.

Now, is there a problem with this?
Well, yeah... if you have a web server you probably want to log the IP address of the client. You have the Netscaler option known as Use Source IP mode (USIP), which makes the backend servers see a direct connection from the clients. But what is the downside of this?
1: TCP multiplexing, which allows the Netscaler appliance to reuse one connection to the web server, is disabled when you use Source IP mode.
2: When the backend servers see the source IP, they will use their default routing table instead of returning the traffic to the Netscaler, so the servers will go via their local gateway instead of the Netscaler. When a backend server then tries to establish the TCP connection with the client, the client will drop the connection, since it is awaiting its response from the Netscaler VIP.
So if you use Source IP mode, you need to set the default GW on the backend servers to point to the NS.

You can enable USIP mode under Modes.
image
Configuration –> Settings –> Configure Modes –> Use Source IP
image
Alternatively, from the CLI: enable ns mode USIP
In the case of logging we have another choice: the HTTP header insertion option, which allows the Netscaler to inject the client's source IP into an HTTP header in the request, which in turn allows the logs on the web server to contain the IP address of the client.
But in general I would recommend that you don’t use USIP.
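
As a sketch, header insertion is enabled per service; the service name and header name below are just examples, so check the command reference for your build:

set service svc_web1 -CIP ENABLED X-Forwarded-For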

Now let's set up a load-balancing configuration.
Before we continue, remember that you need to set up at least three addresses on the NetScaler:
1: NSIP
2: VIP
3: SNIP or MIP

There are a few things we need to find out before we can set up LB: what kind of service do we need to load-balance, and which servers are hosting this service? We also need to set up a monitor towards that service; the monitor checks whether the service in the backend is responding on each server, and if one server is not responding for a particular service it is taken out of the LB queue. So we need:
1: Servers (the list of servers that have a particular service running)
2: Service (what kind of service is it? Web hosting on port 80?)
3: Monitors (checks if the service on the server is responding; if not, it is taken out of the LB queue until it starts responding again)
4: Virtual IP (a virtual IP address which the Netscaler will respond to)
All this is added together, and it creates a load-balanced service on a virtual IP address which consists of the servers in the server list.

So let's go ahead and create an LB service. First we add a VIP and a SNIP.
image
Go to the Configuration pane –> IPs and add an IP address. Remember that the VIP is the IP address that the end-users are going to connect to, while the SNIP is an IP which the Netscaler uses to connect to the servers in the backend.
After that, go to the Load Balancing pane further down below.
Go to Servers and add the servers that host the service.
(Remember that this is just a list of servers; you don't define the services here.)
image

After that, go to Monitors –>
As you can see, the HTTP monitor is enabled by default.
It sends an HTTP HEAD request, and if the service is working as it should you get a 200 response code.
You can see this by opening the HTTP monitor.
image
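
If you prefer the CLI, a custom monitor doing the same check would look roughly like this (the monitor name is just an example):

add lb monitor mon_http_head HTTP -httpRequest "HEAD /" -respCode 200
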
After that we add the service.
We add a service that runs on port 80 on one server and attach the HTTP monitor. (Remember to add this for both servers, and give each service on each server a descriptive name.)

image

Now that we have both services on both servers, it should look like this.
(In my case I don't have any hosts on these IP addresses yet, so they are shown as Down, because the monitor is trying to do HTTP requests against them.)

image

Now, at last, we will add the virtual server that will point to the HTTP service on these 2 servers in the backend. Go to Load Balancing and Virtual Servers –>
image

Remember to add both of the services on those servers. (If you wish to load-balance unevenly, for instance if one of the servers has more power, you can set the weight on that service to 2; then that server will take twice the load.)
You can also go to Method and Persistence to change how the service is load-balanced. By default it is set to "least connection": the server with the fewest connections will get the next connection, and this continues until they are even. You can also specify persistence (this defines whether a client should talk to the same server it spoke with earlier); the most typical choice here is cookie insert for web services. But we will leave it at the default.

image
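
From the CLI the equivalent would look roughly like this (the virtual server and service names are just examples):

bind lb vserver vip_web svc_web1 -weight 2
set lb vserver vip_web -lbMethod LEASTCONNECTION -persistenceType COOKIEINSERT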

Now I've added an HTTP server that actually responds to HTTP.
image

You can see that it responds to HTTP requests if I open a browser to IP 10.0.0.26.
And if you are like me and would like to do it via the CLI, you can do this:
Run the command add service <servicename> <serverIP> HTTP <port>

image

Next we need to add the services to a virtual server (that will do the load balancing).
First we do an add lb vserver <vservername> HTTP <VIP> 80,
then we bind the services to that virtual server:
bind lb vserver <vservername> <servicename>

image

After that you can run
sh lb vserver v1
to check whether the virtual server is up and load balancing is active.
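
Putting the whole CLI flow together, a minimal end-to-end sketch (the server names, service names and IPs here are invented for the example) would be:

add server web1 10.0.0.26
add server web2 10.0.0.27
add service svc_web1 web1 HTTP 80
add service svc_web2 web2 HTTP 80
add lb vserver v1 HTTP 10.0.0.100 80
bind lb vserver v1 svc_web1
bind lb vserver v1 svc_web2
save ns config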


 

Phuh! Long post. The next one will be about setting up a Netscaler cluster (since you always need 2 x Netscalers so you don't have a single point of failure) and integrating authentication with LDAP.
I would also recommend having a look at the command reference sheet from Citrix eDocs:
http://support.citrix.com/servlet/KbServlet/download/20679-102-665857/NS-CommandReference-Guide.pdf

#access-gateway, #adc, #citrix, #hwlb, #netscaler, #xenapp

Microsoft Private Cloud and Application Delivery Controllers

An important issue to address in a private cloud setup is HA («high availability»). There are multiple key components that make up a cloud service, and all of the core components need HA, because if one of the core components goes down, your cloud goes down.

The network must be designed properly in order to handle the traffic the cloud service will generate. For instance, if you have a big service like Facebook or LinkedIn you need a proper network design in place to be sure that the solution won't «kneel» on the first day because of the traffic (whether it is regular requests to the site or a DDoS attack).
And as a part of that design you need ADC.

Of course, when you connect to a public service like facebook.com you don't go directly to a web server. A typical deployment for a service (with HA) would look like this:
End-user ————–> Internet ———-> Firewall -> ADC -> Pool of web servers.

An ADC can be described as a next-generation load balancer.
They include features such as compression, caching, SSL offloading, content switching and load balancing. There are of course other options as well (some differ from product to product, but these are the common criteria for an ADC).

The largest ADC products in the market are F5 BIG-IP and Citrix Netscaler.
(According to Gartner 2010)

And many of the largest web companies in the world use Netscaler or BIG-IP ADCs.
Facebook and Bank of America use BIG-IP according to netcraft.com, and sites like Visa use Netscaler.

(Of course, if you wish to try out some of the features in these products, both of them offer virtual appliances that can be run within a hypervisor, with some limitations.)
F5 also has a nifty Flash demo that shows many of the features within an ADC and how they work -> http://www.f5.com/flash/product-demo/

But back to the cloud: when deploying new services in the cloud you can automate much of this with SCVMM 2012 out of the box.
* Automate the deployment of a new service.
* Installing the operating system / applying security updates on a virtual machine
* Installing the application or server roles (Terminal server / web server )
* Configure which users have access to the service, so on and so forth.

But of course this will only get you so far; if you have an ADC between your firewall (which is connected to the internet) and your infrastructure, you also need to make some settings on the ADC in order to deploy the service properly.

Microsoft has seen the value of working together with the ADC vendors, and because of this you can integrate your ADCs into SCVMM and with that fully automate your service deployment. As of today there are 3 «connectors» available.
BIG-IP -> https://devcentral.f5.com/tabid/1082224/Default.aspx?returnurl=%2fLinkClick.aspx%3flink%3dhttp%3a%2f%2fdevcentral.f5.com%2fdownloads%2fplugins%2fF5LoadBalancerPowerShellSetup-214-x64.zip%26tabid%3d73%26mid%3d3221

Citrix Netscaler -> http://community.citrix.com/display/ns/Citrix+NetScaler+LB+Provider+for+Microsoft+System+Center+Virtual+Machine+Manager+2012

Brocade -> http://www.brocade.com/partnerships/technology-alliance-partners/partner-details/microsoft/microsoft-systems-center/index.page

I'm going to walk through the deployment of the Netscaler connector within SCVMM 2012, and how you can further use this when creating templates.

First off, install the connector from the site. Click next, next and install.
1

After you have installed the connector you need to restart the Virtual Machine Manager service.
(Just do it from services.msc.)
Then it should appear under Configuration Providers.
2

Before we can use it, we need to add it as a load balancer.

3

From there you need to create a Run As account which has access to the Netscaler and is allowed to add LB rules.
image
Then you need to choose which host group this LB will be active for, then choose the manufacturer and model.

Then enter the IP address and port for the Netscaler device.
image

Now under Provider we check if the system has access to the device.
image

The system will try to perform basic functions on the device like
* Retrieve LBsysteminfo
* Open LBConnection
* Close LBConnection
* Retrieve LBknownVIP
* And so on..

Once that completes, you can finish the wizard. Now that the load balancer is in place and is configured correctly with access, we must create a VIP template.
A VIP template contains configuration settings for a hardware load balancer for a specific type of network traffic. For instance, you could create a template that specifies the load-balancing behavior for HTTPS traffic on a specific load balancer.

In this example we are going to create a VIP template for HTTPS traffic where the SSL connection is going to be terminated at the load balancer.

So give the template a name and define what the VIP port is going to be (since HTTPS runs over port 443, I enter that).
image

Next I choose what type of load-balancer I wish to use

image

Click next. Now we have to define which protocol we are going to load-balance, and whether we wish to terminate the HTTPS connection at the load balancer.
We also need to enter a certificate subject name here. For instance C=US,ST=WA,L=Redmond,O=Contoso,OU=Test,CN=www.contoso.com/emailAddress=contoso@contoso.com.
image

Click next.
Here we change the settings for persistence; for instance, if someone has the SSL session ID 12325345345 and has visited WEBSERV1 before, then the user will be routed back to that server.
image

Click next –>
Now we choose what kind of load-balancing method we are going to use. I'm going to stick with «Least Connections», since my web servers are equal in terms of hardware.

image

And last but not least, health monitors.
Health monitors are in place to check whether the servers in the back are actually alive and responding.
You can for instance add GET / in the request box and type 200 under response (which is the HTTP status code for OK), and the device will perform an HTTP GET on each server to see if it is alive and well.

image

Click next then finish!
After this is done you can use this template in any service template deployment (I will get back to that in a later post)

#adc, #big-ip, #citrix, #microsoft-cloud, #netscaler, #private-cloud, #scvmm