Now, since I work a lot with NetScaler and spend too much time on social media these days, I am bound to see another product that sparks my interest.
This is where AVI Networks popped up into my view. (Well, it was kind of hard not to notice it.)
So what do they do? They deliver an ADC (or even better, a Cloud Delivery Platform) which is software-only and aimed at next-generation services (containers, microservices), which look to be their main focus.
Their architecture is pretty simple: an AVI Controller handles the monitoring, analytics and management of the different Service Engines, which actually deliver all the load-balancing features. Using the Controller we define a load-balanced service, and the AVI Controller (if it has access) will deploy a Service Engine to serve that service to the end-users. Note that using the connectors or the CLI it is easy to automate the deployment of new services, for instance even from a development standpoint.
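To give an idea of what that automation could look like, here is a minimal sketch of driving the Controller's REST API from Python. The controller address, credentials and object names are placeholders I made up, and the exact fields may differ between AVI versions.

```python
# Hedged sketch: automating AVI configuration over the Controller's REST API.
# Controller address, credentials and object names below are placeholders.
import requests

CONTROLLER = "https://avi-controller.example.local"  # hypothetical address


def login(user: str, password: str) -> requests.Session:
    """Open an authenticated session; the controller hands back a session cookie."""
    s = requests.Session()
    s.verify = False  # lab controller with a self-signed certificate
    s.post(f"{CONTROLLER}/login", json={"username": user, "password": password})
    return s


def virtualservice_payload(name: str, pool_ref: str, port: int = 80) -> dict:
    """JSON body for a basic HTTP virtual service fronting an existing pool."""
    return {"name": name, "services": [{"port": port}], "pool_ref": pool_ref}


def create_virtualservice(s: requests.Session, payload: dict) -> requests.Response:
    """POST the virtual service definition to the controller."""
    return s.post(f"{CONTROLLER}/api/virtualservice", json=payload)
```

Point being: everything you click through in the UI below is an API object underneath, so a development pipeline could create services the same way.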
As of now they say "any cloud", but it is limited to VMware ESX, OpenStack and Amazon Web Services. Their product seemed interesting, so I decided to give it a try in our ESXi environment.
The setup is a simple OVA template which deploys the AVI Controller (it can be downloaded from http://kb.avinetworks.com/try/).
After the deployment is done you get to the main dashboard
Note that I can create a custom TCP profile with custom TCP parameters, and I can enable front-end optimization, caching and X-Forwarded-For rules under the application profile.
Now I need to create a server pool, which consists of the port, load-balancing rules and persistence, and can also use AutoScale rules.
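For reference, the same pool could be expressed as the JSON body the controller's `/api/pool` endpoint accepts. The server addresses are made up and the enum values are assumptions that may vary by AVI version.

```python
# Hedged sketch of a pool definition as an AVI API object.
# Addresses and enum strings are assumptions, not from the actual lab.
def pool_payload(name: str, members: list) -> dict:
    """Build a pool with least-connections balancing over IPv4 members.

    `members` is a list of (address, port) tuples.
    """
    return {
        "name": name,
        "lb_algorithm": "LB_ALGORITHM_LEAST_CONNECTIONS",
        "servers": [
            {"ip": {"addr": addr, "type": "V4"}, "port": port}
            for addr, port in members
        ],
    }


pool = pool_payload("web-pool", [("10.0.0.11", 80), ("10.0.0.12", 80)])
```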
After I have added my servers and defined the virtual network it should attach to, I can go ahead with the service creation. From here I can add HTTP rules.
Under Rules I can define different HTTP request policies to modify headers and so on.
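As a rough sketch, one such request rule might look like this as an API object: match requests under a path prefix and add a header before they reach the pool. The field names here are my approximation of the AVI object model and may not match your controller version exactly.

```python
# Hypothetical sketch of an HTTP request rule; field names are approximations.
def add_header_rule(index: int, path_prefix: str, header: str, value: str) -> dict:
    """Rule that matches a path prefix and inserts a request header."""
    return {
        "index": index,
        "enable": True,
        "name": f"add-{header}",
        "match": {
            "path": {"match_criteria": "BEGINS_WITH", "match_str": [path_prefix]}
        },
        "hdr_action": [
            {"action": "HTTP_ADD_HDR", "hdr": {"name": header, "value": value}}
        ],
    }


rule = add_header_rule(1, "/legacy", "X-Routed-By", "avi-se")
```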
Next I define the analytics part and activate real-time metrics. This insight is something that I think separates an ADC from a plain load balancer!
Then the advanced part, where I can define performance limits, weights and so on.
When I am done with the configuration I click Save, and then I get to this dashboard. Hello, gorgeous!
What is happening in the background now is that the AVI Controller is deploying a Service Engine OVA template to my ESX hosts, connected to my internal VM network. When the Service Engine is done deploying, the health score is set to 100.
Now when I start to generate some traffic against the VIP, I can see in real time what is going on and how long the application itself takes to respond.
Now this is valuable insight! I can see that my internal network is not the bottleneck, and neither is the client; it is the application itself that is spending too much time. I can also see how many connections there are and the amount of throughput being generated.
If I go into Security I can see whether there are any ongoing attacks and what level of security I have in my network. I need to dig into what kinds of attacks will be detected in this overview.
Just for the fun of it, I used LOIC to spam the VIP with HTTP GET requests to see if I could trigger something. It didn't, but when I looked into the log I could see that I get all the information I want from within the dashboard.
I can basically filter on anything I want. Now if I go back to the dashboard, I can see the flow between the Service Engine, the VIP and the server pool it is attached to.
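That log filtering can also be scripted. Here is a hedged sketch of querying the controller's analytics log endpoint for one virtual service; the endpoint path and parameter names are my assumptions about AVI's REST API, not something confirmed in this lab.

```python
# Hedged sketch: pulling application logs for one virtual service over the API.
# Endpoint path and parameter names are assumptions and may differ per version.
import requests


def log_query_params(vs_uuid: str, minutes: int = 30) -> dict:
    """Query parameters for recent application logs of one virtual service."""
    return {
        "virtualservice": vs_uuid,
        "duration": minutes * 60,  # look-back window in seconds
        "page_size": 50,
    }


def fetch_logs(session: requests.Session, controller: str, vs_uuid: str) -> list:
    """GET the analytics logs and return the result entries."""
    resp = session.get(
        f"{controller}/api/analytics/logs", params=log_query_params(vs_uuid)
    )
    return resp.json().get("results", [])
```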
Another cool feature is the ability to scale out or scale in if needed. Let us say we can see that the Service Engine is becoming a bottleneck; then we can just go into the service and choose scale-out.
When we go back to the dashboard, we can now see that we have two Service Engines servicing this VIP.
Now the cool thing is that we can set AVI to autoscale if needed. Let's say one of the Service Engines is becoming a bottleneck; this will trigger a CPU alert, which would then create another Service Engine (if the AVI Controller has write access to the virtual environment).
In terms of load balancing between multiple Service Engines, AVI uses GARP on the primary Service Engine, where most of the traffic will be processed. Excess traffic is then forwarded at layer 2 to the MAC address of the second SE; the second SE changes the source IP address of the connection and bypasses the primary SE on the way back to the client.
So far I like what I see; this is another approach to the traditional ADC delivery method where everything is in a single appliance. So stay tuned for more!