Software-defined networking difference between VXLAN and NVGRE

Being quite new to software-defined networking and all the different network virtualization technologies out there, I thought I would write a summary of the largest vendors in this market: what differentiates them (from a protocol perspective), and why on earth would we use them?

First off, network virtualization is not new; it has been around for a long time, ever since we started with compute virtualization and needed some sort of networking capabilities. But extending those capabilities required something more. We started out with

  • Virtual Network adapters and dummy switches

And then we moved along into more cool stuff like

  • Virtual VLANs
  • Virtually managed L2 switches
  • Firewall and load balancing capabilities
  • Virtual routing capabilities and virtual routing tables

In later years came VXLAN and NVGRE (two different tunneling protocols), which were primarily aimed at the scalability issues of large cloud computing platforms: the problems with STP and its large number of disabled links, VLAN exhaustion, overlapping IP address segments, and the idea that network management should be part of the virtualization layer, not separate from it.

VXLAN

VXLAN (used in VMware NSX) is in essence a tunneling protocol which wraps layer 2 frames inside a layer 3 network. The network is split into different segments, and only VMs within the same VXLAN segment can communicate with each other. Each segment has its own 24-bit segment ID, the VXLAN Network Identifier (VNI). VXLAN uses IP multicast to deliver broadcast, multicast and unknown-destination frames to all access switches participating in a given VXLAN.
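To make the multicast part concrete, here is a minimal sketch (my own illustration; the 239.1.0.0 base address is an assumption, not part of the standard) of how a VTEP might map each VNI to a multicast group for flooded traffic:

```python
# Sketch: map a 24-bit VXLAN segment ID (VNI) to an IP multicast group, so
# broadcast/multicast/unknown-unicast frames reach every host in the segment.
# The 239.1.0.0 base is an example value, not mandated anywhere; several VNIs
# may end up sharing a group, and receivers then filter on the VNI in the header.
def vni_to_mcast_group(vni: int, base: str = "239.1.0.0") -> str:
    a, b, _, _ = (int(x) for x in base.split("."))
    return f"{a}.{b}.{(vni >> 8) & 0xFF}.{vni & 0xFF}"

print(vni_to_mcast_group(5000))   # -> 239.1.19.136
```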

In a traditional VLAN, the packet simply carries its 802.1Q VLAN tag inside the Ethernet header.

Using VXLAN, we instead wrap the whole Ethernet frame within a UDP packet: innermost is the original Ethernet header and payload, then a VXLAN header, and outside that an outer UDP header, IP header and Ethernet header.

So using VXLAN adds another 50 bytes of overhead for the protocol, which in essence means a full-size frame will exceed the standard MTU of 1500. There is a tech paper from VMware which states that the MTU should be adjusted to 1600, but you should rather consider jumbo frames: http://www.vmware.com/files/pdf/techpaper/VMware-vSphere-VXLAN-Perf.pdf
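To make the 50 bytes concrete, here is a minimal sketch (my own illustration, following the header layout in RFC 7348; the VNI value is just an example) that builds the 8-byte VXLAN header and adds up the encapsulation overhead:

```python
# Minimal sketch of VXLAN encapsulation overhead (header layout per RFC 7348).
import struct

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header: flags byte (I bit set), 24 reserved bits,
    24-bit VNI, 8 reserved bits."""
    flags = 0x08                               # I flag = 1 -> VNI field is valid
    return struct.pack("!II", flags << 24, vni << 8)

# Outer headers added in front of the original (inner) Ethernet frame:
OUTER_ETH, OUTER_IPV4, OUTER_UDP, VXLAN = 14, 20, 8, 8
print(OUTER_ETH + OUTER_IPV4 + OUTER_UDP + VXLAN)   # -> 50 bytes of overhead
# A full-size inner frame therefore no longer fits in a 1500-byte transport
# MTU, hence the advice to raise it to 1600 or use jumbo frames.
```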

So it adds more overhead, and every packet needs to be unwrapped from the VXLAN encapsulation before being delivered to the destination VM. This is also an issue when sending small packets: Telnet/SSH, for instance, transmits a packet for each keystroke, so every one of those packets carries the full encapsulation overhead, even though this is not a very common workload.

In order to allow communication between a VXLAN-enabled host and a non-VXLAN-enabled host, you need a VXLAN-capable device in between which acts as a gateway.

A nice thing about VXLAN is that more and more devices are shipping with VXLAN support, so using VXLAN in our cloud infrastructure we can define access and management from the virtualization layer and move all VXLAN traffic over just one transport VLAN.

NVGRE

NVGRE, on the other hand, is a tunneling protocol that Microsoft is pushing. It uses GRE to tunnel L2 packets across an IP fabric, and it uses 24 bits of the GRE Key field to identify the network ID (the Virtual Subnet ID, VSID).
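For comparison with the VXLAN header above, here is a minimal sketch (my own illustration, following RFC 7637; the VSID value is just an example) of the GRE header NVGRE uses:

```python
# Minimal sketch of the GRE header as NVGRE uses it (layout per RFC 7637).
import struct

def nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """8-byte GRE header: Key Present bit set, protocol type 0x6558
    (Transparent Ethernet Bridging), and the Key field carrying a
    24-bit VSID plus an 8-bit FlowID for per-flow entropy."""
    flags = 0x2000                       # K (Key Present) bit
    proto = 0x6558                       # payload is an Ethernet frame
    key = (vsid << 8) | flow_id          # VSID (24 bits) | FlowID (8 bits)
    return struct.pack("!HHI", flags, proto, key)

print(nvgre_header(vsid=6001).hex())     # -> 2000655800177100
```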

The positive thing about using GRE is that much existing hardware already has full support for GRE (hence switching and NIC offloading work today), but on the other hand, wrapping L2 packets within a GRE layer does not let regular middleboxes like firewalls or load balancers "see" into the packets the way they can with UDP. The load balancers/firewalls would therefore need to act as a gateway and remove the GRE wrapper in order to do packet inspection.
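A related advantage of the UDP wrapper is that devices which only understand ordinary L3/L4 headers still get something useful to hash on: RFC 7348 recommends deriving the outer UDP source port from the inner flow, so ECMP can spread tunneled traffic across links. A minimal sketch of that idea (my own illustration; the hash and port range are example choices):

```python
# Sketch: derive a VXLAN outer UDP source port from the inner flow, as
# RFC 7348 recommends, so ECMP routers can load-balance tunneled flows
# without understanding VXLAN itself.
import zlib

def outer_udp_src_port(inner_src_mac: str, inner_dst_mac: str) -> int:
    flow = f"{inner_src_mac}-{inner_dst_mac}".encode()
    return 49152 + (zlib.crc32(flow) % 16384)   # keep it in the dynamic range

print(outer_udp_src_port("00:15:5d:01:02:03", "00:15:5d:0a:0b:0c"))
```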

Windows Server 2016 TP4, for instance, includes its own load balancing and firewall capabilities to be able to do this without unwrapping the packets. Here are some of the features included in TP4:

Network Function Virtualization (NFV). In today’s software defined datacenters, network functions that are being performed by hardware appliances (such as load balancers, firewalls, routers, switches, and so on) are increasingly being deployed as virtual appliances. This “network function virtualization” is a natural progression of server virtualization and network virtualization. Virtual appliances are quickly emerging and creating a brand new market. They continue to generate interest and gain momentum in both virtualization platforms and cloud services. The following NFV technologies are available in Windows Server 2016 Technical Preview.

  • Software Load Balancer (SLB) and Network Address Translation (NAT). The north-south and east-west layer 4 load balancer and NAT enhances throughput by supporting Direct Server Return, with which the return network traffic can bypass the Load Balancing multiplexer.

  • Datacenter Firewall. This distributed firewall provides granular access control lists (ACLs), enabling you to apply firewall policies at the VM interface level or at the subnet level.

  • RAS Gateway. You can use RAS Gateway for routing traffic between virtual networks and physical networks; specifically, you can deploy site-to-site IPsec or Generic Routing Encapsulation (GRE) VPN gateways and forwarding gateways. In addition, M+N redundancy of gateways is supported, and Border Gateway Protocol (BGP) provides dynamic routing between networks for all gateway scenarios (site-to-site, GRE, and forwarding).

The future

It might be that both of these protocols will be replaced by another tunneling protocol called Geneve, which is a joint effort by Intel, VMware, Microsoft and Red Hat –> http://tools.ietf.org/html/draft-gross-geneve-00#ref-I-D.ietf-nvo3-dataplane-requirements and which to my eyes looks a lot like VXLAN, with the same UDP wrapping approach.
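A minimal sketch of the Geneve base header as described in the draft (my own illustration; the VNI value is just an example) shows how close the resemblance is: the same 24-bit identifier and UDP outer wrapper as VXLAN, plus a variable-length options field for extensibility:

```python
# Sketch of the 8-byte Geneve base header (per draft-gross-geneve); like
# VXLAN it rides inside UDP, and optional TLV options may follow it.
import struct

def geneve_header(vni: int, opt_len_words: int = 0) -> bytes:
    first = (0 << 14) | (opt_len_words << 8)   # ver(2) | opt len(6) | O,C,rsvd
    proto = 0x6558                             # payload is an Ethernet frame
    return struct.pack("!HHI", first, proto, vni << 8)   # vni(24) | rsvd(8)

print(geneve_header(5000).hex())               # -> 0000655800138800
```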

Either way, whichever tunneling protocol is used needs to be properly adopted by the management layer and integrated with the compute virtualization layer, to ensure that traffic policies and security management are in place.

#nvgre, #vxlan

Securing Hyper-V 2012R2 hosts and VMs

Microsoft has implemented a lot of new cool security features in the Hyper-V 2012 R2 release, most importantly stateful firewall and network inspection features.

From the 2012 release, Microsoft introduced features like
* ARP Guard https://msandbu.wordpress.com/2013/04/03/arp-guard-in-hyper-v-2012/
* DHCP Guard
* Router Guard
(These three functions are also included in regular network devices from most vendors)


Bandwidth control is also useful, for instance for limiting DDoS attacks.
* BitLocker with Network Unlock (to protect a VM from theft)
* NVGRE (network virtualization, which is not a security feature in itself, but it can be used to give each customer its own network segment without the use of VLANs; this adds security since, for instance, VLAN hopping is not possible)
* PVLAN (in many cases VLANs still have their purpose; you can define three types of PVLANs: Isolated, Promiscuous and Community)
* VM stateless firewalls (not on the individual VM, but on the Hyper-V traffic going to the VMs). These had pretty limited functionality (restricted to IP ACLs; you couldn't define a port or match TCP established state)
* BitLocker for CSV (encrypt everything in a cluster)

So what other security mechanisms has Microsoft implemented in the OS stack with the new R2 release?

Not much info here yet, but they are mostly related to Hyper-V networking rules and the new generation 2 VMs with UEFI boot options (UEFI enables Secure Boot, which makes it harder for rootkits to get installed).

What else can you do to secure your hosts and VMs running on Hyper-V?

Microsoft includes a built-in baseline configuration scan that you can start from Server Manager. It has a set of rules it uses to check whether your hosts are configured according to best practice, and it offers tips on what you should do.


Microsoft also offers other tools that can be used to deploy security settings according to best practice (using Group Policy for deployment), for instance Security Compliance Manager: http://www.microsoft.com/en-us/download/details.aspx?displayLang=en&id=16776


Installing all Hyper-V hosts as Server Core will also limit the attack surface on the hosts, since it does not install unnecessary components like Internet Explorer, the full .NET Framework and so on, which leaves the host less open to attack. (Also be careful with RDP; there have been many security holes here which attackers have taken advantage of, so if you need to enable RDP, use NLA as well.)

Monitoring / Antivirus and Patching

Integration with System Center can also prove quite useful for many reasons, offering features like
* Anti-malware / Anti-virus (Configuration Manager)
* Patch management (Virtual Machine Manager / Configuration Manager)
* Baselining and remediation (Configuration Manager / Virtual Machine Manager)
* Monitoring (Operations Manager)

But this will require a number of agents to be installed on all VMs, for instance Configuration Manager with Endpoint Protection and Operations Manager (plus the VMM agent on the Hyper-V hosts).
(NOTE: You can enable baseline configuration in Operations Manager as well, instead of using Server Manager, and with the System Center Advisor integration you will get even more insight.)


Microsoft recommends that the parent partition be kept as clean as possible, and therefore recommends not installing antivirus on the Hyper-V hosts (you will also suffer some performance loss). But if it is part of company policy and you do install endpoint protection on Hyper-V hosts, remember to add exclusions for these folders:
%PROGRAMDATA%\Microsoft\Windows\Hyper-V
C:\ClusterStorage
You can read more about it here –> http://social.technet.microsoft.com/wiki/contents/articles/2179.hyper-v-anti-virus-exclusions-for-hyper-v-hosts.aspx

Regarding firewalls: each host running Windows has Windows Firewall enabled by default, so should we use Hyper-V port ACLs as well?
Hyper-V port ACLs follow the virtual machine, so if you move it to another host the ACL sticks. But the two have different features.
The built-in Windows firewall can allow applications to communicate without being restricted to a port or protocol, and it can also use IPsec.
A Hyper-V port ACL, on the other hand, can be stateful in 2012 R2, and it can also measure the bandwidth of the traffic that goes through it.
For most cases you should use the built-in firewall (create Group Policies for the most common server roles), and in the more extreme cases where you need to lock things down further and control the traffic flow, deploy a Hyper-V port ACL as well.

You should also move your management traffic to a dedicated NIC, separated from other traffic, so it is not as easy to "sniff" it.

RBAC (Role-Based Access Control): an easy rule of thumb is to split user rights wherever you can.
For instance, a Hyper-V administrator should not have admin rights on the VMs, and vice versa.
If you are using SCVMM you should create custom user roles (for instance, you can define a user role for a group which can be used to administer their own hosts (under a specific host group) and which has access to certain Run As accounts).


Sysinternals tools should also be used when evaluating your security, for instance TCPView to see if there are any open ports that shouldn't be open:
http://technet.microsoft.com/en-US/sysinternals
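If you want to script a quick check of the same thing, here is a minimal sketch (my own illustration, not TCPView itself; the host name and port list are assumptions you would adjust for your environment) that probes a few well-known ports on a host:

```python
# Sketch: probe a few well-known TCP ports on a Hyper-V host and report
# which accept connections, so you can compare against what you expect.
import socket

HOST = "hyperv01.contoso.local"           # hypothetical host name
PORTS = [135, 445, 2179, 3389, 5985]      # RPC, SMB, VMConnect, RDP, WinRM

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        state = "open" if s.connect_ex((HOST, port)) == 0 else "closed/filtered"
        print(f"{HOST}:{port} {state}")
```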

Make sure that your internal network is configured as it should be:
disable CDP on access ports (if you are using Cisco), and
configure all server-facing ports as access ports with PortFast (and BPDU Guard) so you can't be hijacked by STP attacks.


Other resources:
http://www.microsoft.com/en-us/download/details.aspx?id=16650 This is an old security guide from Microsoft, but a lot of it still applies today.

I might also mention that there are some third-party solutions that you can use to secure Hyper-V:

5-Nine –> http://www.5nine.com/
Watchguard –> http://www.watchguard.com

#arp-guard, #hyper-v, #nvgre, #router-guard, #security, #statefull-firewalls, #watchguard, #windows-server-2012-r2