OVS VXLAN setup on Hyper-V without OpenStack
In the previous post we explained how to deploy Open vSwitch (OVS) on Hyper-V and integrate it into an OpenStack context. In this second part we’ll explain how to manually configure a VXLAN tunnel between VMs running on Hyper-V hosts and VMs running on KVM hosts.
KVM OVS configuration
In this example, KVM1 provides a VXLAN tunnel with local endpoint 10.13.10.30:
- vxlan-0a0d0a23 connected to Hyper-V (10.13.10.35)
ubuntu@ubuntu:~$ sudo ovs-vsctl show
c387faab-80cc-493f-ac78-1c8de0fe51ad
    Bridge br-int
        fail_mode: secure
        Port "qr-136f09f9-fb"
            tag: 1
            Interface "qr-136f09f9-fb"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "tapc17fbf14-28"
            tag: 1
            Interface "tapc17fbf14-28"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0a0d0a23"
            Interface "vxlan-0a0d0a23"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.13.10.30", out_key=flow, remote_ip="10.13.10.35"}
        Port br-tun
            Interface br-tun
                type: internal
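On this KVM node the configuration above comes from the OpenStack deployment used in part 1, so the tunnel port was presumably created by the Neutron OVS agent. Purely as a reference, an equivalent port could also be created by hand on a plain OVS setup with a command along these lines, reusing the names and IPs from the output above:

sudo ovs-vsctl add-port br-tun vxlan-0a0d0a23 -- set Interface vxlan-0a0d0a23 type=vxlan \
    options:local_ip=10.13.10.30 options:remote_ip=10.13.10.35 options:in_key=flow options:out_key=flow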
Hyper-V OVS configuration
Let’s start by creating a VXLAN tunnel. In our sample, the IP address assigned to the “vEthernet (external)” adapter is 10.13.10.35:
ovs-vsctl.exe add-port br0 vxlan-1
ovs-vsctl: Error detected while setting up 'vxlan-1'. See ovs-vswitchd log for details.
ovs-vsctl.exe set Interface vxlan-1 type=vxlan
ovs-vsctl.exe set Interface vxlan-1 options:local_ip=10.13.10.35
ovs-vsctl.exe set Interface vxlan-1 options:remote_ip=10.13.10.30
ovs-vsctl.exe set Interface vxlan-1 options:in_key=flow
ovs-vsctl.exe set Interface vxlan-1 options:out_key=flow
Note: the error can be ignored; we are implementing a new event-based mechanism and this error will disappear.
As you can see, all the commands are very familiar if you are used to OVS on Linux.
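If you want to double-check the tunnel configuration, the interface record can be inspected exactly as on Linux (shown here only as an optional sanity check):

ovs-vsctl.exe list Interface vxlan-1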
As introduced before, the main area where the Hyper-V implementation differs from its Linux counterpart is how virtual machines are attached to a given OVS port. This is easily accomplished with the Set-VMNetworkAdapterOVSPort PowerShell cmdlet provided with the installer (please refer to part 1 for details on installing OVS).
Let’s say that we have a Hyper-V virtual machine called VM2 and that we want to connect it to the Hyper-V OVS switch. All we have to do for each VM network adapter is to connect it to the external switch as you would normally do, assign it to a given OVS port and create the corresponding ports in OVS:
$vnic = Get-VMNetworkAdapter instance-00000005
Connect-VMNetworkAdapter -VMNetworkAdapter $vnic -SwitchName external
$vnic | Set-VMNetworkAdapterOVSPort -OVSPortName vm2
ovs-vsctl.exe add-port br0 vm2 tag=1
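If several VMs need to be attached, the same three steps can be scripted. Here is a minimal PowerShell sketch; the VM names and OVS port names in the hash table are placeholders to be replaced with your own:

# Hypothetical mapping between Hyper-V VM names and OVS port names
$vmPorts = @{ "instance-00000005" = "vm2"; "instance-00000006" = "vm3" }

foreach ($vmName in $vmPorts.Keys) {
    $portName = $vmPorts[$vmName]
    # Connect the VM adapter to the external vSwitch and bind it to the OVS port name
    $vnic = Get-VMNetworkAdapter $vmName
    Connect-VMNetworkAdapter -VMNetworkAdapter $vnic -SwitchName external
    $vnic | Set-VMNetworkAdapterOVSPort -OVSPortName $portName
    # Create the corresponding port in OVS, tagged with VLAN 1
    ovs-vsctl.exe add-port br0 $portName tag=1
}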
Here’s what the resulting OVS configuration looks like on Hyper-V:
PS C:\Users\Administrator> ovs-vsctl.exe show
01ee44a6-9fac-461a-a8c1-da77a09fae69
    Bridge br-int
        fail_mode: secure
        Port "adb134bf-5312-4323-b574-d206c3cef740"
            tag: 1
            Interface "adb134bf-5312-4323-b574-d206c3cef740"
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "vm2"
            tag: 1
            Interface "vm2"
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-1"
            Interface "vxlan-1"
                type: vxlan
                options: {in_key=flow, local_ip="10.13.10.35", out_key=flow, remote_ip="10.13.10.30"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port internal
            Interface internal
        Port "external.1"
            Interface "external.1"
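Note that br-int, br-tun and the patch ports connecting them were already present on this host from the part 1 setup. If you are starting from a clean OVS installation, a similar layout can be created manually, for example:

ovs-vsctl.exe add-br br-int
ovs-vsctl.exe add-br br-tun
ovs-vsctl.exe add-port br-int patch-tun -- set Interface patch-tun type=patch options:peer=patch-int
ovs-vsctl.exe add-port br-tun patch-int -- set Interface patch-int type=patch options:peer=patch-tun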
Further control can be achieved by applying flow rules, for example to define which ports / virtual machines can communicate over each VXLAN tunnel.
Here are, for example, the flows on br-tun that can be used to enable communication using the VLAN tag “1”:
PS C:\Users\Administrator> ovs-ofctl.exe dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=1457.706s, table=0, n_packets=5710, n_bytes=1253743, idle_age=0, priority=1,in_port=3 actions=output:2
 cookie=0x0, duration=1457.687s, table=0, n_packets=5909, n_bytes=1215935, idle_age=0, priority=1,in_port=2 actions=output:3
 cookie=0x0, duration=1457.651s, table=0, n_packets=1393, n_bytes=129330, idle_age=0, priority=1,in_port=1 actions=resubmit(,2)
 cookie=0x0, duration=1451.280s, table=0, n_packets=1332, n_bytes=126624, idle_age=1, priority=1,in_port=8 actions=resubmit(,4)
 cookie=0x0, duration=1457.634s, table=0, n_packets=0, n_bytes=0, idle_age=1457, priority=0 actions=drop
 cookie=0x0, duration=1457.609s, table=2, n_packets=1327, n_bytes=125768, idle_age=0, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0x0, duration=1457.595s, table=2, n_packets=66, n_bytes=3562, idle_age=1187, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
 cookie=0x0, duration=1457.557s, table=3, n_packets=0, n_bytes=0, idle_age=1457, priority=0 actions=drop
 cookie=0x0, duration=1275.744s, table=4, n_packets=1332, n_bytes=126624, idle_age=1, priority=1,tun_id=0x410 actions=mod_vlan_vid:1,resubmit(,10)
 cookie=0x0, duration=1457.540s, table=4, n_packets=0, n_bytes=0, idle_age=1457, priority=0 actions=drop
 cookie=0x0, duration=1457.513s, table=10, n_packets=1332, n_bytes=126624, idle_age=1, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0x0, duration=1248.963s, table=20, n_packets=1321, n_bytes=125258, hard_timeout=300, idle_age=0, hard_age=0, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:86:b7:98 actions=load:0->NXM_OF_VLAN_TCI[],load:0x410->NXM_NX_TUN_ID[],output:8
 cookie=0x0, duration=1457.497s, table=20, n_packets=0, n_bytes=0, idle_age=1457, priority=0 actions=resubmit(,22)
 cookie=0x0, duration=1275.771s, table=22, n_packets=12, n_bytes=1294, idle_age=1187, dl_vlan=1 actions=strip_vlan,set_tunnel:0x410,output:8
 cookie=0x0, duration=1457.455s, table=22, n_packets=54, n_bytes=2268, idle_age=1280, priority=0 actions=drop
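In this example most of the flows mirror the pipeline built by the Neutron OVS agent, but equivalent rules can be added by hand with ovs-ofctl. As a sketch, the two rules that map the tunnel key 0x410 to the local VLAN tag 1 (and back) could be added as follows; note that the output port number (8, i.e. vxlan-1 on this br-tun) will differ on your host:

ovs-ofctl.exe add-flow br-tun "table=4,priority=1,tun_id=0x410,actions=mod_vlan_vid:1,resubmit(,10)"
ovs-ofctl.exe add-flow br-tun "table=22,dl_vlan=1,actions=strip_vlan,set_tunnel:0x410,output:8"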
OVS-based networking is now fully functional between KVM and Hyper-V hosted virtual machines!