We are happy to announce the availability of Open vSwitch 2.5 (OVS) for Microsoft Hyper-V Server 2012, 2012 R2 and 2016 (technical preview) thanks to the joint effort of Cloudbase Solutions, VMware and the rest of the Open vSwitch community.
The OVS 2.5 release includes the Open vSwitch CLI tools and services (e.g. ovsdb-server, ovs-vswitchd, ovs-vsctl, ovs-ofctl, etc.), and an updated version of the OVS Hyper-V virtual switch forwarding extension, providing fully interoperable GRE, VXLAN and STT encapsulation between Hyper-V and Linux, including KVM based virtual machines.
As usual, we also released an MSI installer that sets up the Windows services for the ovsdb-server and ovs-vswitchd daemons, along with all the required binaries and configuration files.
All the Open vSwitch code is available as open source here:
https://github.com/openvswitch/ovs/tree/branch-2.5
https://github.com/cloudbase/ovs/tree/branch-2.5-cloudbase
Supported Windows operating systems:
- Windows Server and Hyper-V Server 2012 and 2012 R2.
- Windows Server and Hyper-V Server 2016 (technical preview).
- Windows 8, 8.1 and 10.
Installing Open vSwitch on Hyper-V
The entire installation process is seamless. Download our installer and run it. You will be welcomed by the following screen:
Click “Next”, accept the license, click “Next” again and you’ll have the option to install both the Hyper-V virtual switch extension driver and the command line tools. If you want to install only the command line tools (e.g. to connect to a remote Linux or Windows OVS server), just deselect the driver option.
Click “Next” followed by “Install” and the installation will start. You will have to confirm that you want to install the signed kernel driver; the process completes in a matter of seconds, generating an Open vSwitch database and starting the ovsdb-server and ovs-vswitchd services.
The installer also adds the command line tools folder to the system path; the updated path becomes available at the next logon or in any newly opened shell.
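To sanity-check the installation from a new PowerShell session, something along the following lines can be used. The service names are an assumption based on the daemon names mentioned above:

# Confirm the services created by the installer are running
# (service names assumed to match the daemon names)
PS C:\package> Get-Service ovsdb-server, ovs-vswitchd

# Confirm the CLI tools are reachable through the updated path
PS C:\package> Get-Command ovs-vsctl.exe
PS C:\package> ovs-vsctl.exe --version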
Unattended installation
Fully unattended installation is also available (provided that you have already accepted/imported our certificate). This makes it easy to deploy Open vSwitch with Windows GPOs, Puppet, Chef, SaltStack, DSC or any other automated deployment solution:
msiexec /i openvswitch-hyperv-2.5.0.msi /l*v log.txt
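If our signing certificate has not been accepted or imported on the target host yet, one way to pre-trust it (so that the driver installation does not prompt for confirmation) is to add it to the local machine's TrustedPublisher store beforehand; the certificate file name below is just a placeholder:

REM Import the publisher certificate before the silent install (file name is a placeholder)
C:\package>certutil -addstore TrustedPublisher openvswitch_driver_certificate.cer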
Configuring Open vSwitch on Windows
Let us assume that we have the following environment: a host with four Ethernet adapters, one of which we will bind a Hyper-V virtual switch to.
The list of adapters:
PS C:\package> Get-NetAdapter

Name      InterfaceDescription                     ifIndex Status    MacAddress         LinkSpeed
----      --------------------                     ------- ------    ----------         ---------
port3     Intel(R) 82574L Gigabit Network Co...#3       26 Up        00-0C-29-40-8B-EA     1 Gbps
nat       Intel(R) 82574L Gigabit Network Co...#4       27 Up        00-0C-29-40-8B-E0     1 Gbps
port2     Intel(R) 82574L Gigabit Network Co...#2       18 Up        00-0C-29-40-8B-D6     1 Gbps
port1     Intel(R) 82574L Gigabit Network Conn...       17 Up        00-0C-29-40-8B-CC     1 Gbps
Create a Hyper-V external virtual switch with the AllowManagementOS flag set to false.
For example:
PS C:\package> New-VMSwitch -Name vSwitch -NetAdapterName port1 -AllowManagementOS $false

Name    SwitchType NetAdapterInterfaceDescription
----    ---------- ------------------------------
vSwitch External   Intel(R) 82574L Gigabit Network Connection
To verify that the extension has been installed on our system:
PS C:\package> Get-VMSwitchExtension -VMSwitchName vSwitch -Name "Cloudbase Open vSwitch Extension"

Id                  : 583CC151-73EC-4A6A-8B47-578297AD7623
Name                : Cloudbase Open vSwitch Extension
Vendor              : Cloudbase Solutions SRL
Version             : 13.43.16.16
ExtensionType       : Forwarding
ParentExtensionId   :
ParentExtensionName :
SwitchId            : 5844f4dd-b3d7-496c-81cb-481a64fa7f58
SwitchName          : vSwitch
Enabled             : False
Running             : False
ComputerName        : HYPERV_NORMAL_1
Key                 :
IsDeleted           : False
We can now enable the OVS extension on the vSwitch virtual switch:
PS C:\package> Enable-VMSwitchExtension -VMSwitchName vSwitch -Name "Cloudbase Open vSwitch Extension"

Id                  : 583CC151-73EC-4A6A-8B47-578297AD7623
Name                : Cloudbase Open vSwitch Extension
Vendor              : Cloudbase Solutions SRL
Version             : 13.43.16.16
ExtensionType       : Forwarding
ParentExtensionId   :
ParentExtensionName :
SwitchId            : 5844f4dd-b3d7-496c-81cb-481a64fa7f58
SwitchName          : vSwitch
Enabled             : True
Running             : True
ComputerName        : HYPERV_NORMAL_1
Key                 :
IsDeleted           : False
Please note that when you enable the extension, the virtual switch will stop forwarding traffic until it is configured, i.e. until the Ethernet adapter is added to an OVS bridge:
PS C:\package> ovs-vsctl.exe add-br br-port1
PS C:\package> ovs-vsctl.exe add-port br-port1 port1
Let us talk in more detail about the two commands issued above.
The first command:
PS C:\package> ovs-vsctl.exe add-br br-port1
will add a new adapter on the host, which is disabled by default:
PS C:\package> Get-NetAdapter

Name      InterfaceDescription                     ifIndex Status    MacAddress         LinkSpeed
----      --------------------                     ------- ------    ----------         ---------
br-port1  Hyper-V Virtual Ethernet Adapter #2           47 Disabled  00-15-5D-00-62-79    10 Gbps
port3     Intel(R) 82574L Gigabit Network Co...#3       26 Up        00-0C-29-40-8B-EA     1 Gbps
nat       Intel(R) 82574L Gigabit Network Co...#4       27 Up        00-0C-29-40-8B-E0     1 Gbps
port2     Intel(R) 82574L Gigabit Network Co...#2       18 Up        00-0C-29-40-8B-D6     1 Gbps
port1     Intel(R) 82574L Gigabit Network Conn...       17 Up        00-0C-29-40-8B-CC     1 Gbps
This adapter can be used as an IP-able device:
PS C:\package> Enable-NetAdapter br-port1
PS C:\package> New-NetIPAddress -IPAddress 14.14.14.2 -InterfaceAlias br-port1 -PrefixLength 24

IPAddress         : 14.14.14.2
InterfaceIndex    : 47
InterfaceAlias    : br-port1
AddressFamily     : IPv4
Type              : Unicast
PrefixLength      : 24
PrefixOrigin      : Manual
SuffixOrigin      : Manual
AddressState      : Tentative
ValidLifetime     : Infinite ([TimeSpan]::MaxValue)
PreferredLifetime : Infinite ([TimeSpan]::MaxValue)
SkipAsSource      : False
PolicyStore       : ActiveStore

IPAddress         : 14.14.14.2
InterfaceIndex    : 47
InterfaceAlias    : br-port1
AddressFamily     : IPv4
Type              : Unicast
PrefixLength      : 24
PrefixOrigin      : Manual
SuffixOrigin      : Manual
AddressState      : Invalid
ValidLifetime     : Infinite ([TimeSpan]::MaxValue)
PreferredLifetime : Infinite ([TimeSpan]::MaxValue)
SkipAsSource      : False
PolicyStore       : PersistentStore
The second command:
PS C:\package> ovs-vsctl.exe add-port br-port1 port1
will allow the bridge to use the actual physical NIC on which the Hyper-V vSwitch was created (port1).
Users coming from Linux will find the setup above familiar, since it is similar to configuring a Linux bridge.
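At this point, ovs-vsctl.exe show should report the bridge with both ports attached, along these lines (database UUID header and any other bridges omitted for brevity):

PS C:\package> ovs-vsctl.exe show
    Bridge "br-port1"
        Port "port1"
            Interface "port1"
        Port "br-port1"
            Interface "br-port1"
                type: internal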
Limitations
- We currently support a single Hyper-V virtual switch in our forwarding extension.
- Support for bonding multiple host NICs with LACP is experimental in this release.
OpenStack Integration with Open vSwitch on Windows
OpenStack is a very common use case for Open vSwitch on Hyper-V. The following example is based on a DevStack Mitaka All-in-One deployment on Ubuntu 14.04 LTS with a Hyper-V compute node, but the concepts and the following steps apply to any OpenStack deployment.
Let us install our DevStack node. Here is a sample local.conf configuration:
ubuntu@ubuntu:~/devstack$ cat local.conf
[[local|localrc]]

# Set this to your management IP
HOST_IP=14.14.14.1
FORCE=yes

# Services to be started
disable_service n-net
enable_service rabbit mysql
enable_service key
enable_service n-api n-crt n-obj n-cond n-sch n-cauth n-cpu
enable_service neutron q-svc q-agt q-dhcp q-l3 q-meta q-fwaas q-lbaas
enable_service horizon
enable_service g-api g-reg
disable_service heat h-api h-api-cfn h-api-cw h-eng
disable_service cinder c-api c-vol c-sch
disable_service tempest

ENABLE_TENANT_TUNNELS=False
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,hyperv
OVS_ENABLE_TUNNELING=True

LIBVIRT_TYPE=kvm
API_RATE_LIMIT=False

DATABASE_PASSWORD=Passw0rd
RABBIT_PASSWORD=Passw0rd
SERVICE_TOKEN=Passw0rd
SERVICE_PASSWORD=Passw0rd
ADMIN_PASSWORD=Passw0rd

SCREEN_LOGDIR=$DEST/logs/screen
LOGFILE=$DEST/logs/stack.sh.log
VERBOSE=True
LOGDAYS=2

RECLONE=no

KEYSTONE_BRANCH=stable/mitaka
NOVA_BRANCH=stable/mitaka
NEUTRON_BRANCH=stable/mitaka
SWIFT_BRANCH=stable/mitaka
GLANCE_BRANCH=stable/mitaka
CINDER_BRANCH=stable/mitaka
HEAT_BRANCH=stable/mitaka
TROVE_BRANCH=stable/mitaka
HORIZON_BRANCH=stable/mitaka

[[post-config|$NEUTRON_CONF]]
[database]
min_pool_size = 5
max_pool_size = 50
max_overflow = 50
Networking:
ubuntu@ubuntu:~/devstack$ ifconfig eth3
eth3      Link encap:Ethernet  HWaddr 00:0c:29:25:db:8c
          inet addr:14.14.14.1  Bcast:14.14.14.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe25:db8c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2209 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1007 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:336185 (336.1 KB)  TX bytes:153402 (153.4 KB)
After DevStack finishes installing, we can add some Hyper-V VHD or VHDX images to Glance, for example our Windows Server 2012 R2 evaluation image. Additionally, since we are using VXLAN, the default guest MTU should be set to 1450. This can be done via a DHCP option if the guest supports it, as described here.
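A minimal sketch of the DHCP-based approach on the DevStack node, assuming the standard Neutron DHCP agent configuration paths, is to point the DHCP agent at a custom dnsmasq configuration file and advertise the MTU via DHCP option 26:

# /etc/neutron/dhcp_agent.ini (assumed default path on the DevStack node)
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# /etc/neutron/dnsmasq-neutron.conf
# Advertise an MTU of 1450 to the guests via DHCP option 26
dhcp-option-force=26,1450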
Now let us move to the Hyper-V node. First we have to download the latest OpenStack compute installer:
PS C:\package> Start-BitsTransfer https://cloudbase.it/downloads/HyperVNovaCompute_Mitaka_13_0_0.msi
Full steps on how to install and configure OpenStack on Hyper-V are available here: OpenStack on Windows installation.
In our example, the Hyper-V node will use the following adapter to connect to the OpenStack environment:
Ethernet adapter br-port1:

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::9c1a:f185:bb09:62e2%47
   IPv4 Address. . . . . . . . . . . : 14.14.14.2
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . :
This is the internal adapter bound to the vSwitch virtual switch, as created during the previous steps (ovs-vsctl add-br br-port1).
We can now verify our deployment by taking a look at the Nova services and Neutron agents status in the OpenStack controller and ensuring that they are up and running:
ubuntu@ubuntu:~/devstack$ nova service-list
+----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host            | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
| 5  | nova-conductor   | ubuntu          | internal | enabled | up    | 2016-04-26T20:09:44.000000 | -               |
| 6  | nova-cert        | ubuntu          | internal | enabled | up    | 2016-04-26T20:09:39.000000 | -               |
| 7  | nova-scheduler   | ubuntu          | internal | enabled | up    | 2016-04-26T20:09:45.000000 | -               |
| 8  | nova-consoleauth | ubuntu          | internal | enabled | up    | 2016-04-26T20:09:46.000000 | -               |
| 9  | nova-compute     | ubuntu          | nova     | enabled | up    | 2016-04-26T20:09:48.000000 | -               |
| 10 | nova-compute     | hyperv_normal_1 | nova     | enabled | up    | 2016-04-26T20:09:39.000000 | -               |
+----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+

ubuntu@ubuntu:~/devstack$ neutron agent-list
+--------------------------------------+--------------------+-----------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host            | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-----------------+-------------------+-------+----------------+---------------------------+
| 1bb8eccc-ad8c-43c2-a54e-d84c6cd7acd4 | DHCP agent         | ubuntu          | nova              | :-)   | True           | neutron-dhcp-agent        |
| 3d89e79d-3cb4-4a10-ae01-773b86f83fb2 | Loadbalancer agent | ubuntu          |                   | :-)   | True           | neutron-lbaas-agent       |
| 7777a901-0c58-4180-8d01-4ea3296621a4 | Open vSwitch agent | ubuntu          |                   | :-)   | True           | neutron-openvswitch-agent |
| 93d6390a-19d2-4c79-8f76-90736bc47f5f | HyperV agent       | hyperv_normal_1 |                   | :-)   | True           | neutron-hyperv-agent      |
| c3af1d4b-5bba-47b0-b0db-b3c0d49bb41a | Metadata agent     | ubuntu          |                   | :-)   | True           | neutron-metadata-agent    |
| ec9bc28c-a5ee-4733-8b9c-3a1f99c42f08 | L3 agent           | ubuntu          | nova              | :-)   | True           | neutron-l3-agent          |
+--------------------------------------+--------------------+-----------------+-------------------+-------+----------------+---------------------------+
Next we can disable the Windows Hyper-V agent, which is not needed since we are using the Neutron Open vSwitch agent.
From a command prompt (cmd.exe), issue the following commands:
C:\package>sc config "neutron-hyperv-agent" start=disabled
[SC] ChangeServiceConfig SUCCESS

C:\package>sc stop "neutron-hyperv-agent"

SERVICE_NAME: neutron-hyperv-agent
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 1  STOPPED
        WIN32_EXIT_CODE    : 0  (0x0)
        SERVICE_EXIT_CODE  : 0  (0x0)
        CHECKPOINT         : 0x0
        WAIT_HINT          : 0x0
We need to create a new service called neutron-ovs-agent and put its configuration options in C:\Program Files\Cloudbase Solutions\OpenStack\Nova\etc\neutron_ovs_agent.conf. From a command prompt:
C:\Users\Administrator>sc create neutron-ovs-agent binPath= "\"C:\Program Files\Cloudbase Solutions\OpenStack\Nova\bin\OpenStackServiceNeutron.exe\" neutron-hyperv-agent \"C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python27\Scripts\neutron-openvswitch-agent.exe\" --config-file \"C:\Program Files\Cloudbase Solutions\OpenStack\Nova\etc\neutron_ovs_agent.conf\"" type= own start= auto error= ignore depend= Winmgmt displayname= "OpenStack Neutron Open vSwitch Agent Service" obj= LocalSystem
[SC] CreateService SUCCESS

C:\Users\Administrator>notepad "c:\Program Files\Cloudbase Solutions\OpenStack\Nova\etc\neutron_ovs_agent.conf"

C:\Users\Administrator>sc start neutron-ovs-agent

SERVICE_NAME: neutron-ovs-agent
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 2  START_PENDING
                                (STOPPABLE, NOT_PAUSABLE, ACCEPTS_SHUTDOWN)
        WIN32_EXIT_CODE    : 0  (0x0)
        SERVICE_EXIT_CODE  : 0  (0x0)
        CHECKPOINT         : 0x1
        WAIT_HINT          : 0x0
        PID                : 2740
        FLAGS              :
Note: starting with the next Nova Hyper-V MSI installer version, manually creating a service for the OVS agent will no longer be necessary.
Here is the content of the neutron_ovs_agent.conf file:
[DEFAULT]
verbose=true
debug=false
control_exchange=neutron
policy_file=C:\Program Files\Cloudbase Solutions\OpenStack\Nova\etc\policy.json
rpc_backend=neutron.openstack.common.rpc.impl_kombu
rabbit_host=14.14.14.1
rabbit_port=5672
rabbit_userid=stackrabbit
rabbit_password=Passw0rd
logdir=C:\OpenStack\Log\
logfile=neutron-ovs-agent.log

[agent]
tunnel_types = vxlan
enable_metrics_collection=false

[SECURITYGROUP]
enable_security_group=false

[ovs]
local_ip = 14.14.14.2
tunnel_bridge = br-tun
integration_bridge = br-int
tenant_network_type = vxlan
enable_tunneling = true
Now if we run ovs-vsctl show, we can see a VXLAN tunnel in place:
PS C:\> ovs-vsctl.exe show
a81a54fc-0a3c-4152-9a0d-f3cbf4abc3ca
    Bridge br-int
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0e0e0e01"
            Interface "vxlan-0e0e0e01"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="14.14.14.2", out_key=flow, remote_ip="14.14.14.1"}
    Bridge "br-port1"
        Port "port1"
            Interface "port1"
        Port "br-port1"
            Interface "br-port1"
                type: internal
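To spawn a test instance on the Hyper-V node from the DevStack controller, something along the following lines can be used; the flavor, image and network id are placeholders to be adapted to your environment:

ubuntu@ubuntu:~/devstack$ source openrc admin admin
ubuntu@ubuntu:~/devstack$ neutron net-list
ubuntu@ubuntu:~/devstack$ nova boot --flavor m1.small --image ws2012r2-eval \
    --nic net-id=<private-net-id> --availability-zone nova:hyperv_normal_1 test-instance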
After spawning a Nova instance on the Hyper-V node you should see:
PS C:\> get-vm

Name              State   CPUUsage(%) MemoryAssigned(M) Uptime   Status
----              -----   ----------- ----------------- ------   ------
instance-00000003 Running 0           512               00:01:09 Operating normally

PS C:\Users\Administrator> Get-VMConsole instance-00000003

PS C:\> ovs-vsctl.exe show
a81a54fc-0a3c-4152-9a0d-f3cbf4abc3ca
    Bridge br-int
        fail_mode: secure
        Port "f44f4971-4a75-4ba8-9df7-2e316f799155"
            tag: 1
            Interface "f44f4971-4a75-4ba8-9df7-2e316f799155"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0e0e0e01"
            Interface "vxlan-0e0e0e01"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="14.14.14.2", out_key=flow, remote_ip="14.14.14.1"}
    Bridge "br-port1"
        Port "port1"
            Interface "port1"
        Port "br-port1"
            Interface "br-port1"
                type: internal
In this example, “f44f4971-4a75-4ba8-9df7-2e316f799155” is the OVS port name associated with the vNIC of the instance-00000003 VM. You can find out the details by running the following PowerShell cmdlet:
PS C:\Users\Administrator> Get-VMByOVSPort -OVSPortName "f44f4971-4a75-4ba8-9df7-2e316f799155"
...
ElementName  : instance-00000003
...
The VM instance-00000003 got an IP address from the neutron DHCP agent, with fully functional networking between KVM and Hyper-V hosted virtual machines!
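One way to double-check connectivity from the controller is to ping the Hyper-V hosted instance from inside the Neutron DHCP network namespace; the network id and instance IP below are placeholders:

# List the Neutron namespaces and ping the instance from the qdhcp one
ubuntu@ubuntu:~/devstack$ ip netns
ubuntu@ubuntu:~/devstack$ sudo ip netns exec qdhcp-<private-net-id> ping -c 3 <instance-ip>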
This is everything you need to get started with OpenStack, Hyper-V and OVS.
In the next blog post we will show you how to use OVS on Hyper-V without OpenStack, by setting up a VXLAN tunnel.