Our involvement in the OpenStack community started with the inclusion of the Nova Hyper-V driver in the Folsom release and continued over just a few months with more Nova features, the Cinder Windows Storage driver, Cloud-Init for Windows and now Neutron.
Neutron Hyper-V Agent
Neutron is a very broad and modular project, encompassing layer 2 and 3 networking features on a wide range of technologies.
The initial release of the Hyper-V Neutron plugin offers the networking options listed below. Besides the listed options, support for Microsoft’s NVGRE virtual networking is planned to be released very soon as well.
VLAN
VLANs are the traditional option in network isolation configuration: a well tested, widely supported and well-known solution that provides excellent interoperability.
There are of course drawbacks. In particular, the added configuration complexity and the large number of addresses that need to be learned by switches and routers are among the reasons for the adoption of software defined networking solutions like Microsoft’s NVGRE or OpenVSwitch.
Flat networking
In this case the network consists of a single non-partitioned space. This is useful for testing and simple scenarios.
Local networking
Networking is limited to the Hyper-V host only. Useful for testing and simple scenarios where communication between VMs constrained to a single host is enough.
Components
There are currently three main Hyper-V related components required for configuring networking in OpenStack.
Neutron ML2 (Modular Layer 2) Plugin
The Modular Layer 2 (ml2) plugin is a framework allowing OpenStack Networking to simultaneously utilize the variety of layer 2 networking technologies found in complex real-world data centers. It currently works with the existing OpenVSwitch, linuxbridge, and Hyper-V L2 agents. The plugin takes care of the centralized network configuration, including networks, subnets and ports. The plugin communicates via RPC APIs with a specific agent running as a service on every Hyper-V node.
Neutron Hyper-V plugin
Introduced in Grizzly, it has since been replaced by the Neutron ML2 plugin and is no longer available starting with the Kilo release.
Neutron Hyper-V Agent
The agent takes care of configuring Hyper-V networking and “plugs” the VM virtual network adapters (NICs) in the required virtual switches. Each VM can have multiple NICs connected to different networks managed by Neutron. For example a given VM could have a NIC connected to a network with VLAN ID 1000, another on a network with VLAN ID 1001 and a third one connected to a local network.
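As an illustration of what the agent automates on each host, the equivalent manual steps in PowerShell would look roughly like the following sketch (the VM name is a placeholder and the switch name matches the configuration shown later; the agent performs these operations for you):

# Connect the VM's virtual NIC to the "external" virtual switch
Connect-VMNetworkAdapter -VMName instance-00000001 -SwitchName external

# Tag the same NIC with the network's VLAN ID (e.g. 1000) in access mode
Set-VMNetworkAdapterVlan -VMName instance-00000001 -Access -VlanId 1000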
Nova Compute Neutron Vif plugin
The Nova Hyper-V driver supports different networking infrastructures, currently Nova networking and Neutron, which means that an independent plugin is required to instruct Nova about how to handle networking. The Neutron Vif plugin for Hyper-V is part of the Nova project itself and can be selected in the configuration files, as shown in the following examples.
Routing and DHCP (layer 3 networking)
One of our requirements was to maximize networking interoperability with other compute nodes (KVM, Xen, etc) and networking layers. For this reason Layer 3 networking is handled on Linux, using the existing Neutron agents for DHCP lease management and networking. A typical deployment uses a dedicated server for networking, separated from the controller.
Interoperability
The Neutron Hyper-V Agent is designed to be fully compatible with the OpenVSwitch plugin for VLAN / Flat / Local networking.
This means that it’s possible to add a Hyper-V Compute node configured with the Neutron Hyper-V Agent to an OpenStack infrastructure configured with the Neutron OVS plugin without any change to the Neutron server configuration.
Configuration
Example setup
This is a basic setup suitable for testing and simple scenarios, based on one Ubuntu Server 12.04 LTS x64 controller node running the majority of the OpenStack services and a separate Nova compute node running Hyper-V Server 2012.
Hyper-V Server is free and can be downloaded from here.
As an alternative you can install Windows Server 2012 and enable the Hyper-V role if you need GUI access.
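If you go the Windows Server 2012 route, the Hyper-V role can be enabled from an elevated PowerShell prompt, for example (a minimal sketch; the server reboots when the role is installed):

# Enable the Hyper-V role and its management tools, then reboot
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart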
Controller
Setting up a complete OpenStack environment goes beyond the scope of this document, but if you want to quickly set up a test environment (not a production one!) DevStack is a very good choice.
Here’s an example localrc file for DevStack; more info is available here:
DATABASE_PASSWORD=Passw0rd
RABBIT_PASSWORD=Passw0rd
SERVICE_TOKEN=Passw0rd
SERVICE_PASSWORD=Passw0rd
ADMIN_PASSWORD=Passw0rd
disable_service n-net
enable_service q-svc
enable_service neutron
enable_service q-dhcp
enable_service q-l3
ML2 Plugin Configuration
Open the file /etc/neutron/neutron.conf, look for the [DEFAULT] section and select the ML2 plugin:
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
Open the file /etc/neutron/plugins/ml2/ml2_conf.ini, look for the [ml2] section and set the proper configuration, e.g.:
[ml2]
# add any network_types you desire, separated with a comma (e.g.: flat,vlan)
tenant_network_types = vlan
type_drivers = local,flat,vlan,gre,vxlan
# add the mechanisms for any other plugins / agents you might use.
mechanism_drivers = openvswitch,hyperv
In the [ml2_type_vlan] section set:
network_vlan_ranges = physnet1:1000:2999
Note (Kilo release or newer): the Hyper-V mechanism driver now exists in the networking_hyperv third party library. In order to use this mechanism driver, you must install the library:
pip install -U networking_hyperv
You can now start the Neutron server from your Neutron repository with:
./bin/neutron-server --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini
Hyper-V Compute Node
On this server we need to install Nova compute and the Neutron Hyper-V agent. You can either install each component manually or use our free installer. The Kilo beta version is built automatically every night and contains the latest Nova sources pulled from the repository. This is the preferred way to install OpenStack components on Windows.
The installer has been written to simplify the Nova Compute and Neutron Hyper-V Agent deployment process as much as possible. A detailed step by step guide is also available.
If you prefer to install each component manually, you can refer to the documentation available here.
Note (Kilo release or newer): The Hyper-V Neutron Agent has been decomposed from the main neutron repository and moved to the networking_hyperv library. In order to use the agent, the library must be installed:
pip install -U networking_hyperv
Hyper-V Agent Configuration
Note: If you installed Neutron with the installer, the following configuration is automatically generated and doesn’t need to be created manually. In this case the Neutron agent is executed as a Windows service called neutron-hyperv-agent, started automatically at system boot.
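In this scenario you can check or restart the service directly from PowerShell, for example (a simple sketch using the service name mentioned above):

# Verify that the agent service is installed and running
Get-Service neutron-hyperv-agent

# Restart it, e.g. after changing its configuration
Restart-Service neutron-hyperv-agent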
To manually generate the configuration, create the agent configuration file:
notepad c:\openstack\etc\hyperv_neutron_agent.conf
Add an [AGENT] section with:
polling_interval = 2
physical_network_vswitch_mappings = *:external
local_network_vswitch = private
Where “external” and “private” are virtual switches.
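You can list the virtual switches already defined on the host with:

Get-VMSwitch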
If you don’t have an external virtual switch configured you can create one now in PowerShell with:
New-VMSwitch external -NetAdapterName $adapterName -AllowManagementOS $True
The AllowManagementOS parameter is not necessary if you don’t need to access the host for management purposes, just make sure to have at least one other network adapter for that!
Here’s how to get the adapter name from the list of network adapters on your host:
Get-NetAdapter
Neutron local networking requires a private virtual switch, where communication is allowed only between VMs on the host. A private virtual switch can be created with the following PowerShell command:
New-VMSwitch private -SwitchType Private
You can now start the agent with:
neutron-hyperv-agent --config-file=c:\openstack\etc\hyperv_neutron_agent.conf
You will also need to update the Nova compute configuration file to use Neutron networking. Locate and open your nova.conf file and add / edit the following lines setting controller_address according to your configuration:
network_api_class=nova.network.neutronv2.api.API

[neutron]
url=http://controller_address:9696
auth_strategy=keystone
admin_tenant_name=service
admin_username=neutron
admin_password=Passw0rd
admin_auth_url=http://controller_address:35357/v2.0
Restart the Nova compute service afterwards.
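Assuming the compute service is registered as a Windows service named nova-compute (adjust the name if your deployment differs), this can be done from PowerShell:

# Restart the Nova compute Windows service
Restart-Service nova-compute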
Example
Our example consists of two networks, net1 and net2, with different VLAN IDs and one subnet per network.
On the controller node execute the following commands:
neutron net-create net1
neutron subnet-create net1 10.0.1.0/24
neutron net-create net2
neutron subnet-create net2 10.0.2.0/24
At this point we can already deploy an instance with the following Nova command, including two NICs, one for each network. The IDs of the networks can be easily obtained with a simple shell script:
NETID1=`neutron net-show net1 | awk '{if (NR == 5) {print $4}}'`
NETID2=`neutron net-show net2 | awk '{if (NR == 5) {print $4}}'`
Ok, it’s time to finally boot a VM:
nova boot --flavor 1 --image "Ubuntu Server 12.04" --key-name key1 --nic net-id=$NETID1 --nic net-id=$NETID2 vm1
Note: The above script expects to find a Glance image called “Ubuntu Server 12.04” and a Nova keypair called “key1”.
Once the VM deployment ends (this can be verified with nova list) we will find a running VM with the expected networking configuration. We can also verify the Neutron port allocations with:
neutron port-list
Resulting in an output similar to:
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                        |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| 86bc8fcd-1402-4cfb-8d1a-67e41b7d5667 |      | fa:16:3e:e1:82:80 | {"subnet_id": "b437a503-eaef-4a5f-b3e5-1ba3e4101a08", "ip_address": "10.0.2.2"}  |
| 8a3d8f98-1e83-40ce-84f7-47e30b49d6ed |      | fa:16:3e:17:b1:92 | {"subnet_id": "81b3ff53-0d4d-47c8-853e-5229b95ffd8d", "ip_address": "10.0.1.2"}  |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
Here’s a snapshot showing how the VLAN settings are configured on the virtual machine in Hyper-V Virtual Machine Manager:
Troubleshooting
If you are here, it means that the OpenStack instance you created does not have any network connectivity. Next, we will try to determine the cause of the issue.
Neutron Controller
Before the instance is able to receive an IP, the port associated with the instance must be bound. To check this, run:
INSTANCE_IP=`nova show test | grep network | awk '{ print $5 }'`
PORT_ID=`neutron port-list | grep $INSTANCE_IP | awk '{ print $2 }'`
neutron port-show $PORT_ID
The result must have the following fields:
+-----------------------+-------------------------------------------------------+
| Field                 | Value                                                 |
+-----------------------+-------------------------------------------------------+
| admin_state_up        | True                                                  |
| allowed_address_pairs |                                                       |
| binding:host_id       | WIN-SRVR2                                             |
| binding:profile       | {}                                                    |
| binding:vif_details   | {"port_filter": false}                                |
| binding:vif_type      | hyperv                                                |
| binding:vnic_type     | normal                                                |
| device_id             | 800f81f0-0882-45de-8ceb-526b160b29df                  |
| device_owner          | compute:None                                          |
| extra_dhcp_opts       |                                                       |
| fixed_ips             | {"subnet_id": "SUBNET_ID", "ip_address": "10.0.0.22"} |
| id                    | PORT_ID                                               |
| mac_address           | fa:16:3e:51:a8:d0                                     |
| status                | ACTIVE                                                |
+-----------------------+-------------------------------------------------------+
If the output is similar, then the port was properly created and bound. The issue can be either with the DHCP configuration or on the Hyper-V Neutron Agent side. If instead the field binding:vif_type has the value binding_failed, it means that the port was not properly bound and the following items must be verified.
Make sure that the Neutron agents are alive. At least the DHCP agent, HyperV agent, Open vSwitch agent and the Metadata agent should be alive:
neutron agent-list
If an agent is alive, the output should be:
+--------------+--------------------+-----------+-------+----------------+---------------------------+
| id           | agent_type         | host      | alive | admin_state_up | binary                    |
+--------------+--------------------+-----------+-------+----------------+---------------------------+
| 72e9a2c6-... | DHCP agent         | ubuntu    | :-)   | True           | neutron-dhcp-agent        |
| 7bc647fb-... | HyperV agent       | WIN-SRVR2 | :-)   | True           | neutron-hyperv-agent      |
| b98cf3d7-... | Metadata agent     | ubuntu    | :-)   | True           | neutron-metadata-agent    |
| e9ad0a23-... | Open vSwitch agent | ubuntu    | :-)   | True           | neutron-openvswitch-agent |
+--------------+--------------------+-----------+-------+----------------+---------------------------+
If there is XXX instead of :-) it means that the agent is dead or is not properly reporting its state. If the HyperV agent is dead, check the logs on your Hyper-V compute node:
notepad C:\OpenStack\Log\neutron-hyperv-agent.log
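If you prefer to follow the log while reproducing the issue, a quick PowerShell sketch (same log path as above):

# Show the last lines and keep following the log file
Get-Content C:\OpenStack\Log\neutron-hyperv-agent.log -Wait -Tail 50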
Check the network in which the instance was created:
# get the network name on which the instance was created
nova show $INSTANCE_NAME
neutron net-show $NETWORK_NAME
The field provider:network_type must be one of these values: vlan, flat, local, as those are the ones supported by Hyper-V. If it is not, create a new instance using another network that is compatible with Hyper-V.
Check that the /etc/neutron/plugins/ml2/ml2_conf.ini file contains hyperv as a mechanism driver. Check the ML2 Plugin Configuration section for more details.
Check if the subnet where the instance was created is DHCP enabled.
neutron subnet-show $SUBNET_ID
The output should be similar to this:
+-------------------+--------------------------------------------+
| Field             | Value                                      |
+-------------------+--------------------------------------------+
| allocation_pools  | {"start": "10.0.0.2", "end": "10.0.0.254"} |
| cidr              | 10.0.0.0/24                                |
| dns_nameservers   |                                            |
| enable_dhcp       | True                                       |
| gateway_ip        | 10.0.0.1                                   |
| host_routes       |                                            |
| id                | 849bfbce-10de-41e9-9b18-f6dbe8408281       |
| ip_version        | 4                                          |
| name              | private-subnet                             |
| network_id        | f8fb689f-0df3-4f7c-ab12-fd2072593598       |
| tenant_id         | 131f7749a6e94db6b944b396687add8b           |
+-------------------+--------------------------------------------+
Hyper-V Compute Node
If the port is bound, according to Neutron, then the issue might be on the Hyper-V node’s side. First of all, you should check that the NIC has been connected correctly on Hyper-V. On the Hyper-V compute node, open a PowerShell prompt and execute:
# Find the VM network adapter whose name matches the Neutron port id
Get-VMNetworkAdapter -All | Where-Object {$_.Name -eq $PORT_ID}
Secondly, if the instance was created on a network with the network_type set to vlan, you should check that the VLAN was properly set:
Get-VMNetworkAdapterVlan -VMNetworkAdapterName $PORT_ID -ErrorAction Ignore
where $PORT_ID is the same port_id shown in neutron. It should display something like this:
VMName            VMNetworkAdapterName Mode   VlanList
------            -------------------- ----   --------
instance-00000089 $PORT_ID             Access VLAN_ID
If the output is different from what you expect, the Hyper-V Neutron Agent’s logs will be useful to determine the issue:
notepad C:\OpenStack\Log\neutron-hyperv-agent.log
If the results are correct, then there can be a few reasons why the instances do not have any connectivity:
– Check that the DHCP agent is alive.
– Check that the neutron subnet is DHCP enabled.
– Check that the Hyper-V VSwitch is external and properly configured (see the Hyper-V Agent Configuration section).
– Make sure that the Hyper-V VSwitch is connected to the same network as the neutron controller (in typical deployments, eth1); see the sketch below.
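For the last point, a quick way to check which physical adapter backs the external virtual switch is the following sketch (assuming the switch is named "external", as in the Hyper-V Agent Configuration section):

# Show the switch type and the physical adapter it is bound to
Get-VMSwitch external | Format-List Name, SwitchType, NetAdapterInterfaceDescription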
Some Windows NIC drivers disable VLAN access by default!
Check the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}
Look in all the child keys xxxx (e.g. 0001, 0002) for a value named “VLanFiltering” and, if present, make sure that it is set to 0.
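Here's a small PowerShell sketch that scans those child keys and reports the adapters defining the value (just a convenience; the keys can also be inspected manually with regedit):

# Registry class key containing the per-adapter child keys (0000, 0001, ...)
$class = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}"

# Show the VLanFiltering value for every child key that defines it
Get-ChildItem $class -ErrorAction SilentlyContinue |
    Get-ItemProperty -Name VLanFiltering -ErrorAction SilentlyContinue |
    Select-Object PSChildName, VLanFiltering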
In case of changes, reboot the server or restart the corresponding adapters.