OpenStack is a great technology, but it can be a bit cumbersome to deploy and manage without the proper tools. One easy solution to address this issue is to deploy OpenStack services using pre-built Docker containers.
Kolla is a set of deployment tools for OpenStack, consisting of the Kolla project itself, which generates the OpenStack Docker images, and “deliverables” projects, which deploy the Docker containers and thus OpenStack. The most mature deliverable is kolla-ansible, which, as the name implies, uses Ansible playbooks to automate the deployment. The project documentation can be found here.
Hyper-V setup
On the Windows host, we need a VM to host the Linux OpenStack controller. For this purpose I created an Ubuntu 16.04 VM with 8GB of RAM, 4 virtual cores and 20GB of disk. All the controller services run here and are deployed with Kolla in Docker containers. Last but not least, the same Hyper-V host also serves as a compute host for the OpenStack deployment. This is achieved by installing our Cloudbase OpenStack components. Additional Hyper-V compute nodes can be added later as needed.
Networking setup
On the Hyper-V host, I need two virtual switches connected to the OpenStack controller VM. ext-net is the external network, bridged to the Windows host's physical external interface; I will also use this network for the management of the VM. data-net is the data network, which can be a simple private virtual switch for now (an external one is needed only when adding more compute nodes).
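For reference, here is a minimal PowerShell sketch of how the two switches could be created (the physical adapter name “Ethernet0” is an assumption, adjust it to match your host):

# External switch, bridged to the physical NIC ("Ethernet0" is an assumed name)
New-VMSwitch -Name ext-net -NetAdapterName "Ethernet0" -AllowManagementOS $true
# Private switch for the data network, sufficient for a single compute node
New-VMSwitch -Name data-net -SwitchType Private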
On the OpenStack controller VM there are 3 interfaces. The first two, eth0 and eth1, are connected to the external network. The former is used for management (SSH, etc.) and the latter is used by OpenStack for external traffic, managed by Open vSwitch. Finally, eth2 is the data/overlay network, used for tenant traffic between the instances and the Neutron components in the controller.
eth1 and eth2 do not have an IP and are set as “manual” in /etc/network/interfaces, since they are managed by Open vSwitch. I also need to enable MAC address spoofing on these interfaces (the “Advanced Features” tab of the adapter settings in Hyper-V Manager).
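For illustration, the corresponding /etc/network/interfaces stanzas look like this, and MAC address spoofing can also be set from PowerShell (a sketch; note that this one-liner enables it on all of the VM's adapters, while the UI lets you toggle them individually):

# /etc/network/interfaces
auto eth1
iface eth1 inet manual

auto eth2
iface eth2 inet manual

Get-VMNetworkAdapter -VMName openstack-controller | Set-VMNetworkAdapter -MacAddressSpoofing On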
The scripts that I will be using configure the Linux network interfaces automatically, so I don't need to bother with that now. The only interface I have configured in advance is eth0, so I can SSH into the machine.
OpenStack controller deployment
I am going to clone a repository that contains the scripts for the Kolla OpenStack deployment, which can be found here. At the end of the deployment it will also create some common flavors, a Cirros VHDX Glance image, a Neutron virtual router and 2 networks, one external (flat) and one private for tenants (VLAN based).
git clone https://github.com/cloudbase/kolla-resources.git
cd kolla-resources
To begin with, we are going to configure the management and external network details by setting some variables in deploy_openstack.sh:
vim deploy_openstack.sh
# deploy_openstack.sh
MGMT_IP=192.168.0.60
MGMT_NETMASK=255.255.255.0
MGMT_GATEWAY=192.168.0.1
MGMT_DNS="8.8.8.8"

# neutron external network information
FIP_START=192.168.0.80
FIP_END=192.168.0.90
FIP_GATEWAY=192.168.0.1
FIP_CIDR=192.168.0.0/24
TENANT_NET_DNS="8.8.8.8"

# used for HAProxy
KOLLA_INTERNAL_VIP_ADDRESS=192.168.0.91
As you can see, I am using the same subnet for management and external floating IPs.
Now I can run the deployment script. I am using the Linux “time” command to see how long the deployment will take:
time sudo ./deploy_openstack.sh
The first thing this script does is pull the Docker images for each OpenStack service. The great thing about Kolla is that you only need to create the images once, saving significant time during deployment. It also significantly reduces potential errors due to updated dependencies, since the container images already contain all the required components. The images that I am going to use during the deployment are available here. Feel free to create your own, just follow the documentation.
After the deployment is finished, I have a fully functional OpenStack controller. It took around 13 minutes to deploy, that’s quite fast if you ask me.
real    12m28.716s
user    3m7.296s
sys     1m4.428s
By running “sudo docker ps” I can see all the containers running.
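For a more compact view, the container names alone can be listed:

sudo docker ps --format "{{.Names}}"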
Admin credentials can be sourced now:
source /etc/kolla/admin-openrc.sh
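As a quick smoke test (assuming the OpenStack command line client is installed on the controller), listing the registered services should now work:

openstack service list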
The only thing left to do is to deploy the OpenStack Hyper-V components.
Nova Hyper-V compute node deployment
First, I’m going to edit the Ansible inventory to add my Hyper-V host (simply named “hyperv-host” in this post) as well as the credentials needed to access it:
vim hyperv_inventory
[hyperv]
hyperv-host

[hyperv:vars]
ansible_ssh_host=192.168.0.120
ansible_user=Administrator
ansible_password=Passw0rd
ansible_port=5986
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
An HTTPS WinRM listener needs to be configured on the Hyper-V host, which can be easily created with this PowerShell script.
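If you prefer not to use the script, here is a minimal PowerShell sketch of creating such a listener by hand, using a self-signed certificate (which is also why certificate validation is ignored in the inventory above; the DNS name is an assumption):

# Create a self-signed certificate for the host (the DNS name is an assumed value)
$cert = New-SelfSignedCertificate -DnsName "hyperv-host" -CertStoreLocation Cert:\LocalMachine\My
# Create the HTTPS WinRM listener bound to that certificate
New-Item -Path WSMan:\localhost\Listener -Transport HTTPS -Address * -CertificateThumbPrint $cert.Thumbprint -Force
# Open the WinRM HTTPS port in the Windows firewall
New-NetFirewallRule -DisplayName "WinRM HTTPS" -Direction Inbound -LocalPort 5986 -Protocol TCP -Action Allow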
Now, I'm going to run the script that will fully deploy and configure Nova compute on Hyper-V. The first parameter is the data bridge that I configured earlier, data-net. The second and third parameters are the Hyper-V credentials that FreeRDP will use in order to access the Hyper-V host when connecting to a Nova instance console.
sudo ./deploy_hyperv_compute_playbook.sh data-net Administrator Passw0rd
Next, I need to set trunk mode for my OpenStack controller. There are two reasons for this: first, I have a tenant network with type VLAN, and second, the controller is a VM in Hyper-V, so the hypervisor needs to allow VLAN tagged packets on the controller VM data interface. Start an elevated PowerShell and run:
Set-VMNetworkAdapterVlan -Trunk -AllowedVlanIdList 500-2000 -NativeVlanId 0 openstack-controller
“openstack-controller” is the name of the controller VM in Hyper-V.
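The setting can be verified with the following command, which lists the VLAN configuration for each of the VM's adapters:

Get-VMNetworkAdapterVlan -VMName openstack-controller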
Spawning a VM
Now I have everything in place to start playing around. I will boot a VM and test its connectivity to the Internet.
NETID=`neutron net-show private-net | awk '{if (NR == 5) {print $4}}'`
nova boot --flavor m1.nano \
    --nic net-id=$NETID \
    --image cirros-gen1-vhdx hyperv-vm1
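To reach the instance from outside, a floating IP can be associated with it. A sketch, assuming the external network created by the deployment script is named public-net and using the first address of the FIP range configured earlier (check the actual names with neutron net-list and use the address returned by floatingip-create):

# Allocate a floating IP from the external network (the network name is an assumption)
neutron floatingip-create public-net
# Associate it with the instance (use the address returned by the previous command)
nova floating-ip-associate hyperv-vm1 192.168.0.80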
Taking a look in Horizon confirms the instance is up. The FreeRDP console access from Horizon works as well. I can also access the VM directly from Hyper-V if needed.
Useful information
What if you need to modify the configuration of an OpenStack service running in a container? For example, let's say you want to enable another ML2 type driver. It's quite easy actually.
In this case I need to edit the ml2_conf.ini file:
sudo vim /etc/kolla/neutron-server/ml2_conf.ini
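For example, enabling the vxlan type driver in addition to the existing ones could look like this (illustrative values; keep whatever drivers your deployment already lists):

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vlan,vxlan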
After I am done with the editing, the only thing left to do is to restart the Neutron server container:
sudo docker restart neutron_server
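Optionally, a quick look at the last log lines can confirm the service restarted cleanly:

sudo docker logs --tail 20 neutron_server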
Done. As you can see, Kolla keeps all the OpenStack configuration files in /etc/kolla.
Conclusions
Containers can help a lot in an OpenStack deployment. The footprint is small, dependency-induced regressions are limited, and with the aid of a tool like Ansible, automation is very easy to manage.
What's next? As I mentioned at the beginning, kolla-ansible is just one of the “deliverables”. kolla-kubernetes is also currently being developed, and we can already see the benefits that Kubernetes container orchestration can bring to OpenStack deployments, so we are looking forward to kolla-kubernetes reaching a stable status as well!
Comments
Easiest Kolla install I have done so far. I'm surprised you are not using it for v-magine; a v-magine install takes an hour instead of the 20 minutes this took on my Hyper-V host 🙂
You already got where this is heading 😉 We're also very pleased with Kolla, and considering that RDO gave us a lot of headaches in v-magine, switching won't take long!
Thank you for this and many other wonderful things on this site – it is the best resource for those with a Windows development background. I have one question: if I need to enable the Cinder service, how do I do that?
Thanks again.
Neither cinder-api nor cinder-volume is deployed in this scenario, but it can easily be done. cinder-api can be deployed by enabling it in Kolla's /etc/kolla/globals.yml file. The Cinder SMB driver can be manually installed and configured on the Windows host.
We do plan to implement the automated deployment in the near future. Thank you.
Thanks, looking forward to it. I enabled them and chose LVM (which needs iscsid and tgtd – not sure why, probably it defaults to LVM2) and created the cinder-volumes volume group, but many iscsid and tgtd related Docker pull/run tasks failed because the corresponding images are missing in the cloudbaseit namespace. I tried patching all of those to point to the kolla namespace; the script then went further but still failed later on the iscsid process (error dump below). I probably need to configure Ubuntu to support iscsid and tgtd before running this script. It became too late in the night...
http://paste.ubuntu.com/24372998/
Indeed, the Cinder service images are not yet present on our Docker Hub.
iscsid/tgtd are needed for iSCSI related operations and they work together with the LVM driver: the Cinder service provisions logical volumes using the LVM driver and provides them to instances via the iSCSI transport, more info here. Since you are on Ubuntu, it should have only deployed the tgtd container. Please check if it was indeed deployed.
For the future, please try to paste logs/traces on something like paste.openstack.org or paste.ubuntu.com. Thank you.