In part 2 of this series about OpenStack on ARM64, we got to the point where our cloud is fully deployed, with Compute (VMs), Software Defined Networking (SDN) and Software Defined Storage (SDS) all up and running. One additional component that we want to add is a Load Balancer as a Service (LBaaS), which is a key requirement for pretty much any highly available workload and a must-have feature in any cloud.
OpenStack’s current official LBaaS component is called Octavia, which replaced the older Neutron LBaaS project starting with the Liberty release. Deploying and configuring it requires a few steps, hence the need for a dedicated blog post.
Octavia’s reference implementation uses VM instances called Amphorae to perform the actual load balancing. The octavia-worker service takes care of communicating with the amphorae, and to secure that communication we need to generate a few X509 CAs and certificates. The good news is that, starting with the Victoria release, kolla-ansible simplifies this task a lot. Here’s how:
# Change the following according to your organization
echo "octavia_certs_country: US" | sudo tee -a /etc/kolla/globals.yml
echo "octavia_certs_state: Oregon" | sudo tee -a /etc/kolla/globals.yml
echo "octavia_certs_organization: OpenStack" | sudo tee -a /etc/kolla/globals.yml
echo "octavia_certs_organizational_unit: Octavia" | sudo tee -a /etc/kolla/globals.yml

# This is the kolla-ansible virtual env created in the previous blog post
cd kolla
source venv/bin/activate

sudo chown $USER:$USER /etc/kolla
kolla-ansible octavia-certificates
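Before moving on, a quick sanity check doesn't hurt: assuming the default node_custom_config location, the generated CAs, certificates and keys should land under /etc/kolla/config/octavia.

# Optional sanity check (assuming the default node_custom_config path)
ls -l /etc/kolla/config/octavia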
The communication between Octavia and the Amphorae needs an isolated network, as we don’t want to share it with the tenant network for security reasons. A simple way to accomplish that is to create a provider network with a dedicated VLAN ID, which is why we enabled Neutron provider networks and OVS VLAN segmentation in the previous post. Again, starting with Victoria, this got much easier with kolla-ansible.
# This is a dedicated network, outside your management LAN address space, change as needed
OCTAVIA_MGMT_SUBNET=192.168.43.0/24
OCTAVIA_MGMT_SUBNET_START=192.168.43.10
OCTAVIA_MGMT_SUBNET_END=192.168.43.254
OCTAVIA_MGMT_HOST_IP=192.168.43.1/24
OCTAVIA_MGMT_VLAN_ID=107

sudo tee -a /etc/kolla/globals.yml << EOT
octavia_amp_network:
  name: lb-mgmt-net
  provider_network_type: vlan
  provider_segmentation_id: $OCTAVIA_MGMT_VLAN_ID
  provider_physical_network: physnet1
  external: false
  shared: false
  subnet:
    name: lb-mgmt-subnet
    cidr: "$OCTAVIA_MGMT_SUBNET"
    allocation_pool_start: "$OCTAVIA_MGMT_SUBNET_START"
    allocation_pool_end: "$OCTAVIA_MGMT_SUBNET_END"
    # The subnet gateway must be a plain IP address, so strip the prefix length
    gateway_ip: "${OCTAVIA_MGMT_HOST_IP%/*}"
    enable_dhcp: yes
EOT
Unless there is a dedicated network adapter, a virtual Ethernet (veth) pair can be used. This needs to be configured at boot and added to the OVS br-ex switch.
# This sets up the VLAN veth interface
# Netplan doesn't have support for veth interfaces yet
sudo tee /usr/local/bin/veth-lbaas.sh << EOT
#!/bin/bash
sudo ip link add v-lbaas-vlan type veth peer name v-lbaas
sudo ip addr add $OCTAVIA_MGMT_HOST_IP dev v-lbaas
sudo ip link set v-lbaas-vlan up
sudo ip link set v-lbaas up
EOT
sudo chmod 744 /usr/local/bin/veth-lbaas.sh

sudo tee /etc/systemd/system/veth-lbaas.service << EOT
[Unit]
After=network.service

[Service]
ExecStart=/usr/local/bin/veth-lbaas.sh

[Install]
WantedBy=default.target
EOT
sudo chmod 644 /etc/systemd/system/veth-lbaas.service

sudo systemctl daemon-reload
sudo systemctl enable veth-lbaas.service
sudo systemctl start veth-lbaas.service

docker exec openvswitch_vswitchd ovs-vsctl add-port \
    br-ex v-lbaas-vlan tag=$OCTAVIA_MGMT_VLAN_ID
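Optionally, you can verify that the veth pair is up with the management IP assigned and that the v-lbaas-vlan port was added to br-ex with the expected VLAN tag:

# Optional sanity check for the veth pair and the OVS port
ip addr show v-lbaas
docker exec openvswitch_vswitchd ovs-vsctl list-ports br-ex
docker exec openvswitch_vswitchd ovs-vsctl get port v-lbaas-vlan tag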
A few more Octavia kolla-ansible configurations…
echo "enable_octavia: \"yes\"" | sudo tee -a /etc/kolla/globals.yml echo "octavia_network_interface: v-lbaas" | sudo tee -a /etc/kolla/globals.yml # Flavor used when booting an amphora, change as needed sudo tee -a /etc/kolla/globals.yml << EOT octavia_amp_flavor: name: "amphora" is_public: no vcpus: 1 ram: 1024 disk: 5 EOT sudo mkdir /etc/kolla/config/octavia # Use a config drive in the Amphorae for cloud-init sudo tee /etc/kolla/config/octavia/octavia-worker.conf << EOT [controller_worker] user_data_config_drive = true EOT |
…and we can finally tell kolla-ansible to deploy Octavia:
kolla-ansible -i all-in-one deploy --tags common,horizon,octavia |
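Once the deployment finishes, it's worth checking that the Octavia containers are running and that the amphora provider is registered. This assumes the OpenStack client with the python-octaviaclient plugin is installed in the virtual environment from the previous posts:

# Check the Octavia containers
docker ps --filter name=octavia

# Check the available load balancing providers
. /etc/kolla/admin-openrc.sh
openstack loadbalancer provider list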
Octavia uses a special VM image for the Amphorae, which needs to be built for ARM64. We prepared Dockerfiles for building either an Ubuntu or a CentOS based image; you can choose either one in the following snippets. We use containers to perform the build in order to isolate the requirements and stay independent of the host OS.
git clone https://github.com/cloudbase/openstack-kolla-arm64-scripts
cd openstack-kolla-arm64-scripts/victoria

# Choose either Ubuntu or CentOS (not both!)

# Ubuntu
docker build amphora-image-arm64-docker -f amphora-image-arm64-docker/Dockerfile.Ubuntu \
    -t amphora-image-build-arm64-ubuntu

# CentOS
docker build amphora-image-arm64-docker -f amphora-image-arm64-docker/Dockerfile.Centos \
    -t amphora-image-build-arm64-centos
ARM64 needs a trivial patch in the diskimage-create.sh build script (we also submitted it upstream):
git clone https://opendev.org/openstack/octavia -b stable/victoria

# Use the latest stable Octavia branch to create the image
cd octavia
# diskimage-create.sh includes armhf but not arm64
git apply ../0001-Add-arm64-in-diskimage-create.sh.patch
cd ..
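If you want to confirm the patch applied cleanly, arm64 should now show up among the architectures handled by the build script:

# Optional: confirm arm64 is now handled by diskimage-create.sh
grep -n "arm64" octavia/diskimage-create/diskimage-create.sh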
Build the image (this will take a bit):
# Again, choose either Ubuntu or CentOS (not both!)

# Note the mount of /mnt and /proc in the docker container
# BEWARE!!!!! Without mounting /proc, the diskimage-builder fails to find mount
# points and deletes the host's /dev, making the host unusable
docker run --privileged -v /dev:/dev -v /proc:/proc -v /mnt:/mnt \
    -v $(pwd)/octavia/:/octavia -ti amphora-image-build-arm64-ubuntu

# Create CentOS 8 Amphora image
docker run --privileged -v /dev:/dev -v $(pwd)/octavia/:/octavia \
    -ti amphora-image-build-arm64-centos
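If the build succeeds, the resulting image ends up in the diskimage-create directory of the Octavia repository. A quick look at it (qemu-img is optional and only needed for the detailed info):

ls -lh octavia/diskimage-create/amphora-x64-haproxy.qcow2
# Optional, requires qemu-utils
qemu-img info octavia/diskimage-create/amphora-x64-haproxy.qcow2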
Add the image to Glance, using the octavia user in the service project. The amphora tag is used by Octavia to find the image.
. /etc/kolla/admin-openrc.sh

# Switch to the octavia user and service project
export OS_USERNAME=octavia
export OS_PASSWORD=$(grep octavia_keystone_password /etc/kolla/passwords.yml | awk '{ print $2}')
export OS_PROJECT_NAME=service
export OS_TENANT_NAME=service

openstack image create amphora-x64-haproxy.qcow2 \
    --container-format bare \
    --disk-format qcow2 \
    --private \
    --tag amphora \
    --file octavia/diskimage-create/amphora-x64-haproxy.qcow2

# We can now delete the image file
rm -f octavia/diskimage-create/amphora-x64-haproxy.qcow2
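To confirm that Octavia will be able to find the image, you can list the Glance images filtered by the amphora tag while still using the octavia credentials:

# The image should show up when filtering by the amphora tag
openstack image list --tag amphora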
Currently, we need a small patch in Octavia to properly render the userdata for the Amphorae:
# Patch the user_data_config_drive_template
cd octavia
git apply ../0001-Fix-userdata-template.patch

# For now just update the octavia-worker container, no need to restart it
docker cp octavia/common/jinja/templates/user_data_config_drive.template \
    octavia_worker:/usr/lib/python3/dist-packages/octavia/common/jinja/templates/user_data_config_drive.template
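A quick way to double check that the patched template ended up in the container:

docker exec octavia_worker cat \
    /usr/lib/python3/dist-packages/octavia/common/jinja/templates/user_data_config_drive.template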
Finally, let’s create a load balancer to make sure everything works fine:
# To create the loadbalancer
. /etc/kolla/admin-openrc.sh

openstack loadbalancer create --name loadbalancer1 --vip-subnet-id public1-subnet

# Check the status until it's marked as ONLINE
openstack loadbalancer list
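To actually put the load balancer to work, you can attach a listener, a pool and some members. The names, subnet and backend IPs below are just placeholders, adjust them to match your tenant network and the instances you want to balance:

# Hypothetical example: balance HTTP traffic across two backend instances
openstack loadbalancer listener create --name listener1 \
    --protocol HTTP --protocol-port 80 loadbalancer1
openstack loadbalancer pool create --name pool1 \
    --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
openstack loadbalancer member create --subnet-id demo-subnet \
    --address 10.0.0.11 --protocol-port 80 pool1
openstack loadbalancer member create --subnet-id demo-subnet \
    --address 10.0.0.12 --protocol-port 80 pool1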
Congratulations! You have a working LBaaS in your private cloud!!
Troubleshooting
In case something goes wrong, finding the root cause might be tricky. Here are a few suggestions to ease the process.
# Check for errors
sudo tail -f /var/log/kolla/octavia/octavia-worker.log

# SSH into an amphora
# Get the amphora VM IP either from the octavia-worker.log or from:
openstack server list --all-projects
ssh ubuntu@<amphora_ip> -i octavia_ssh_key # Ubuntu
ssh cloud-user@<amphora_ip> -i octavia_ssh_key # CentOS

# Instances stuck in PENDING_CREATE cannot be deleted
# Password: grep octavia_database_password /etc/kolla/passwords.yml
docker exec -ti mariadb mysql -u octavia -p octavia
update load_balancer set provisioning_status = 'ERROR' where provisioning_status = 'PENDING_CREATE';
exit;
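The Octavia CLI also offers a higher level view of the amphorae, and a failover can be triggered for a load balancer whose amphora ended up in a bad state:

# List the amphorae and their status
openstack loadbalancer amphora list

# Trigger a failover for a misbehaving load balancer
openstack loadbalancer failover <loadbalancer_id>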