Platform and cloud interoperability has come a long way. IaaS and unstructured PaaS options such as OpenStack and Kubernetes can be combined to create cloud-native applications. In this post we're going to show how Kubernetes can be deployed on an OpenStack cloud infrastructure.
Setup
My setup is quite simple: an Ocata all-in-one deployment with KVM as the compute hypervisor. The OpenStack infrastructure was deployed with Kolla. The deployment method is not important here, but Magnum and Heat need to be deployed alongside the other OpenStack services such as Nova and Neutron. To do this with Kolla, enable those two services in the /etc/kolla/globals.yml file. If you are using Devstack, here is a local.conf that deploys Heat and Magnum.
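For the Kolla case this boils down to two options; a minimal sketch, with the option names as used by kolla-ansible:

# /etc/kolla/globals.yml
enable_heat: "yes"
enable_magnum: "yes"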
Kubernetes deployment
The Kubernetes cluster will consist of 1 master node and 2 minion nodes. I'm going to use Fedora Atomic images for the VMs, with a flavor of 1 vCPU, 2 GB of RAM and a 7 GB disk. Below are the commands used to create the necessary environment. Please make sure to change the IPs and other configuration values to suit your environment.
# Download the cloud image
wget https://ftp-stud.hs-esslingen.de/pub/Mirrors/alt.fedoraproject.org/atomic/stable/Fedora-Atomic-25-20170512.2/CloudImages/x86_64/images/Fedora-Atomic-25-20170512.2.x86_64.qcow2

# If using Hyper-V, convert it to VHDX format
qemu-img convert -f qcow2 -O vhdx Fedora-Atomic-25-20170512.2.x86_64.qcow2 fedora-atomic.vhdx

# Upload the cloud image; I'm using KVM, so the qcow2 image is used
# (named "fedora-atomic" to match the cluster template below)
openstack image create --public --property os_distro='fedora-atomic' --disk-format qcow2 \
    --container-format bare --file /root/Fedora-Atomic-25-20170512.2.x86_64.qcow2 \
    fedora-atomic

# Create a flavor: 1 vCPU, 2048 MB of RAM, 7 GB disk
nova flavor-create cloud.flavor auto 2048 7 1 --is-public True

# Create a key pair (named to match the cluster template below)
openstack keypair create --public-key ~/.ssh/id_rsa.pub kolla-controller

# Create Neutron networks
# Public network
neutron net-create public_net --shared --router:external --provider:physical_network \
    physnet2 --provider:network_type flat
neutron subnet-create public_net 10.7.15.0/24 --name public_subnet \
    --allocation-pool start=10.7.15.150,end=10.7.15.180 --disable-dhcp --gateway 10.7.15.1

# Private network
neutron net-create private_net_vlan --provider:segmentation_id 500 \
    --provider:physical_network physnet1 --provider:network_type vlan
neutron subnet-create private_net_vlan 10.10.20.0/24 --name private_subnet \
    --allocation-pool start=10.10.20.50,end=10.10.20.100 \
    --dns-nameserver 8.8.8.8 --gateway 10.10.20.1

# Create a router and connect the two networks
neutron router-create router1
neutron router-interface-add router1 private_subnet
neutron router-gateway-set router1 public_net
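Before moving on, a quick sanity check that the image, flavor and networks are in place:

openstack image list
openstack flavor list
openstack network list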
Before the Kubernetes cluster is deployed, a cluster template must be created. The nice thing about this process is that Magnum does not require long config files or definitions for this. A simple cluster template creation can look like this:
magnum cluster-template-create --name k8s-cluster-template --image fedora-atomic \
    --keypair kolla-controller --external-network public_net --dns-nameserver 8.8.8.8 \
    --flavor cloud.flavor --docker-volume-size 3 --network-driver flannel --coe kubernetes
Based on this template the cluster can be deployed:
magnum cluster-create --name k8s-cluster --cluster-template k8s-cluster-template \
    --master-count 1 --node-count 2
The deployment status can be checked and viewed from Horizon. There are two places where this can be done: the first is the Container Infra -> Clusters tab and the second is the Orchestration -> Stacks tab. This is because Magnum relies on Heat templates to deploy the user-defined resources. I find the Stacks option better because it lets the user see all the resources and events involved in the process. If something goes wrong, the issue can easily be identified by a red mark.
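The same status can be followed from the CLI; the commands below assume the Ocata-era magnum and heat clients, and the stack name is a placeholder:

# Magnum's view of the cluster
magnum cluster-list

# Heat's view: the stack and its events
heat stack-list
heat event-list <stack-name>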
In the end my cluster should look something like this:
root@kolla-ubuntu-cbsl:~# magnum cluster-show 2ffb0ea6-d3f6-494c-9001-c4c4e01e8125
+---------------------+------------------------------------------------------------+
| Property            | Value                                                      |
+---------------------+------------------------------------------------------------+
| status              | CREATE_COMPLETE                                            |
| cluster_template_id | 595cdb6c-8032-43c8-b546-710410061be0                       |
| node_addresses      | ['10.7.15.112', '10.7.15.113']                             |
| uuid                | 2ffb0ea6-d3f6-494c-9001-c4c4e01e8125                       |
| stack_id            | 91001f55-f1e8-4214-9d71-1fa266845ea2                       |
| status_reason       | Stack CREATE completed successfully                        |
| created_at          | 2017-07-20T16:40:45+00:00                                  |
| updated_at          | 2017-07-20T17:07:24+00:00                                  |
| coe_version         | v1.5.3                                                     |
| keypair             | kolla-controller                                           |
| api_address         | https://10.7.15.108:6443                                   |
| master_addresses    | ['10.7.15.108']                                            |
| create_timeout      | 60                                                         |
| node_count          | 2                                                          |
| discovery_url       | https://discovery.etcd.io/89bf7f8a044749dd3befed959ea4cf6d |
| master_count        | 1                                                          |
| container_version   | 1.12.6                                                     |
| name                | k8s-cluster                                                |
+---------------------+------------------------------------------------------------+
SSH into the master node to check the cluster status:
[root@kubemaster ~]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
KubeUI is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-ui

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
So there it is, a fully functioning Kubernetes cluster with 1 master and 2 minion nodes.
A word on networking
Kubernetes networking is not the easiest thing to explain, but I'll do my best to cover the essentials. After an app is deployed, the user will need to access it from outside the Kubernetes cluster. This is done with Services. To make this work, every minion node runs a kube-proxy service that allows a Service to do its job. A Service can work in several ways: for example through a load balancer VIP provided by the cloud underneath Kubernetes, or through a port opened on each minion node's IP (a NodePort).
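As a minimal sketch of the NodePort approach (the my-app deployment name is hypothetical):

# Expose the hypothetical my-app deployment on a port of every minion node
kubectl expose deployment my-app --type=NodePort --port=80

# The assigned node port shows up in the service listing
kubectl get service my-app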
Deploy an app
Now that everything is set up, an app can be deployed. I am going to install WordPress with Helm, the package manager for Kubernetes. Helm installs applications from charts, which are basically application definitions written in YAML. The Helm documentation describes the installation steps.
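At the time of writing (Helm 2) the installation boils down to roughly this; the install script URL is taken from the Helm project and may change:

# Install the helm client
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash

# Install Tiller, Helm's in-cluster component
helm init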
With Helm in place, installing WordPress is a one-liner:
[root@kubemaster ~]# helm install --name my-release stable/wordpress
The pods can now be listed:
[root@kubemaster ~]# kubectl get pods
NAME                                    READY     STATUS    RESTARTS   AGE
my-release-mariadb-2689551905-56580     1/1       Running   0          10m
my-release-wordpress-3324251581-gzff5   1/1       Running   0          10m
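Besides the pods, the chart creates Kubernetes Services. At the time of writing, the stable/wordpress chart asks for a LoadBalancer Service by default; on a cloud without load balancer integration its external IP can stay pending, which is one reason to fall back to port-forwarding below:

kubectl get services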
There are multiple ways of accessing the contents of a pod. I am going to forward port 8080 on the master node to port 80 of the WordPress pod.
kubectl port-forward my-release-wordpress-3324251581-gzff5 8080:80 |
WordPress can now be accessed via the Kubernetes node IP on port 8080:
http://K8S-IP:8080 |
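A quick sanity check can also be done from the master node itself, where the port-forward is running:

curl -s http://localhost:8080 | head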
Kubernetes on OpenStack is not only possible, it can also be easy!