OpenStack Stein Installation / Using Kolla-Ansible / Ubuntu 18.04, ODROID-H2 Cluster Environment
1. Installation Environment
![[Figure 1] OpenStack Stein Installation Environment (ODROID-H2 Cluster)](/blog-software/docs/record/openstack-stein-installation-kolla-ansible-ubuntu-18.04-odroid-h2/images/environment.png)
[Figure 1] OpenStack Stein Installation Environment (ODROID-H2 Cluster)
[Figure 1] shows the OpenStack installation environment based on an ODROID-H2 cluster. Detailed environment information is as follows:
- OpenStack : Stein
- Kolla : 8.0.0
- Kolla-Ansible : 8.0.0
- Octavia : 4.0.1
- Node : Ubuntu 18.04, root user
- ODROID-H2
    - Node 01 : Controller Node, Network Node, Ceph Node (MON, MGR, OSD)
    - Node 02, 03 : Compute Node, Ceph Node (OSD)
- VM
    - Node 09 : Monitoring Node, Registry Node, Deploy Node
- Network
    - NAT Network : External Network (Provider Network), 192.168.0.0/24
        - Floating IP Range : 192.168.0.200 ~ 192.168.0.224
    - Private Network : Guest Network (Tenant Network), Management Network, 10.0.0.0/24
    - Node Default Gateway : NAT Network (192.168.0.0/24)
- Storage
    - /dev/mmcblk0 : Root Filesystem, 64GB
    - /dev/nvme0n1 : Ceph, 256GB
2. OpenStack Components
The components to be installed are as follows:
- Nova : Provides VM Service.
- Neutron : Provides Network Service.
- Octavia : Provides Load Balancer Service.
- Keystone : Provides Authentication and Authorization Service.
- Glance : Provides VM Image Service.
- Cinder : Provides VM Block Storage Service.
- Horizon : Provides Web Dashboard Service.
- Prometheus : Stores metric information.
- Grafana : Visualizes metric information stored in Prometheus in various graphs.
- Ceph : Acts as backend storage for Glance and Cinder.
3. Network Configuration
3.1. Node01 Node
Configure the IP of the Node01 interface.
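Since section 4.4 replaces netplan with ifupdown, each node's addresses go in /etc/network/interfaces. The following is a minimal sketch for Node01, not the original configuration: the interface names enp2s0 and enp3s0 are assumptions, and the management address 10.0.0.11 follows the ssh-copy-id list in section 5. Node02, Node03, and Node09 are configured the same way with their own addresses.

# /etc/network/interfaces (Node01) - interface names are assumed
# Private Network (Management, 10.0.0.0/24)
auto enp2s0
iface enp2s0 inet static
    address 10.0.0.11
    netmask 255.255.255.0

# NAT Network (External / Provider Network, 192.168.0.0/24); the interface
# handed to Kolla-Ansible as neutron_external_interface stays IP-less
auto enp3s0
iface enp3s0 inet manual
    up ip link set dev enp3s0 up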
3.2. Node02 Node
Configure the IP of the Node02 interface.
3.3. Node03 Node
Configure the IP of the Node03 interface.
3.4. Node09 Node
Configure the IP of the Node09 interface.
4. Package Installation
4.1. Deploy Node
(Deploy)$ apt-get install software-properties-common
(Deploy)$ apt-add-repository ppa:ansible/ansible
(Deploy)$ apt-get update
(Deploy)$ apt-get install ansible python-pip python3-pip libguestfs-tools
(Deploy)$ pip install kolla==8.0.0 kolla-ansible==8.0.0 tox gitpython pbr requests jinja2 oslo_config
(Deploy)$ pip install python-openstackclient python-glanceclient python-neutronclient
Install Ansible, Kolla-Ansible, and the Ubuntu and Python packages required for building Kolla container images on the Deploy Node. Also install the OpenStack CLI clients.
4.2. Registry Node
(Registry)$ apt-get install docker-ce
Install Docker on the Registry Node to run the Docker Registry.
4.3. Network, Compute Node
(Network, Compute)$ apt-get remove --purge openvswitch-switch
If the Open vSwitch package is installed, remove it so that Open vSwitch does not run on the host. The Open vSwitch daemons must run only in containers; if they run on the host and in containers at the same time, they do not operate properly.
4.4. All Node
(All Node)$ apt-get install ifupdown
(All Node)$ apt-get remove --purge netplan.io
Install ifupdown and remove netplan so that network interfaces are managed through /etc/network/interfaces.
5. Ansible Configuration
Configure SSH access from the Deploy Node to other nodes without a password.
(Deploy)$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Sp0SUDPNKxTIYVObstB0QQPoG/csF9qe/v5+S5e8hf4 root@kube02
The key's randomart image is:
+---[RSA 2048]----+
| oBB@= |
| .+o+.*o |
| .. o.+ . |
| o..ooo.. |
| +.=ooS |
| . o.=o . o |
| +.. . = .|
| o ..o o |
| ..oooo...o.E|
+----[SHA256]-----+
Generate an SSH key on the Deploy Node. Leave the passphrase empty; if a passphrase is set, it must be entered every time the Deploy Node connects to another node over SSH.
(Deploy)$ ssh-copy-id root@10.0.0.11
(Deploy)$ ssh-copy-id root@10.0.0.12
(Deploy)$ ssh-copy-id root@10.0.0.13
(Deploy)$ ssh-copy-id root@10.0.0.19
Copy the generated SSH public key to the ~/.ssh/authorized_keys file of the remaining nodes using the ssh-copy-id command.
[Text 5] Deploy Node /etc/hosts file
Modify the /etc/hosts file on the Deploy Node as shown in [Text 5].
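The contents of [Text 5] are not reproduced here; a plausible sketch follows, assuming the hostnames node01, node02, node03, and node09 and the management addresses used throughout this guide.

10.0.0.11 node01
10.0.0.12 node02
10.0.0.13 node03
10.0.0.19 node09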
[Text 6] Deploy Node /etc/ansible/ansible.cfg file
Modify the /etc/ansible/ansible.cfg file on the Deploy Node as shown in [Text 6].
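As a sketch of [Text 6], the settings below are the ones the Kolla-Ansible documentation commonly recommends for deployment performance and unattended SSH; treat them as an assumption rather than the exact original contents.

[defaults]
host_key_checking = False
pipelining = True
forks = 100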
6. Kolla-Ansible Configuration
(Deploy)$ mkdir -p ~/kolla-ansible
(Deploy)$ cp /usr/local/share/kolla-ansible/ansible/inventory/* ~/kolla-ansible/
(Deploy)$ mkdir -p /etc/kolla
(Deploy)$ cp -r /usr/local/share/kolla-ansible/etc_examples/kolla/* /etc/kolla
Copy the inventory files. Also copy the globals.yml configuration file and the passwords.yml file containing password information.
[Text 7] Deploy Node ~/kolla-ansible/multinode inventory file
Configure the Ansible inventory. Change the ~/kolla-ansible/multinode file on the Deploy Node to the contents of [Text 7]. Only the [control], [network], [external-compute], [monitoring], [storage], and [deployment] sections at the top of the file have been modified to match the ODROID-H2 cluster environment; the remaining sections keep their default settings.
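A sketch of the modified top sections of [Text 7], assuming the hostnames from the /etc/hosts sketch above: Node01 serves as the Controller/Network Node, Node02 and Node03 as Compute Nodes, Node09 as the Monitoring Node, and all three ODROID-H2 nodes as Ceph storage.

[control]
node01

[network]
node01

[external-compute]
node02
node03

[monitoring]
node09

[storage]
node01
node02
node03

[deployment]
localhost ansible_connection=local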
[Text 8] Deploy Node /etc/kolla/passwords.yml file
Enter password information used by OpenStack. Modify the /etc/kolla/passwords.yml file on the Deploy Node as shown in [Text 8]. Most passwords are set to admin.
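[Text 8] is not reproduced here. The keys below are real passwords.yml entries, shown with this guide's admin value as a sketch; alternatively, the kolla-genpwd tool fills the whole file with random passwords.

database_password: admin
keystone_admin_password: admin
rabbitmq_password: admin
grafana_admin_password: admin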
[Text 9] Deploy Node /etc/kolla/globals.yml file
Configure Kolla-Ansible. Modify the /etc/kolla/globals.yml file on the Deploy Node as shown in [Text 9]. Since Octavia can only be configured after running OpenStack at least once, the Octavia settings are left commented out.
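A sketch of the likely key settings in [Text 9], reconstructed from choices visible elsewhere in this guide (Ubuntu source-type images, the registry at 10.0.0.19:5000, the VIP 10.0.0.20, and Ceph, Cinder, Prometheus, and Grafana enabled); the interface names repeat the assumption from section 3.

kolla_base_distro: "ubuntu"
kolla_install_type: "source"
openstack_release: "stein"
kolla_internal_vip_address: "10.0.0.20"
docker_registry: "10.0.0.19:5000"
docker_registry_username: "admin"
network_interface: "enp2s0"
neutron_external_interface: "enp3s0"
enable_ceph: "yes"
enable_cinder: "yes"
enable_prometheus: "yes"
enable_grafana: "yes"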
(Deploy)$ kolla-ansible -i ~/kolla-ansible/multinode bootstrap-servers
Install the required Ubuntu and Python packages on each node using Kolla-Ansible's bootstrap-servers command.
7. Docker Configuration
7.1. Registry Node
(Registry)$ mkdir ~/auth
(Registry)$ docker run --entrypoint htpasswd registry:2 -Bbn admin admin > ~/auth/htpasswd
(Registry)$ docker run -d -p 5000:5000 --restart=always --name registry_private -v ~/auth:/auth -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" registry:2
Start the Docker Registry on the Registry Node. The ID/password is set to admin/admin.
7.2. All Node
[Text 10] /etc/systemd/system/docker.service.d/kolla.conf file
(All Node)$ service docker restart
Register the Docker Registry running on the Registry Node as an insecure registry with the Docker daemon on every node. Create the /etc/systemd/system/docker.service.d/kolla.conf file on all nodes with the contents of [Text 10], then restart Docker.
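A sketch of what [Text 10] presumably contains: a systemd drop-in that restarts dockerd with the Registry Node registered as an insecure registry. Run systemctl daemon-reload before restarting Docker so the drop-in is picked up.

[Service]
MountFlags=shared
ExecStart=
ExecStart=/usr/bin/dockerd --insecure-registry 10.0.0.19:5000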
8. Octavia Certificate Configuration
(Network)$ git clone -b 4.0.1 https://github.com/openstack/octavia.git
(Network)$ cd octavia
(Network)$ sed -i 's/foobar/admin/g' bin/create_certificates.sh
(Network)$ ./bin/create_certificates.sh cert $(pwd)/etc/certificates/openssl.cnf
(Network)$ mkdir -p /etc/kolla/config/octavia
(Network)$ cp cert/private/cakey.pem /etc/kolla/config/octavia/
(Network)$ cp cert/ca_01.pem /etc/kolla/config/octavia/
(Network)$ cp cert/client.pem /etc/kolla/config/octavia/
Generate the certificates used by Octavia on the Network Node.
9. Ceph Configuration
(Ceph)$ parted /dev/nvme0n1 -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS 1 -1
(Ceph)$ printf 'KERNEL=="nvme0n1p1", SYMLINK+="nvme0n11"\nKERNEL=="nvme0n1p2", SYMLINK+="nvme0n12"' > /etc/udev/rules.d/local.rules
Label the /dev/nvme0n1 block device on the Ceph nodes with KOLLA_CEPH_OSD_BOOTSTRAP_BS; Kolla-Ansible uses block devices carrying this label as OSDs. Kolla-Ansible's Ceph role has a bug that derives incorrect partition names when an NVMe device is used as Ceph storage, so create symbolic links to the partitions via udev to work around it.
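If the nodes are not rebooted, the new udev rule can be applied to the existing partitions immediately:

(Ceph)$ udevadm control --reload-rules
(Ceph)$ udevadm trigger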
10. Kolla Container Image Creation and Push
(Deploy)$ cd ~
(Deploy)$ git clone -b 8.0.0 https://github.com/openstack/kolla.git
(Deploy)$ cd kolla
(Deploy)$ tox -e genconfig
(Deploy)$ docker login 10.0.0.19:5000
(Deploy)$ mkdir -p logs
(Deploy)$ python tools/build.py -b ubuntu --tag stein --skip-parents --skip-existing --type source --registry 10.0.0.19:5000 --push --logs-dir logs
Build the Kolla container images and push them to the registry. The images are built on Ubuntu base images.
11. OpenStack Deployment Using Kolla-Ansible
(Deploy)$ kolla-ansible -i ~/kolla-ansible/multinode prechecks
(Deploy)$ kolla-ansible -i ~/kolla-ansible/multinode deploy
Run the prechecks, then deploy OpenStack.
12. OpenStack Initialization
(Deploy)$ kolla-ansible post-deploy
(Deploy)$ . /etc/kolla/admin-openrc.sh
(Deploy)$ . /usr/local/share/kolla-ansible/init-runonce
Perform OpenStack initialization. The init-runonce script creates initial resources such as networks, an image, and flavors.
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://10.0.0.20:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
[Shell 1] Deploy Node /etc/kolla/admin-openrc.sh file
After initialization completes, the /etc/kolla/admin-openrc.sh file has the contents shown in [Shell 1].
13. External Network and Octavia Network Creation
(Deploy)$ . /etc/kolla/admin-openrc.sh
(Deploy)$ openstack port list
(Deploy)$ openstack router remove port demo-router [Port ID]
(Deploy)$ openstack router delete demo-router
(Deploy)$ openstack network delete public1
(Deploy)$ openstack network delete demo-net
Delete all networks and routers created by the init-runonce script.
(Deploy)$ . /etc/kolla/admin-openrc.sh
(Deploy)$ openstack router create external-router
(Deploy)$ openstack network create --share --external --provider-physical-network physnet1 --provider-network-type flat external-net
(Deploy)$ openstack subnet create --network external-net --allocation-pool start=192.168.0.200,end=192.168.0.224 --dns-nameserver 8.8.8.8 --gateway 192.168.0.1 --subnet-range 192.168.0.0/24 external-sub
(Deploy)$ openstack router set --external-gateway external-net --enable-snat --fixed-ip subnet=external-sub,ip-address=192.168.0.225 external-router
Create the External Router, External Network, and External Subnet, and connect the External Network to the External Router. Configure the External Router to perform SNAT.
(Deploy)$ openstack network create --share --provider-network-type vxlan octavia-net
(Deploy)$ openstack subnet create --network octavia-net --dns-nameserver 8.8.8.8 --gateway 20.0.0.1 --subnet-range 20.0.0.0/24 octavia-sub
(Deploy)$ openstack router add subnet external-router octavia-sub
Create the Octavia Network and Octavia Subnet, and connect the Octavia Subnet to the External Router.
(Controller)$ route add -net 20.0.0.0/24 gw 192.168.0.225
(Controller)$ printf '#!/bin/bash\nroute add -net 20.0.0.0/24 gw 192.168.0.225' > /etc/rc.local
(Controller)$ chmod +x /etc/rc.local
Add a routing rule on the Controller Node so that packets destined for the Octavia Network (20.0.0.0/24) are routed to the External Router (192.168.0.225) instead of the NAT network's default gateway. The /etc/rc.local entry makes the rule persist across reboots.
14. VM Image Registration in Glance
(Deploy)$ . /etc/kolla/admin-openrc.sh
(Deploy)$ cd ~/kolla-ansible
(Deploy)$ wget http://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
(Deploy)$ guestmount -a bionic-server-cloudimg-amd64.img -m /dev/sda1 /mnt
(Deploy)$ chroot /mnt
(Deploy / chroot)$ passwd root
(Deploy / chroot)$ sed -i -e 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
(Deploy / chroot)$ sed -i -e 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
(Deploy / chroot)$ sync
(Deploy / chroot)$ exit
(Deploy)$ umount /mnt
(Deploy)$ openstack image create --disk-format qcow2 --container-format bare --public --file ./bionic-server-cloudimg-amd64.img ubuntu-18.04
After downloading the Ubuntu image, configure the root account and SSHD settings. Register the configured Ubuntu image in Glance.
(Deploy)$ . /etc/kolla/admin-openrc.sh
(Deploy)$ export OS_USERNAME=octavia
(Deploy)$ cd ~
(Deploy)$ git clone -b 4.0.1 https://github.com/openstack/octavia.git
(Deploy)$ cd octavia/diskimage-create
(Deploy)$ ./diskimage-create.sh -r root
(Deploy)$ openstack image create --disk-format qcow2 --container-format bare --public --tag amphora --file ./amphora-x64-haproxy.qcow2 ubuntu-16.04-amphora
Build the Octavia Amphora image as the octavia user and register it in Glance. The image must be tagged amphora.
15. Octavia Flavor, Keypair, Security Group Configuration and Octavia Deployment
(Deploy)$ . /etc/kolla/admin-openrc.sh
(Deploy)$ export OS_USERNAME=octavia
(Deploy)$ openstack flavor create --id 100 --vcpus 2 --ram 2048 --disk 10 "m1.amphora" --public
Create a flavor for the Octavia Amphora VM as the octavia user. Because Flavor ID 100 is referenced in the Octavia configuration, the flavor must be created with ID 100.
(Deploy)$ . /etc/kolla/admin-openrc.sh
(Deploy)$ export OS_USERNAME=octavia
(Deploy)$ openstack keypair create octavia_ssh_key
Create the octavia_ssh_key keypair as the octavia user. The keypair must be named octavia_ssh_key.
(Deploy)$ . /etc/kolla/admin-openrc.sh
(Deploy)$ export OS_USERNAME=octavia
(Deploy)$ openstack security group create octavia-sec
(Deploy)$ openstack security group rule create --protocol icmp octavia-sec
(Deploy)$ openstack security group rule create --protocol tcp --dst-port 22 octavia-sec
(Deploy)$ openstack security group rule create --protocol tcp --dst-port 9443 octavia-sec
Create the octavia-sec security group as the octavia user, allowing ICMP, SSH (TCP 22), and the Amphora agent port (TCP 9443).
[Text 11] /etc/kolla/globals.yml Octavia settings
Modify the /etc/kolla/globals.yml file as shown in [Text 11], uncommenting the Octavia settings to configure Octavia. Enter the ID of the octavia-net network created above in octavia_amp_boot_network_list, and the ID of the octavia-sec security group in octavia_amp_secgroup_list.
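A sketch of the uncommented Octavia block in [Text 11]: octavia_amp_boot_network_list and octavia_amp_secgroup_list are named above, while octavia_amp_flavor_id, octavia_amp_image_tag, and the placeholder IDs are assumptions matching the resources created in sections 14 and 15. The IDs can be looked up with openstack network show octavia-net -f value -c id and openstack security group show octavia-sec -f value -c id.

enable_octavia: "yes"
octavia_amp_boot_network_list: "<octavia-net network ID>"
octavia_amp_secgroup_list: "<octavia-sec security group ID>"
octavia_amp_flavor_id: "100"
octavia_amp_image_tag: "amphora"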
(Deploy)$ kolla-ansible -i ~/kolla-ansible/multinode deploy -t octavia
Deploy only Octavia.
16. Initialization for Reinstallation
(Deploy)$ kolla-ansible -i ~/kolla-ansible/multinode destroy --yes-i-really-really-mean-it
Delete all OpenStack containers.
(Ceph)$ parted /dev/nvme0n1 rm 1
(Ceph)$ parted /dev/nvme0n1 rm 2
(Ceph)$ reboot now
(Ceph)$ parted /dev/nvme0n1 -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS 1 -1
Initialize the OSD block devices on all Ceph nodes by removing the existing partitions, rebooting, and relabeling the device.
17. Dashboard Information
The accessible dashboards are as follows, listed in the order URL, ID, password.
- Horizon : http://10.0.0.20:80, admin, admin
- RabbitMQ : http://10.0.0.20:15672, openstack, admin
- Prometheus : http://10.0.0.20:9091
- Grafana : http://10.0.0.20:3000, admin, admin
- Alertmanager : http://10.0.0.20:9093, admin, admin
18. Debugging
(Node01)$ ls /var/log/kolla
ansible.log ceph chrony cinder glance horizon keystone mariadb neutron nova octavia openvswitch prometheus rabbitmq
Logs for the OpenStack services are stored in the /var/log/kolla directory on each node.
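For example, to follow a specific service's log on a node (the file name nova-api.log follows the usual Kolla log layout and is shown here as an illustration):

(Node01)$ tail -f /var/log/kolla/nova/nova-api.log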