Description of problem:
Following the recent issues with how CNO detects the master ports when creating members for the API load balancer, and with the Amphora of that load balancer, Kuryr needed to manage that Service itself. It now creates the load balancer for the default Kubernetes Service with whichever Octavia driver is configured. The following scenarios therefore need to be checked:

* Installation with only the Amphora driver enabled
* Installation with both the OVN and Amphora drivers enabled
* Upgrade from 4.7 to 4.8 when only the Amphora driver is enabled
* Upgrade from 4.7 to 4.8 when the OVN driver is enabled

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
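Across all four scenarios the core check is which Octavia provider backs the default/kubernetes load balancer. A minimal sketch of that check is below; `lb_provider` is a hypothetical helper name, and the parsing assumes the standard `openstack loadbalancer list` table layout:

```shell
# Hypothetical helper (not part of the verification run): prints the
# provider column of a named load balancer, given `openstack loadbalancer
# list` table output on stdin.
lb_provider() {
  # $1 = load balancer name, e.g. default/kubernetes
  # Fields split on '|': $3 is the name column, $7 the provider column.
  awk -F'|' -v name="$1" '$3 ~ name { gsub(/ /, "", $7); print $7 }'
}

# Intended usage (assumes OpenStack CLI credentials are loaded):
# openstack loadbalancer list | lb_provider default/kubernetes
```

The expected output is `amphora` on OSP 13 with OVS and `ovn` on OSP 16 with the OVN driver enabled.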
Verification scenarios: ---------------------- **************************************************** 1. 4.8 installation with only Amphora driver enabled **************************************************** Verified in OCP 4.8.0-0.nightly-2021-05-25-041803 on top of OSP 13.0.15 (2021-03-24.1) with OVS. $ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.8.0-0.nightly-2021-05-25-041803 True False 23h Cluster version is 4.8.0-0.nightly-2021-05-25-041803 $ oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.8.0-0.nightly-2021-05-25-041803 True False False 24h baremetal 4.8.0-0.nightly-2021-05-25-041803 True False False 24h cloud-credential 4.8.0-0.nightly-2021-05-25-041803 True False False 24h cluster-autoscaler 4.8.0-0.nightly-2021-05-25-041803 True False False 24h config-operator 4.8.0-0.nightly-2021-05-25-041803 True False False 24h console 4.8.0-0.nightly-2021-05-25-041803 True False False 24h csi-snapshot-controller 4.8.0-0.nightly-2021-05-25-041803 True False False 24h dns 4.8.0-0.nightly-2021-05-25-041803 True False False 24h etcd 4.8.0-0.nightly-2021-05-25-041803 True False False 24h image-registry 4.8.0-0.nightly-2021-05-25-041803 True False False 24h ingress 4.8.0-0.nightly-2021-05-25-041803 True False False 24h insights 4.8.0-0.nightly-2021-05-25-041803 True False False 24h kube-apiserver 4.8.0-0.nightly-2021-05-25-041803 True False False 24h kube-controller-manager 4.8.0-0.nightly-2021-05-25-041803 True False False 24h kube-scheduler 4.8.0-0.nightly-2021-05-25-041803 True False False 24h kube-storage-version-migrator 4.8.0-0.nightly-2021-05-25-041803 True False False 24h machine-api 4.8.0-0.nightly-2021-05-25-041803 True False False 24h machine-approver 4.8.0-0.nightly-2021-05-25-041803 True False False 24h machine-config 4.8.0-0.nightly-2021-05-25-041803 True False False 24h marketplace 4.8.0-0.nightly-2021-05-25-041803 True False False 24h monitoring 4.8.0-0.nightly-2021-05-25-041803 True False False 
3h34m network 4.8.0-0.nightly-2021-05-25-041803 True False False 24h node-tuning 4.8.0-0.nightly-2021-05-25-041803 True False False 24h openshift-apiserver 4.8.0-0.nightly-2021-05-25-041803 True False False 24h openshift-controller-manager 4.8.0-0.nightly-2021-05-25-041803 True False False 56m openshift-samples 4.8.0-0.nightly-2021-05-25-041803 True False False 24h operator-lifecycle-manager 4.8.0-0.nightly-2021-05-25-041803 True False False 24h operator-lifecycle-manager-catalog 4.8.0-0.nightly-2021-05-25-041803 True False False 24h operator-lifecycle-manager-packageserver 4.8.0-0.nightly-2021-05-25-041803 True False False 24h service-ca 4.8.0-0.nightly-2021-05-25-041803 True False False 24h storage 4.8.0-0.nightly-2021-05-25-041803 True False False 24h As it's OSP 13 with OVS only the Amphora provider is enabled: $ openstack loadbalancer provider list +---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. 
| +---------+-------------------------------------------------+ The default kubernetes service LB is created with the amphora provider: $ openstack loadbalancer list | grep kubernetes +--------------------------------------+---------------------------+----------------------------------+----------------+---------------------+----------+ | id | name | project_id | vip_address | provisioning_status | provider | +--------------------------------------+---------------------------+----------------------------------+----------------+---------------------+----------+ | c393e24b-7c60-4f16-9c36-c14a8a55c3af | default/kubernetes | 75990be152764ca1ad084b79d9f2dd6e | 172.30.0.1 | ACTIVE | amphora | +--------------------------------------+---------------------------+----------------------------------+----------------+---------------------+----------+ $ openstack loadbalancer pool list --loadbalancer c393e24b-7c60-4f16-9c36-c14a8a55c3af +--------------------------------------+----------------------------+----------------------------------+---------------------+----------+--------------+----------------+ | id | name | project_id | provisioning_status | protocol | lb_algorithm | admin_state_up | +--------------------------------------+----------------------------+----------------------------------+---------------------+----------+--------------+----------------+ | c0e2e43b-6acb-4e19-9b68-34bea1cb6e58 | default/kubernetes:TCP:443 | 75990be152764ca1ad084b79d9f2dd6e | ACTIVE | TCP | ROUND_ROBIN | True | +--------------------------------------+----------------------------+----------------------------------+---------------------+----------+--------------+----------------+ $ openstack loadbalancer member list c0e2e43b-6acb-4e19-9b68-34bea1cb6e58 +--------------------------------------+-------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+ | id | name | project_id | provisioning_status | address | 
protocol_port | operating_status | weight | +--------------------------------------+-------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+ | ce73e700-ec88-4467-a01c-1eab639c9e7e | default/kubernetes:6443 | 75990be152764ca1ad084b79d9f2dd6e | ACTIVE | 10.196.3.76 | 6443 | ONLINE | 1 | | fd7db8ac-e289-4495-b67a-2dc37bb32368 | default/kubernetes:6443 | 75990be152764ca1ad084b79d9f2dd6e | ACTIVE | 10.196.3.157 | 6443 | ONLINE | 1 | | 50d0c378-64e6-4858-9d09-f2eaeefd56cd | default/kubernetes:6443 | 75990be152764ca1ad084b79d9f2dd6e | ACTIVE | 10.196.3.209 | 6443 | ONLINE | 1 | +--------------------------------------+-------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+ $ oc get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME ostest-mpql9-master-0 Ready master 23h v1.21.0-rc.0+ee60d07 10.196.3.209 <none> Red Hat Enterprise Linux CoreOS 48.84.202105242318-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-98.rhaos4.8.git1f3c5cb.el8 ostest-mpql9-master-1 Ready master 23h v1.21.0-rc.0+ee60d07 10.196.3.76 <none> Red Hat Enterprise Linux CoreOS 48.84.202105242318-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-98.rhaos4.8.git1f3c5cb.el8 ostest-mpql9-master-2 Ready master 23h v1.21.0-rc.0+ee60d07 10.196.3.157 <none> Red Hat Enterprise Linux CoreOS 48.84.202105242318-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-98.rhaos4.8.git1f3c5cb.el8 ostest-mpql9-worker-0-j9psl Ready worker 23h v1.21.0-rc.0+ee60d07 10.196.0.183 <none> Red Hat Enterprise Linux CoreOS 48.84.202105242318-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-98.rhaos4.8.git1f3c5cb.el8 ostest-mpql9-worker-0-nhvfp Ready worker 23h v1.21.0-rc.0+ee60d07 10.196.1.183 <none> Red Hat Enterprise Linux CoreOS 48.84.202105242318-0 (Ootpa) 4.18.0-305.el8.x86_64 
cri-o://1.21.0-98.rhaos4.8.git1f3c5cb.el8 ostest-mpql9-worker-0-rgfsm Ready worker 23h v1.21.0-rc.0+ee60d07 10.196.1.134 <none> Red Hat Enterprise Linux CoreOS 48.84.202105242318-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-98.rhaos4.8.git1f3c5cb.el8 $ openstack loadbalancer healthmonitor list +--------------------------------------+----------------------------+----------------------------------+------+----------------+ | id | name | project_id | type | admin_state_up | +--------------------------------------+----------------------------+----------------------------------+------+----------------+ | fe4504f9-527b-4455-b25a-c5bd0bba8d71 | default/kubernetes:TCP:443 | 75990be152764ca1ad084b79d9f2dd6e | TCP | True | +--------------------------------------+----------------------------+----------------------------------+------+----------------+ $ openstack loadbalancer listener list | grep kubernetes +--------------------------------------+--------------------------------------+---------------------------------+----------------------------------+----------+---------------+----------------+ | id | default_pool_id | name | project_id | protocol | protocol_port | admin_state_up | +--------------------------------------+--------------------------------------+---------------------------------+----------------------------------+----------+---------------+----------------+ | 72d93eb3-e3fb-4b8a-9238-9b9e3dc3863a | c0e2e43b-6acb-4e19-9b68-34bea1cb6e58 | default/kubernetes:TCP:443 | 75990be152764ca1ad084b79d9f2dd6e | TCP | 443 | True | +--------------------------------------+--------------------------------------+---------------------------------+----------------------------------+----------+---------------+----------------+ $ openstack loadbalancer listener show 72d93eb3-e3fb-4b8a-9238-9b9e3dc3863a +-----------------------------+--------------------------------------+ | Field | Value | +-----------------------------+--------------------------------------+ | admin_state_up | 
True | | connection_limit | -1 | | created_at | 2021-05-26T08:51:42 | | default_pool_id | c0e2e43b-6acb-4e19-9b68-34bea1cb6e58 | | default_tls_container_ref | None | | description | | | id | 72d93eb3-e3fb-4b8a-9238-9b9e3dc3863a | | insert_headers | None | | l7policies | | | loadbalancers | c393e24b-7c60-4f16-9c36-c14a8a55c3af | | name | default/kubernetes:TCP:443 | | operating_status | ONLINE | | project_id | 75990be152764ca1ad084b79d9f2dd6e | | protocol | TCP | | protocol_port | 443 | | provisioning_status | ACTIVE | | sni_container_refs | [] | | timeout_client_data | 600000 | | timeout_member_connect | 5000 | | timeout_member_data | 600000 | | timeout_tcp_inspect | 0 | | updated_at | 2021-05-26T08:58:44 | | client_ca_tls_container_ref | None | | client_authentication | NONE | | client_crl_container_ref | None | | allowed_cidrs | None | +-----------------------------+--------------------------------------+ The console UI in https://console-openshift-console.apps.ostest.shiftstack.com/ works fine. ******************************************************* 2. 4.8 installation with OVN and Amphora driver enabled ******************************************************* Verified in OCP 4.8.0-0.nightly-2021-05-25-041803 on top of OSP 16.1.5 (RHOS-16.1-RHEL-8-20210323.n.0) with OVN. 
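The member lists in these scenarios should contain exactly the master nodes' internal IPs. That cross-check can be scripted roughly as below; the helper name is made up, and the extraction commands in the usage comment are illustrative rather than taken from the verification run:

```shell
# Illustrative check: the pool members of the default/kubernetes LB should
# be exactly the master nodes' InternalIP addresses, in any order.
members_match_masters() {
  # $1 = member addresses, $2 = master internal IPs (whitespace-separated;
  # intentional unquoted expansion splits them into one IP per line)
  [ "$(printf '%s\n' $1 | sort)" = "$(printf '%s\n' $2 | sort)" ]
}

# Intended usage (POOL_ID is the default/kubernetes pool):
# members=$(openstack loadbalancer member list "$POOL_ID" -f value -c address)
# masters=$(oc get nodes -l node-role.kubernetes.io/master \
#   -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}')
# members_match_masters "$members" "$masters"
```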
$ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.8.0-0.nightly-2021-05-25-041803 True False 2d Cluster version is 4.8.0-0.nightly-2021-05-25-041803 $ oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.8.0-0.nightly-2021-05-25-041803 True False False 64m baremetal 4.8.0-0.nightly-2021-05-25-041803 True False False 2d cloud-credential 4.8.0-0.nightly-2021-05-25-041803 True False False 2d cluster-autoscaler 4.8.0-0.nightly-2021-05-25-041803 True False False 2d config-operator 4.8.0-0.nightly-2021-05-25-041803 True False False 2d console 4.8.0-0.nightly-2021-05-25-041803 True False False 2d csi-snapshot-controller 4.8.0-0.nightly-2021-05-25-041803 True False False 2d dns 4.8.0-0.nightly-2021-05-25-041803 True False False 2d etcd 4.8.0-0.nightly-2021-05-25-041803 True False False 2d image-registry 4.8.0-0.nightly-2021-05-25-041803 True False False 2d ingress 4.8.0-0.nightly-2021-05-25-041803 True False False 2d insights 4.8.0-0.nightly-2021-05-25-041803 True False False 2d kube-apiserver 4.8.0-0.nightly-2021-05-25-041803 True False False 2d kube-controller-manager 4.8.0-0.nightly-2021-05-25-041803 True False False 2d kube-scheduler 4.8.0-0.nightly-2021-05-25-041803 True False False 2d kube-storage-version-migrator 4.8.0-0.nightly-2021-05-25-041803 True False False 2d machine-api 4.8.0-0.nightly-2021-05-25-041803 True False False 2d machine-approver 4.8.0-0.nightly-2021-05-25-041803 True False False 2d machine-config 4.8.0-0.nightly-2021-05-25-041803 True False False 2d marketplace 4.8.0-0.nightly-2021-05-25-041803 True False False 2d monitoring 4.8.0-0.nightly-2021-05-25-041803 True False False 10h network 4.8.0-0.nightly-2021-05-25-041803 True False False 2d node-tuning 4.8.0-0.nightly-2021-05-25-041803 True False False 2d openshift-apiserver 4.8.0-0.nightly-2021-05-25-041803 True False False 2d openshift-controller-manager 4.8.0-0.nightly-2021-05-25-041803 True False False 24h openshift-samples 
4.8.0-0.nightly-2021-05-25-041803   True        False         False      2d
operator-lifecycle-manager                 4.8.0-0.nightly-2021-05-25-041803   True        False         False      2d
operator-lifecycle-manager-catalog         4.8.0-0.nightly-2021-05-25-041803   True        False         False      2d
operator-lifecycle-manager-packageserver   4.8.0-0.nightly-2021-05-25-041803   True        False         False      2d
service-ca                                 4.8.0-0.nightly-2021-05-25-041803   True        False         False      2d
storage                                    4.8.0-0.nightly-2021-05-25-041803   True        False         False      28h

As it's OSP 16 with OVN, both the ovn and amphora providers are enabled (but Kuryr will request the ovn driver):

$ openstack loadbalancer provider list
+---------+-------------------------------------------------+
| name    | description                                     |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver.                     |
| octavia | Deprecated alias of the Octavia Amphora driver. |
| ovn     | Octavia OVN driver.                             |
+---------+-------------------------------------------------+

The default kubernetes service LB is created with the ovn provider:

$ openstack loadbalancer list | grep kubernetes
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
| id                                   | name               | project_id                       | vip_address | provisioning_status | provider |
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
| 78758cc3-0c44-443d-a2f1-8969330ef262 | default/kubernetes | 61014a8af2ed4a7e86269fe991821a55 | 172.30.0.1  | ACTIVE              | ovn      |
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+

There are no amphora-type load balancers:

$ openstack loadbalancer list | grep amphora
$

$ openstack loadbalancer pool list --loadbalancer 78758cc3-0c44-443d-a2f1-8969330ef262
+--------------------------------------+----------------------------+----------------------------------+---------------------+----------+----------------+----------------+ | id | name | project_id | provisioning_status | protocol | lb_algorithm | admin_state_up | +--------------------------------------+----------------------------+----------------------------------+---------------------+----------+----------------+----------------+ | 3783b3df-ab61-474e-9ccf-b6436a1863e9 | default/kubernetes:TCP:443 | 61014a8af2ed4a7e86269fe991821a55 | ACTIVE | TCP | SOURCE_IP_PORT | True | +--------------------------------------+----------------------------+----------------------------------+---------------------+----------+----------------+----------------+ $ openstack loadbalancer member list 3783b3df-ab61-474e-9ccf-b6436a1863e9 +--------------------------------------+-------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+ | id | name | project_id | provisioning_status | address | protocol_port | operating_status | weight | +--------------------------------------+-------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+ | 1abab2e0-b37b-4184-bfe2-38dfac0b5650 | default/kubernetes:6443 | 61014a8af2ed4a7e86269fe991821a55 | ACTIVE | 10.196.3.84 | 6443 | NO_MONITOR | 1 | | a1cca998-4a0d-4fed-ab68-74746b6c2956 | default/kubernetes:6443 | 61014a8af2ed4a7e86269fe991821a55 | ACTIVE | 10.196.1.247 | 6443 | NO_MONITOR | 1 | | 374c36f7-5d4c-496b-b934-77f622075f7f | default/kubernetes:6443 | 61014a8af2ed4a7e86269fe991821a55 | ACTIVE | 10.196.2.22 | 6443 | NO_MONITOR | 1 | +--------------------------------------+-------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+ $ oc get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP 
EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME ostest-8lm5k-master-0 Ready master 2d v1.21.0-rc.0+ee60d07 10.196.1.247 <none> Red Hat Enterprise Linux CoreOS 48.84.202105242318-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-98.rhaos4.8.git1f3c5cb.el8 ostest-8lm5k-master-1 Ready master 2d v1.21.0-rc.0+ee60d07 10.196.3.84 <none> Red Hat Enterprise Linux CoreOS 48.84.202105242318-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-98.rhaos4.8.git1f3c5cb.el8 ostest-8lm5k-master-2 Ready master 2d v1.21.0-rc.0+ee60d07 10.196.2.22 <none> Red Hat Enterprise Linux CoreOS 48.84.202105242318-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-98.rhaos4.8.git1f3c5cb.el8 ostest-8lm5k-worker-0-9lzzw Ready worker 2d v1.21.0-rc.0+ee60d07 10.196.0.113 <none> Red Hat Enterprise Linux CoreOS 48.84.202105242318-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-98.rhaos4.8.git1f3c5cb.el8 ostest-8lm5k-worker-0-f5wrt Ready worker 2d v1.21.0-rc.0+ee60d07 10.196.1.236 <none> Red Hat Enterprise Linux CoreOS 48.84.202105242318-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-98.rhaos4.8.git1f3c5cb.el8 ostest-8lm5k-worker-0-p8kdk Ready worker 2d v1.21.0-rc.0+ee60d07 10.196.0.195 <none> Red Hat Enterprise Linux CoreOS 48.84.202105242318-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-98.rhaos4.8.git1f3c5cb.el8 $ openstack loadbalancer listener list | grep kubernetes +--------------------------------------+--------------------------------------+---------------------------------+----------------------------------+----------+---------------+----------------+ | id | default_pool_id | name | project_id | protocol | protocol_port | admin_state_up | +--------------------------------------+--------------------------------------+---------------------------------+----------------------------------+----------+---------------+----------------+ | 6f9fb37e-d50c-43e3-85ff-821aa69bc161 | 3783b3df-ab61-474e-9ccf-b6436a1863e9 | default/kubernetes:TCP:443 | 61014a8af2ed4a7e86269fe991821a55 | TCP | 443 | True | 
+--------------------------------------+--------------------------------------+---------------------------------+----------------------------------+----------+---------------+----------------+ $ openstack loadbalancer listener show 6f9fb37e-d50c-43e3-85ff-821aa69bc161 +-----------------------------+--------------------------------------+ | Field | Value | +-----------------------------+--------------------------------------+ | admin_state_up | True | | connection_limit | -1 | | created_at | 2021-05-25T09:16:53 | | default_pool_id | 3783b3df-ab61-474e-9ccf-b6436a1863e9 | | default_tls_container_ref | None | | description | | | id | 6f9fb37e-d50c-43e3-85ff-821aa69bc161 | | insert_headers | None | | l7policies | | | loadbalancers | 78758cc3-0c44-443d-a2f1-8969330ef262 | | name | default/kubernetes:TCP:443 | | operating_status | ONLINE | | project_id | 61014a8af2ed4a7e86269fe991821a55 | | protocol | TCP | | protocol_port | 443 | | provisioning_status | ACTIVE | | sni_container_refs | [] | | timeout_client_data | 600000 | | timeout_member_connect | 5000 | | timeout_member_data | 600000 | | timeout_tcp_inspect | 0 | | updated_at | 2021-05-27T09:33:15 | | client_ca_tls_container_ref | None | | client_authentication | NONE | | client_crl_container_ref | None | | allowed_cidrs | None | +-----------------------------+--------------------------------------+ The console UI in https://console-openshift-console.apps.ostest.shiftstack.com/ works fine. ************************************************************** 3. Upgrade from 4.7 to 4.8 when only Amphora driver is enabled ************************************************************** Installed OCP 4.7.0-0.nightly-2021-05-27-065913 on top of OSP 13.0.15 (2021-03-24.1) with OVS. 
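When driving the upgrade, the "upgrade finished" condition can be scripted from the plain-text clusterversion table. A sketch, with `upgrade_done` being a hypothetical name:

```shell
# Sketch: succeed once `oc get clusterversion` output (on stdin) reports
# the target version with AVAILABLE=True and PROGRESSING=False.
upgrade_done() {
  # $1 = target version string
  awk -v v="$1" '$1 == "version" && $2 == v && $3 == "True" && $4 == "False" { ok = 1 }
                 END { exit !ok }'
}

# Intended usage, polling until the rollout settles:
# until oc get clusterversion | upgrade_done 4.8.0-0.nightly-2021-05-31-085539; do
#   sleep 60
# done
```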
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2021-05-27-065913   True        False         104m    Cluster version is 4.7.0-0.nightly-2021-05-27-065913

As it's OSP 13 with OVS, only the Amphora provider is enabled:

$ openstack loadbalancer provider list
+---------+-------------------------------------------------+
| name    | description                                     |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver.                     |
| octavia | Deprecated alias of the Octavia Amphora driver. |
+---------+-------------------------------------------------+

The Kuryr API LB is created with the amphora provider (as are all the other LBs):

$ openstack loadbalancer list
+--------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+----------+
| id                                   | name                                | project_id                       | vip_address | provisioning_status | provider |
+--------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+----------+
| 8b861925-976d-45af-a51a-3f5d5c1fe6e3 | ostest-7fltp-kuryr-api-loadbalancer | 75990be152764ca1ad084b79d9f2dd6e | 172.30.0.1  | ACTIVE              | amphora  |
+--------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+----------+
...
+--------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+----------+

Upgrade to 4.8:

$ oc adm upgrade --to-image=registry.ci.openshift.org/ocp/release:4.8.0-0.nightly-2021-05-31-085539 --allow-explicit-upgrade --force=true
warning: Using by-tag pull specs is dangerous, and while we still allow it in combination with --force for backward compatibility, it would be much safer to pass a by-digest pull spec instead
warning: The requested upgrade image is not one of the available updates.
You have used --allow-explicit-upgrade to the update to proceed anyway
warning: --force overrides cluster verification of your supplied release image and waives any update precondition failures.
Updating to release image registry.ci.openshift.org/ocp/release:4.8.0-0.nightly-2021-05-31-085539

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-05-31-085539   True        False         8m25s   Cluster version is 4.8.0-0.nightly-2021-05-31-085539

The kuryr-api-loadbalancer amphora is re-created (the new one is named default/kubernetes):

+--------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+----------+
| id                                   | name                                | project_id                       | vip_address | provisioning_status | provider |
+--------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+----------+
| 299adc86-f945-4541-9b1b-d798d2d0158a | default/kubernetes                  | 75990be152764ca1ad084b79d9f2dd6e | 172.30.0.1  | ACTIVE              | amphora  |
+--------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+----------+
...
+--------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+----------+ $ openstack loadbalancer pool list --loadbalancer 299adc86-f945-4541-9b1b-d798d2d0158a +--------------------------------------+----------------------------+----------------------------------+---------------------+----------+----------------+----------------+ | id | name | project_id | provisioning_status | protocol | lb_algorithm | admin_state_up | +--------------------------------------+----------------------------+----------------------------------+---------------------+----------+----------------+----------------+ | c8a52e92-0930-475d-add4-52a358416766 | default/kubernetes:TCP:443 | 75990be152764ca1ad084b79d9f2dd6e | ACTIVE | TCP | ROUND_ROBIN | True | +--------------------------------------+----------------------------+----------------------------------+---------------------+----------+----------------+----------------+ $ openstack loadbalancer member list c8a52e92-0930-475d-add4-52a358416766 +--------------------------------------+-------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+ | id | name | project_id | provisioning_status | address | protocol_port | operating_status | weight | +--------------------------------------+-------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+ | f090e1b2-96fe-42b0-b8aa-c7caa7a27264 | default/kubernetes:6443 | 75990be152764ca1ad084b79d9f2dd6e | ACTIVE | 10.196.1.20 | 6443 | ONLINE | 1 | | 46a7a5cc-c397-4e90-baa4-4be5520c20bb | default/kubernetes:6443 | 75990be152764ca1ad084b79d9f2dd6e | ACTIVE | 10.196.2.199 | 6443 | ONLINE | 1 | | aad4597e-1566-490a-9ae0-639cf2018865 | default/kubernetes:6443 | 75990be152764ca1ad084b79d9f2dd6e | ACTIVE | 10.196.3.186 | 6443 | ONLINE | 1 | 
+--------------------------------------+-------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+ $ oc get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME ostest-7fltp-master-0 Ready master 14h v1.21.0-rc.0+4b2b6ff 10.196.3.186 <none> Red Hat Enterprise Linux CoreOS 48.84.202105281935-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-100.rhaos4.8.git3dfc2a1.el8 ostest-7fltp-master-1 Ready master 14h v1.21.0-rc.0+4b2b6ff 10.196.1.20 <none> Red Hat Enterprise Linux CoreOS 48.84.202105281935-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-100.rhaos4.8.git3dfc2a1.el8 ostest-7fltp-master-2 Ready master 14h v1.21.0-rc.0+4b2b6ff 10.196.2.199 <none> Red Hat Enterprise Linux CoreOS 48.84.202105281935-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-100.rhaos4.8.git3dfc2a1.el8 ostest-7fltp-worker-0-srtkv Ready worker 14h v1.21.0-rc.0+4b2b6ff 10.196.2.210 <none> Red Hat Enterprise Linux CoreOS 48.84.202105281935-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-100.rhaos4.8.git3dfc2a1.el8 ostest-7fltp-worker-0-tfpgq Ready worker 14h v1.21.0-rc.0+4b2b6ff 10.196.1.148 <none> Red Hat Enterprise Linux CoreOS 48.84.202105281935-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-100.rhaos4.8.git3dfc2a1.el8 There is a worker node in ERROR status due to an unrelated error in OpenStack. 
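One nuance when comparing the two providers: amphora members report operating_status ONLINE because a TCP health monitor exists, while ovn members report NO_MONITOR. Both are healthy states, so a scripted member check (hypothetical helper, parsing the member-list table) should treat them the same:

```shell
# Sketch: print the name of any member that is not in a healthy state,
# reading `openstack loadbalancer member list` table output from stdin.
# provisioning_status ($5) must be ACTIVE; operating_status ($8) may be
# ONLINE (amphora, with a health monitor) or NO_MONITOR (ovn, none).
unhealthy_members() {
  awk -F'|' 'NF > 8 && $2 !~ /id/ {
    gsub(/ /, "", $3); gsub(/ /, "", $5); gsub(/ /, "", $8)
    if ($5 != "ACTIVE" || ($8 != "ONLINE" && $8 != "NO_MONITOR")) print $3
  }'
}

# Intended usage:
# openstack loadbalancer member list "$POOL_ID" | unhealthy_members
```

An empty result means every member is healthy under either provider.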
$ openstack loadbalancer listener list | grep kubernetes +--------------------------------------+--------------------------------------+---------------------------------+----------------------------------+----------+---------------+----------------+ | id | default_pool_id | name | project_id | protocol | protocol_port | admin_state_up | +--------------------------------------+--------------------------------------+---------------------------------+----------------------------------+----------+---------------+----------------+ | 19de4a61-8a70-4296-9cd1-44c7f68f3638 | c8a52e92-0930-475d-add4-52a358416766 | default/kubernetes:TCP:443 | 75990be152764ca1ad084b79d9f2dd6e | TCP | 443 | True | +--------------------------------------+--------------------------------------+---------------------------------+----------------------------------+----------+---------------+----------------+ $ openstack loadbalancer listener show 19de4a61-8a70-4296-9cd1-44c7f68f3638 +-----------------------------+--------------------------------------+ | Field | Value | +-----------------------------+--------------------------------------+ | admin_state_up | True | | connection_limit | -1 | | created_at | 2021-05-31T20:44:40 | | default_pool_id | c8a52e92-0930-475d-add4-52a358416766 | | default_tls_container_ref | None | | description | | | id | 19de4a61-8a70-4296-9cd1-44c7f68f3638 | | insert_headers | None | | l7policies | | | loadbalancers | 299adc86-f945-4541-9b1b-d798d2d0158a | | name | default/kubernetes:TCP:443 | | operating_status | ONLINE | | project_id | 75990be152764ca1ad084b79d9f2dd6e | | protocol | TCP | | protocol_port | 443 | | provisioning_status | ACTIVE | | sni_container_refs | [] | | timeout_client_data | 600000 | | timeout_member_connect | 5000 | | timeout_member_data | 600000 | | timeout_tcp_inspect | 0 | | updated_at | 2021-05-31T20:45:10 | | client_ca_tls_container_ref | None | | client_authentication | NONE | | client_crl_container_ref | None | | allowed_cidrs | None | 
+-----------------------------+--------------------------------------+

The console UI in https://console-openshift-console.apps.ostest.shiftstack.com/ works fine.

*****************************************************
4. Upgrade from 4.7 to 4.8 when OVN driver is enabled
*****************************************************

Installed OCP 4.7.0-0.nightly-2021-05-27-075039 on top of OSP 16.1.5 (RHOS-16.1-RHEL-8-20210323.n.0) with OVN.

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2021-05-27-075039   True        False         7m38s   Cluster version is 4.7.0-0.nightly-2021-05-27-075039

As it's OSP 16 with OVN, both the ovn and amphora providers are enabled:

$ openstack loadbalancer provider list
+---------+-------------------------------------------------+
| name    | description                                     |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver.                     |
| octavia | Deprecated alias of the Octavia Amphora driver. |
| ovn     | Octavia OVN driver.                             |
+---------+-------------------------------------------------+

The Kuryr API LB is created by CNO with the amphora provider (while all the other LBs are created with the ovn provider):

$ openstack loadbalancer list
+--------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+----------+
| id                                   | name                                | project_id                       | vip_address | provisioning_status | provider |
+--------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+----------+
| c1e9408e-78ad-4e40-b98d-303b646a0eab | ostest-8bfq8-kuryr-api-loadbalancer | 61014a8af2ed4a7e86269fe991821a55 | 172.30.0.1  | ACTIVE              | amphora  |
+--------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+----------+
...
+--------------------------------------+-------------------------------------+----------------------------------+-------------+---------------------+----------+ Upgrade to 4.8 $ oc adm upgrade --to-image=registry.ci.openshift.org/ocp/release:4.8.0-0.nightly-2021-05-31-085539 --allow-explicit-upgrade --force=true warning: Using by-tag pull specs is dangerous, and while we still allow it in combination with --force for backward compatibility, it would be much safer to pass a by-digest pull spec instead warning: The requested upgrade image is not one of the available updates. You have used --allow-explicit-upgrade to the update to proceed anyway warning: --force overrides cluster verification of your supplied release image and waives any update precondition failures. Updating to release image registry.ci.openshift.org/ocp/release:4.8.0-0.nightly-2021-05-31-085539 $ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.8.0-0.nightly-2021-05-31-085539 True False 9h Cluster version is 4.8.0-0.nightly-2021-05-31-085539 Operators are ok: $ oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.8.0-0.nightly-2021-05-31-085539 True False False 7h16m baremetal 4.8.0-0.nightly-2021-05-31-085539 True False False 11h cloud-credential 4.8.0-0.nightly-2021-05-31-085539 True False False 11h cluster-autoscaler 4.8.0-0.nightly-2021-05-31-085539 True False False 11h config-operator 4.8.0-0.nightly-2021-05-31-085539 True False False 11h console 4.8.0-0.nightly-2021-05-31-085539 True False False 9h csi-snapshot-controller 4.8.0-0.nightly-2021-05-31-085539 True False False 9h dns 4.8.0-0.nightly-2021-05-31-085539 True False False 10h etcd 4.8.0-0.nightly-2021-05-31-085539 True False False 11h image-registry 4.8.0-0.nightly-2021-05-31-085539 True False False 11h ingress 4.8.0-0.nightly-2021-05-31-085539 True False False 10h insights 4.8.0-0.nightly-2021-05-31-085539 True False False 11h kube-apiserver 4.8.0-0.nightly-2021-05-31-085539 True 
False False 11h kube-controller-manager 4.8.0-0.nightly-2021-05-31-085539 True False False 11h kube-scheduler 4.8.0-0.nightly-2021-05-31-085539 True False False 11h kube-storage-version-migrator 4.8.0-0.nightly-2021-05-31-085539 True False False 41m machine-api 4.8.0-0.nightly-2021-05-31-085539 True False False 11h machine-approver 4.8.0-0.nightly-2021-05-31-085539 True False False 11h machine-config 4.8.0-0.nightly-2021-05-31-085539 True False False 9h marketplace 4.8.0-0.nightly-2021-05-31-085539 True False False 11h monitoring 4.8.0-0.nightly-2021-05-31-085539 True False False 7h24m network 4.8.0-0.nightly-2021-05-31-085539 True False False 11h node-tuning 4.8.0-0.nightly-2021-05-31-085539 True False False 10h openshift-apiserver 4.8.0-0.nightly-2021-05-31-085539 True False False 9h openshift-controller-manager 4.8.0-0.nightly-2021-05-31-085539 True False False 10h openshift-samples 4.8.0-0.nightly-2021-05-31-085539 True False False 10h operator-lifecycle-manager 4.8.0-0.nightly-2021-05-31-085539 True False False 11h operator-lifecycle-manager-catalog 4.8.0-0.nightly-2021-05-31-085539 True False False 11h operator-lifecycle-manager-packageserver 4.8.0-0.nightly-2021-05-31-085539 True False False 10h service-ca 4.8.0-0.nightly-2021-05-31-085539 True False False 11h storage 4.8.0-0.nightly-2021-05-31-085539 True False False 9h The kuryr-api-loadbalancer amphora has been removed: $ openstack loadbalancer list | grep amphora $ And a new default kubernetes service LB has been created with the ovn provider: $ openstack loadbalancer list | grep kubernetes +--------------------------------------+---------------------------+----------------------------------+----------------+---------------------+----------+ | id | name | project_id | vip_address | provisioning_status | provider | +--------------------------------------+---------------------------+----------------------------------+----------------+---------------------+----------+ | aa722cbd-6fd4-44fd-9f4f-104857c4a4de 
| default/kubernetes | 61014a8af2ed4a7e86269fe991821a55 | 172.30.0.1 | ACTIVE | ovn | +--------------------------------------+---------------------------+----------------------------------+----------------+---------------------+----------+ $ openstack loadbalancer pool list --loadbalancer aa722cbd-6fd4-44fd-9f4f-104857c4a4de2 +--------------------------------------+----------------------------+----------------------------------+---------------------+----------+----------------+----------------+ | id | name | project_id | provisioning_status | protocol | lb_algorithm | admin_state_up | +--------------------------------------+----------------------------+----------------------------------+---------------------+----------+----------------+----------------+ | ffef7b14-2d1e-4da9-b9a4-c69a960f5e3c | default/kubernetes:TCP:443 | 61014a8af2ed4a7e86269fe991821a55 | ACTIVE | TCP | SOURCE_IP_PORT | True | +--------------------------------------+----------------------------+----------------------------------+---------------------+----------+----------------+----------------+ $ openstack loadbalancer member list ffef7b14-2d1e-4da9-b9a4-c69a960f5e3c +--------------------------------------+-------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+ | id | name | project_id | provisioning_status | address | protocol_port | operating_status | weight | +--------------------------------------+-------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+ | 055a92ab-3120-41b7-8fd9-73ca2e6e0efd | default/kubernetes:6443 | 61014a8af2ed4a7e86269fe991821a55 | ACTIVE | 10.196.1.80 | 6443 | NO_MONITOR | 1 | | 0e787843-3969-4484-ae3e-54cc85177ebc | default/kubernetes:6443 | 61014a8af2ed4a7e86269fe991821a55 | ACTIVE | 10.196.0.103 | 6443 | NO_MONITOR | 1 | | 95e3a865-a080-41b5-9172-4c2220633f03 | default/kubernetes:6443 | 
61014a8af2ed4a7e86269fe991821a55 | ACTIVE | 10.196.1.91 | 6443 | NO_MONITOR | 1 | +--------------------------------------+-------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+ $ oc get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME ostest-8bfq8-master-0 Ready master 11h v1.21.0-rc.0+4b2b6ff 10.196.1.80 <none> Red Hat Enterprise Linux CoreOS 48.84.202105281935-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-100.rhaos4.8.git3dfc2a1.el8 ostest-8bfq8-master-1 Ready master 11h v1.21.0-rc.0+4b2b6ff 10.196.0.103 <none> Red Hat Enterprise Linux CoreOS 48.84.202105281935-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-100.rhaos4.8.git3dfc2a1.el8 ostest-8bfq8-master-2 Ready master 11h v1.21.0-rc.0+4b2b6ff 10.196.1.91 <none> Red Hat Enterprise Linux CoreOS 48.84.202105281935-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-100.rhaos4.8.git3dfc2a1.el8 ostest-8bfq8-worker-0-95pc9 Ready worker 11h v1.21.0-rc.0+4b2b6ff 10.196.0.60 <none> Red Hat Enterprise Linux CoreOS 48.84.202105281935-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-100.rhaos4.8.git3dfc2a1.el8 ostest-8bfq8-worker-0-9pxtp Ready worker 11h v1.21.0-rc.0+4b2b6ff 10.196.2.251 <none> Red Hat Enterprise Linux CoreOS 48.84.202105281935-0 (Ootpa) 4.18.0-305.el8.x86_64 cri-o://1.21.0-100.rhaos4.8.git3dfc2a1.el8 There is a worker node in ERROR status due to an unrelated error in OpenStack ('No valid host was found.'). 
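The member-to-master correspondence checked above can also be scripted. The sketch below uses the IP lists copied from the outputs above as sample data; on a live cluster the two lists would instead come from the actual `openstack loadbalancer member list` and `oc get nodes -o wide` outputs.

```shell
# Sketch: verify the default/kubernetes pool members are exactly the
# master nodes' INTERNAL-IPs. Sample data copied from the outputs above.
member_ips="10.196.1.80
10.196.0.103
10.196.1.91"

master_ips="10.196.1.80
10.196.0.103
10.196.1.91"

# Compare the two sets order-independently.
if [ "$(printf '%s\n' "$member_ips" | sort)" = "$(printf '%s\n' "$master_ips" | sort)" ]; then
  echo "OK: pool members match the master nodes"
else
  echo "FAIL: member/master IP mismatch"
fi
```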
$ openstack loadbalancer listener list | grep kubernetes
+--------------------------------------+--------------------------------------+----------------------------+----------------------------------+----------+---------------+----------------+
| id                                   | default_pool_id                      | name                       | project_id                       | protocol | protocol_port | admin_state_up |
+--------------------------------------+--------------------------------------+----------------------------+----------------------------------+----------+---------------+----------------+
| 474f29ab-f57c-4c5b-a3e5-2a633212a2d4 | 3783b3df-ab61-474e-9ccf-b6436a1863e9 | default/kubernetes:TCP:443 | 61014a8af2ed4a7e86269fe991821a55 | TCP      | 443           | True           |
+--------------------------------------+--------------------------------------+----------------------------+----------------------------------+----------+---------------+----------------+

$ openstack loadbalancer listener show 474f29ab-f57c-4c5b-a3e5-2a633212a2d4
+-----------------------------+--------------------------------------+
| Field                       | Value                                |
+-----------------------------+--------------------------------------+
| admin_state_up              | True                                 |
| connection_limit            | -1                                   |
| created_at                  | 2021-05-31T21:11:54                  |
| default_pool_id             | ffef7b14-2d1e-4da9-b9a4-c69a960f5e3c |
| default_tls_container_ref   | None                                 |
| description                 |                                      |
| id                          | 474f29ab-f57c-4c5b-a3e5-2a633212a2d4 |
| insert_headers              | None                                 |
| l7policies                  |                                      |
| loadbalancers               | aa722cbd-6fd4-44fd-9f4f-104857c4a4de |
| name                        | default/kubernetes:TCP:443           |
| operating_status            | ONLINE                               |
| project_id                  | 61014a8af2ed4a7e86269fe991821a55     |
| protocol                    | TCP                                  |
| protocol_port               | 443                                  |
| provisioning_status         | ACTIVE                               |
| sni_container_refs          | []                                   |
| timeout_client_data         | 600000                               |
| timeout_member_connect      | 5000                                 |
| timeout_member_data         | 600000                               |
| timeout_tcp_inspect         | 0                                    |
| updated_at                  | 2021-06-01T02:10:53                  |
| client_ca_tls_container_ref | None                                 |
| client_authentication       | NONE                                 |
| client_crl_container_ref    | None                                 |
| allowed_cidrs               | None                                 |
+-----------------------------+--------------------------------------+

The console UI in https://console-openshift-console.apps.ostest.shiftstack.com/ works fine.
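The post-upgrade state verified in this scenario (old amphora LB gone, default/kubernetes LB ACTIVE with the ovn provider) can be condensed into a quick check. This is only a rough sketch against a sample line copied from the `openstack loadbalancer list` output above; on a live cluster the real command output would be piped in instead.

```shell
# Sketch: post-upgrade checks for scenario 4, against a sample line
# copied from the `openstack loadbalancer list` output above.
lb_list="| aa722cbd-6fd4-44fd-9f4f-104857c4a4de | default/kubernetes | 61014a8af2ed4a7e86269fe991821a55 | 172.30.0.1 | ACTIVE | ovn |"

# 1. No amphora-provider load balancer (the old Kuryr API LB) may remain.
if printf '%s\n' "$lb_list" | grep -q amphora; then
  echo "FAIL: amphora LB still present"
else
  echo "OK: no amphora LBs left"
fi

# 2. The default/kubernetes LB must be ACTIVE and use the ovn provider.
#    Fields 6 and 7 of the pipe-separated row are status and provider.
printf '%s\n' "$lb_list" | grep 'default/kubernetes' \
  | awk -F'|' '{gsub(/ /, "", $6); gsub(/ /, "", $7); print $6 " " $7}'
# prints "ACTIVE ovn"
```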
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438