Description of problem:
Assume there is an existing OpenShift cluster set up with Kuryr, running with the Amphora Octavia provider configured. If the underlying OpenStack cloud is reconfigured to add the OVN Octavia provider, the CNO will reconfigure Kuryr to use it. Operations on the existing Amphora-based load balancers will then be performed incorrectly.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Deploy a cluster with Kuryr on an OpenStack cloud whose Octavia has only the Amphora provider.
2. Add the OVN provider to the cloud.
3. Observe Kuryr getting reconfigured to use it.

Actual results:
Kuryr gets reconfigured.

Expected results:
Kuryr should not be reconfigured until the cluster deployer explicitly requests it.

Additional info:
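The expected behavior can be confirmed by reading the provider annotation the network operator keeps on the Kuryr config map. A minimal sketch (the annotation name comes from this report; in a live cluster you would pipe the real `oc get configmap -o yaml` output, a saved sample is embedded here so the check runs standalone):

```shell
# Sample configmap metadata as reported by `oc get configmap -o yaml`
# (hypothetical saved copy; on a real cluster, fetch it instead).
cat <<'EOF' > /tmp/kuryr-cm.yaml
metadata:
  annotations:
    networkoperator.openshift.io/kuryr-octavia-provider: default
EOF

# Extract the provider Kuryr is pinned to; "default" means the Amphora
# provider is kept even if new providers appear in the cloud.
provider=$(awk -F': ' '/kuryr-octavia-provider/ { print $2 }' /tmp/kuryr-cm.yaml)
echo "$provider"
```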
Verified in 4.4.0-0.nightly-2020-03-19-135403 on top of OSP 16 (RHOS_TRUNK-16.0-RHEL-8-20200226.n.1 compose).

Deployed OSP 16 with the OVN Octavia provider disabled:

$ openstack loadbalancer provider list
+---------+-------------------------------------------------+
| name    | description                                     |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver.                     |
| octavia | Deprecated alias of the Octavia Amphora driver. |
+---------+-------------------------------------------------+

Deployed OCP 4.4:

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE    STATUS
version   4.4.0-0.nightly-2020-03-19-135403   True        False         9m11s    Cluster version is 4.4.0-0.nightly-2020-03-19-135403

All load balancers are created with the amphora driver (checked via the provider column of `openstack loadbalancer list`).

Annotations in the Kuryr config map:

  annotations:
    networkoperator.openshift.io/kuryr-octavia-provider: default

Network operator pod log:

2020/03/20 11:59:12 Detected that Kuryr was already configured to use default LB provider. Making sure to keep it that way.

Added the ovn provider in octavia.conf:

enabled_provider_drivers=amphora: The Octavia Amphora driver.,octavia: Deprecated alias of the Octavia Amphora driver.,ovn: Octavia OVN driver.

$ openstack loadbalancer provider list
+---------+-------------------------------------------------+
| name    | description                                     |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver.                     |
| octavia | Deprecated alias of the Octavia Amphora driver. |
| ovn     | Octavia OVN driver.                             |
+---------+-------------------------------------------------+

The network operator kept the existing setting:

2020/03/20 12:41:22 Detected that Kuryr was already configured to use default LB provider. Making sure to keep it that way.
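For scripted verification, the provider names can be pulled out of the `openstack loadbalancer provider list` table. A sketch of the parsing (the table output from this report is embedded so it runs standalone; in a live cloud, pipe the real command output into the same awk program):

```shell
# Parse provider names out of the CLI table: skip the two border lines and
# the header row (NR > 3), keep only rows that actually have columns
# (NF > 2), and strip padding spaces from the "name" column.
providers=$(awk -F'|' 'NR > 3 && NF > 2 { gsub(/ /, "", $2); print $2 }' <<'EOF'
+---------+-------------------------------------------------+
| name    | description                                     |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver.                     |
| octavia | Deprecated alias of the Octavia Amphora driver. |
| ovn     | Octavia OVN driver.                             |
+---------+-------------------------------------------------+
EOF
)
echo "$providers"
```

With this, a check such as detecting whether `ovn` has appeared in the cloud becomes a simple `grep` over `$providers`.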
Created a service:

$ oc new-project test
$ oc run --image kuryr/demo demo
$ oc get pods
NAME            READY   STATUS      RESTARTS   AGE
demo-1-deploy   0/1     Completed   0          34s
demo-1-mfrp4    1/1     Running     0          13s
$ oc expose dc/demo --port 80 --target-port 8080
$ openstack loadbalancer list
+--------------------------------------+-----------+----------------------------------+----------------+---------------------+----------+
| id                                   | name      | project_id                       | vip_address    | provisioning_status | provider |
+--------------------------------------+-----------+----------------------------------+----------------+---------------------+----------+
...
| 6b149507-088c-4cc8-930f-d666c392e64e | test/demo | 5374ee1858ca42c68843a764cc521c8c | 172.30.82.182  | ACTIVE              | amphora  |
+--------------------------------------+-----------+----------------------------------+----------------+---------------------+----------+
$ oc get svc
NAME   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
demo   ClusterIP   172.30.82.182   <none>        80/TCP    12m
$ oc run --image kuryr/demo caller
$ oc get pods -o wide
NAME              READY   STATUS      RESTARTS   AGE     IP               NODE                        NOMINATED NODE   READINESS GATES
caller-1-6w44b    1/1     Running     0          2m45s   10.128.118.237   ostest-cdsnz-worker-ls8gn   <none>           <none>
caller-1-deploy   0/1     Completed   0          2m48s   10.128.119.215   ostest-cdsnz-worker-ls8gn   <none>           <none>
demo-1-deploy     0/1     Completed   0          14m     10.128.119.24    ostest-cdsnz-worker-ls8gn   <none>           <none>
demo-1-mfrp4      1/1     Running     0          13m     10.128.118.84    ostest-cdsnz-worker-ls8gn   <none>           <none>
$ oc rsh caller-1-6w44b curl 172.30.82.182
demo-1-mfrp4: HELLO! I AM ALIVE!!!
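The key assertion above is that the service's load balancer stays on the amphora provider. A sketch of extracting the provider column for a given load balancer (the row from this report's `openstack loadbalancer list` output is embedded so the parsing runs standalone; live, you would grep the real command output for the service name):

```shell
# Row for the test/demo load balancer, as printed by
# `openstack loadbalancer list` (sample taken from the verification above).
row='| 6b149507-088c-4cc8-930f-d666c392e64e | test/demo | 5374ee1858ca42c68843a764cc521c8c | 172.30.82.182 | ACTIVE | amphora |'

# The provider is the 7th '|'-separated field (field 1 is the empty string
# before the leading '|'); strip the padding spaces.
provider=$(echo "$row" | awk -F'|' '{ gsub(/ /, "", $7); print $7 }')
echo "$provider"
```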
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581