Bug 1796114 - Kuryr getting reconfigured when underlying cloud gets OVN Octavia provider set up
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 4.4.0
Assignee: Michał Dulko
QA Contact: GenadiC
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-01-29 16:04 UTC by Michał Dulko
Modified: 2020-05-04 11:27 UTC

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-04 11:27:29 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-network-operator pull 457 0 None closed Bug 1796114: Kuryr: Do not reconfigure if LB providers change 2020-09-26 13:03:33 UTC
Red Hat Product Errata RHBA-2020:0581 0 None None None 2020-05-04 11:27:45 UTC

Description Michał Dulko 2020-01-29 16:04:36 UTC
Description of problem:
Assume there is an existing OpenShift cluster set up with Kuryr that is running with the amphora Octavia provider configured. If the underlying OpenStack cloud gets reconfigured to add the OVN Octavia provider, the CNO will reconfigure Kuryr to use it. Operations on the existing amphora-based load balancers will then be performed incorrectly, as they will be handled through the wrong provider.
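
To illustrate the problematic behavior: the operator effectively preferred OVN whenever the cloud advertised it, flipping providers on clusters that were already running amphora load balancers. A minimal Go sketch of that pre-fix selection logic (function and variable names here are illustrative, not the actual CNO code):

package main

import "fmt"

// selectProvider mimics the problematic pre-fix behavior: whenever the
// cloud advertises the "ovn" Octavia provider, it gets chosen, even for
// clusters already running amphora-based load balancers.
func selectProvider(available []string) string {
	for _, p := range available {
		if p == "ovn" {
			return "ovn"
		}
	}
	return "default" // the amphora driver
}

func main() {
	// Before the cloud is reconfigured: only amphora is available.
	fmt.Println(selectProvider([]string{"amphora", "octavia"})) // default
	// After OVN is added, the same cluster suddenly flips providers.
	fmt.Println(selectProvider([]string{"amphora", "octavia", "ovn"})) // ovn
}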

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Deploy cluster with Kuryr on OpenStack cloud with Octavia having only Amphora provider.
2. Add OVN provider to the cloud.
3. See Kuryr getting reconfigured to use it.

Actual results:
Kuryr gets reconfigured.

Expected results:
Kuryr should not get reconfigured until manually requested by the cluster deployer.

Additional info:

Comment 2 Jon Uriarte 2020-03-20 13:48:55 UTC
Verified in 4.4.0-0.nightly-2020-03-19-135403 on top of OSP 16 RHOS_TRUNK-16.0-RHEL-8-20200226.n.1 compose.

Deployed OSP 16 and disabled the OVN Octavia provider:

$ openstack loadbalancer provider list
+---------+-------------------------------------------------+
| name    | description                                     |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver.                     |
| octavia | Deprecated alias of the Octavia Amphora driver. |
+---------+-------------------------------------------------+

Deployed OCP 4.4:
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.4.0-0.nightly-2020-03-19-135403   True        False         9m11s   Cluster version is 4.4.0-0.nightly-2020-03-19-135403

All the loadbalancers are created with the amphora driver (checked with the provider column of `openstack loadbalancer list`).

Annotations in the kuryr config map:
  annotations:
    networkoperator.openshift.io/kuryr-octavia-provider: default

Network operator pod log:
2020/03/20 11:59:12 Detected that Kuryr was already configured to use default LB provider. Making sure to keep it that way.
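
The log line above reflects the fix from the linked PR: the operator records the chosen provider in the config map annotation and reuses it on subsequent reconciles instead of re-detecting one. A hedged Go sketch of that idea (the annotation key and log message are taken from the output above; the function and its shape are hypothetical, not the actual CNO code):

package main

import "fmt"

// Annotation key as seen in the kuryr config map above.
const providerAnnotation = "networkoperator.openshift.io/kuryr-octavia-provider"

// resolveProvider keeps an already-recorded provider instead of
// re-detecting one, so adding OVN to the cloud no longer reconfigures
// existing clusters. New clusters fall through to detection.
func resolveProvider(annotations map[string]string, detected string) string {
	if p, ok := annotations[providerAnnotation]; ok && p != "" {
		fmt.Printf("Detected that Kuryr was already configured to use %s LB provider. Making sure to keep it that way.\n", p)
		return p
	}
	return detected
}

func main() {
	anns := map[string]string{providerAnnotation: "default"}
	// Even if detection would now pick "ovn", the annotation wins.
	fmt.Println(resolveProvider(anns, "ovn")) // default
}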

Added the OVN provider in octavia.conf:
enabled_provider_drivers=amphora: The Octavia Amphora driver.,octavia: Deprecated alias of the Octavia Amphora driver.,ovn: Octavia OVN driver.

$ openstack loadbalancer provider list
+---------+-------------------------------------------------+
| name    | description                                     |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver.                     |
| octavia | Deprecated alias of the Octavia Amphora driver. |
| ovn     | Octavia OVN driver.                             |
+---------+-------------------------------------------------+

2020/03/20 12:41:22 Detected that Kuryr was already configured to use default LB provider. Making sure to keep it that way.

Created a service:
$ oc new-project test
$ oc run --image kuryr/demo demo
$ oc get pods
NAME            READY   STATUS      RESTARTS   AGE
demo-1-deploy   0/1     Completed   0          34s
demo-1-mfrp4    1/1     Running     0          13s

$ oc expose dc/demo --port 80 --target-port 8080

$ openstack loadbalancer list
+--------------------------------------+-----------+----------------------------------+----------------+---------------------+----------+
| id                                   | name      | project_id                       | vip_address    | provisioning_status | provider |
+--------------------------------------+-----------+----------------------------------+----------------+---------------------+----------+
...
| 6b149507-088c-4cc8-930f-d666c392e64e | test/demo | 5374ee1858ca42c68843a764cc521c8c | 172.30.82.182  | ACTIVE              | amphora  |
+--------------------------------------+-----------+----------------------------------+----------------+---------------------+----------+

$ oc get svc
NAME   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
demo   ClusterIP   172.30.82.182   <none>        80/TCP    12m

$ oc run --image kuryr/demo caller
$ oc get pods -o wide
NAME              READY   STATUS      RESTARTS   AGE     IP               NODE                        NOMINATED NODE   READINESS GATES
caller-1-6w44b    1/1     Running     0          2m45s   10.128.118.237   ostest-cdsnz-worker-ls8gn   <none>           <none>
caller-1-deploy   0/1     Completed   0          2m48s   10.128.119.215   ostest-cdsnz-worker-ls8gn   <none>           <none>
demo-1-deploy     0/1     Completed   0          14m     10.128.119.24    ostest-cdsnz-worker-ls8gn   <none>           <none>
demo-1-mfrp4      1/1     Running     0          13m     10.128.118.84    ostest-cdsnz-worker-ls8gn   <none>           <none>

$ oc rsh caller-1-6w44b curl 172.30.82.182                                                                                                                                      
demo-1-mfrp4: HELLO! I AM ALIVE!!!

Comment 4 errata-xmlrpc 2020-05-04 11:27:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581

