Bug 1883832 - LB should be recreated if manually deleted
Summary: LB should be recreated if manually deleted
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.6.0
Assignee: Luis Tomas Bolivar
QA Contact: GenadiC
URL:
Whiteboard:
Depends On: 1883166
Blocks:
 
Reported: 2020-09-30 11:07 UTC by Maysa Macedo
Modified: 2020-10-27 16:47 UTC
CC: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 16:46:56 UTC
Target Upstream Version:
Embargoed:




Links
- GitHub: openshift/kuryr-kubernetes pull 356 (closed) "Bug 1883832: Ensure klb handler reacretes lb" (last updated 2020-10-07 08:50:06 UTC)
- Red Hat Product Errata: RHBA-2020:4196 (last updated 2020-10-27 16:47:09 UTC)

Description Maysa Macedo 2020-09-30 11:07:22 UTC
Description of problem:


If an LB is manually deleted and the corresponding Service/Endpoints are updated, Kuryr should create a new load balancer reflecting the updated information.
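
A quick way to watch Kuryr react is through the controller logs and the per-Service KuryrLoadBalancer CR. This is a sketch only; the openshift-kuryr namespace, the kuryr-controller deployment name, and the klb short name are assumptions based on a default OpenShift-on-OpenStack install:

$ # Assumed names: namespace openshift-kuryr, deployment kuryr-controller
$ oc -n openshift-kuryr logs deployment/kuryr-controller | grep -i loadbalancer
$ # The CR Kuryr keeps in sync with the Service ("klb" assumed as the short name)
$ oc -n <namespace> get klb <service-name> -o yaml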

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce (condensed into runnable commands in the sketch below):
1. Use the Octavia amphora provider
2. Manually delete the LB backing a Service
3. Update the svc/endpoint
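
The same steps as runnable commands (a sketch that mirrors the verification in comment 4; kuryr/demo is just a convenient echo image):

$ oc new-project test
$ oc run --image=kuryr/demo demo
$ oc expose pod/demo --port 80 --target-port 8080    # Kuryr backs the Service with an Octavia LB
$ openstack loadbalancer delete test/demo --cascade  # step 2: remove the LB and all its children
$ # step 3: update the Service; a JSON merge patch replaces the whole ports list
$ oc patch svc demo --type=merge -p '{"spec":{"ports":[{"port":81,"protocol":"TCP","targetPort":8080}]}}'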

Actual results:
The load balancer is not recreated; the Service VIP stays unreachable.

Expected results:
Kuryr detects the missing load balancer and recreates it with the updated Service/Endpoints information.

Additional info:

Comment 4 Jon Uriarte 2020-10-02 10:38:38 UTC
Verified in:
4.6.0-0.nightly-2020-10-02-001427
OSP 13 2020-09-16.1

Create a SVC/LB:
---------------
$ oc new-project test
$ oc run --image=kuryr/demo demo
$ oc expose pod/demo --port 80 --target-port 8080
$ oc get pods,svc
NAME       READY   STATUS    RESTARTS   AGE
pod/demo   1/1     Running   0          6m57s

NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/demo   ClusterIP   172.30.143.236   <none>        80/TCP    2m46s

$ openstack loadbalancer list
+--------------------------------------+-----------+----------------+---------------------+----------+
| id                                   | name      | vip_address    | provisioning_status | provider |
+--------------------------------------+-----------+----------------+---------------------+----------+
| 362fd371-0929-4483-a6d8-0dab385af088 | test/demo | 172.30.143.236 | ACTIVE              | amphora  |
+--------------------------------------+-----------+----------------+---------------------+----------+
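
The LB wiring can be inspected further with the Octavia CLI; a sketch, assuming one listener on the Service port 80 and pool members built from the Endpoints (pod port 8080):

$ openstack loadbalancer listener list --loadbalancer test/demo   # expect protocol_port 80
$ openstack loadbalancer pool list --loadbalancer test/demo
$ openstack loadbalancer member list <pool-id>   # pool name/ID from the previous command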


Remove the LB manually:
----------------------
$ openstack loadbalancer delete test/demo --cascade

The test/demo LB was removed.
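
The cascade delete is asynchronous, so before editing the Service it is worth polling until the LB is really gone; a minimal check:

$ # Re-run until test/demo no longer appears
$ openstack loadbalancer list | grep test/demo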


Edit the svc:
------------
$ oc edit svc demo (changed the port to 81)

spec:
  clusterIP: 172.30.143.236
  ports:
  - port: 81
    protocol: TCP
    targetPort: 8080
  selector:
    run: demo
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

$ oc get svc
NAME   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
demo   ClusterIP   172.30.143.236   <none>        81/TCP    19m
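
Kuryr builds the pool members from the Endpoints object, so it is also worth confirming the endpoint still points at the pod's 8080 after the edit (a sketch):

$ oc get endpoints demo   # expect <pod-ip>:8080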


Check new LB creation:
---------------------
$ openstack loadbalancer list
+--------------------------------------+-----------+----------------+---------------------+----------+
| id                                   | name      | vip_address    | provisioning_status | provider |
+--------------------------------------+-----------+----------------+---------------------+----------+
| cbcef9ef-9df5-43ed-944f-b2c55d3fefb6 | test/demo | 172.30.143.236 | ACTIVE              | amphora  |
+--------------------------------------+-----------+----------------+---------------------+----------+
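
Besides provisioning_status, the recreated LB should now listen on the new Service port; a sketch of that check, assuming a single listener:

$ openstack loadbalancer listener list --loadbalancer test/demo   # expect protocol_port 81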

$ oc run --image=kuryr/demo caller

$ oc get pods,svc -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP               NODE                          NOMINATED NODE   READINESS GATES
pod/caller   1/1     Running   0          77s   10.128.116.248   ostest-mmc4l-worker-0-cc4bf   <none>           <none>
pod/demo     1/1     Running   0          27m   10.128.116.59    ostest-mmc4l-worker-0-mfjkl   <none>           <none>

NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/demo   ClusterIP   172.30.143.236   <none>        81/TCP    23m   run=demo

$ oc rsh pod/caller curl 172.30.143.236:80
curl: (7) Failed to connect to 172.30.143.236 port 80: Operation timed out 
command terminated with exit code 7

$ oc rsh pod/caller curl 172.30.143.236:81
demo: HELLO! I AM ALIVE!!!

The recreated LB answers on the new port 81 while the old port 80 no longer does, confirming Kuryr rebuilt it from the updated Service definition.

Comment 7 errata-xmlrpc 2020-10-27 16:46:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

