Bug 1844093 - LBaaSLoadBalancer object has wrong default value for security_groups
Summary: LBaaSLoadBalancer object has wrong default value for security_groups
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.4.z
Assignee: Maysa Macedo
QA Contact: GenadiC
URL:
Whiteboard:
Depends On: 1843784
Blocks:
 
Reported: 2020-06-04 15:34 UTC by OpenShift BugZilla Robot
Modified: 2020-06-23 00:58 UTC
CC List: 2 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-06-23 00:57:50 UTC
Target Upstream Version:
Embargoed:




Links
Github openshift/kuryr-kubernetes pull 265 (closed): [release-4.4] [release-4.5] Bug 1844093: Ensure security_groups on LBaaSLoadBalancer defaults to empty list (last updated 2020-06-22 14:57:14 UTC)
Red Hat Product Errata RHBA-2020:2580 (last updated 2020-06-23 00:58:09 UTC)

Description OpenShift BugZilla Robot 2020-06-04 15:34:48 UTC
+++ This bug was initially created as a clone of Bug #1843784 +++

+++ This bug was initially created as a clone of Bug #1843674 +++

Description of problem:

When no security groups are present on the LBaaSLoadBalancer oslo
versioned object, the security_groups field should default to an empty
list, not to None. Otherwise, iterating over the security_groups field
fails:

2020-05-29 10:10:02.892 1 ERROR kuryr_kubernetes.controller.drivers.lbaasv2 if sg.id in loadbalancer.security_groups:
2020-05-29 10:10:02.892 1 ERROR kuryr_kubernetes.controller.drivers.lbaasv2 TypeError: argument of type 'NoneType' is not iterable.
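
For reference, a minimal sketch of the fix described by the linked pull
request's title, using oslo.versionedobjects directly with a stand-in
class (this is not the actual kuryr-kubernetes object definition):

from oslo_versionedobjects import base as obj_base
from oslo_versionedobjects import fields as obj_fields

@obj_base.VersionedObjectRegistry.register
class LBaaSLoadBalancer(obj_base.VersionedObject):
    VERSION = '1.0'
    fields = {
        # default=[] is the fix: an unpopulated security_groups field
        # then materializes as an empty list instead of None, so
        # membership tests like "sg.id in loadbalancer.security_groups"
        # no longer raise the TypeError above.
        'security_groups': obj_fields.ListOfStringsField(default=[]),
    }

lb = LBaaSLoadBalancer()
lb.obj_set_defaults()  # fill unset fields from their declared defaults
print('some-sg-id' in lb.security_groups)  # False, instead of a TypeError

A more local workaround would be to iterate over
(loadbalancer.security_groups or []) at each call site, but defaulting
the field keeps every consumer safe.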

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 3 rlobillo 2020-06-16 10:49:11 UTC
Verified on:
- OCP4.4.0-0.nightly-2020-06-14-142924 on OSP16 (RHOS_TRUNK-16.0-RHEL-8-20200513.n.1) with OVN.
- OCP4.4.0-0.nightly-2020-06-14-142924 on OSP13 (2020-06-09.2) + OVS.


######################
OCP4.4.0-0.nightly-2020-06-14-142924 on OSP16 (RHOS_TRUNK-16.0-RHEL-8-20200513.n.1) with OVN - verification:

1- Set up environment:

$ oc get all
NAME                       READY   STATUS      RESTARTS   AGE
pod/demo-1-4bscw           1/1     Running     0          68s
pod/demo-1-deploy          0/1     Completed   0          99s
pod/demo-caller-1-2z2ww    1/1     Running     0          77s
pod/demo-caller-1-deploy   0/1     Completed   0          99s

NAME                                  DESIRED   CURRENT   READY   AGE
replicationcontroller/demo-1          1         1         1       100s
replicationcontroller/demo-caller-1   1         1         1       100s

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/demo-1-4bscw   ClusterIP   172.30.177.222   <none>        80/TCP    36s

NAME                                             REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/demo          1          1         1         config
deploymentconfig.apps.openshift.io/demo-caller   1          1         1         config


$ openstack loadbalancer list | grep demo
| 0727c5be-fe02-4976-a620-fc0e4916857d | test/demo-1-4bscw                                                           | 1022ce3801a445df869b61b032d08925 | 172.30.177.222 | ACTIVE              | ovn      |

(overcloud) [stack@undercloud-0 ~]$ oc get pods -n openshift-kuryr
NAME                                   READY   STATUS    RESTARTS   AGE
kuryr-cni-9fldw                        1/1     Running   0          158m
kuryr-cni-bftbd                        1/1     Running   0          158m
kuryr-cni-dlwfk                        1/1     Running   0          158m
kuryr-cni-gvz86                        1/1     Running   0          158m
kuryr-cni-kwxfm                        1/1     Running   0          158m
kuryr-cni-n42tz                        1/1     Running   0          158m
kuryr-controller-659564446b-x7899      1/1     Running   0          158m
kuryr-dns-admission-controller-4wjjs   1/1     Running   0          157m
kuryr-dns-admission-controller-gbjcs   1/1     Running   0          157m
kuryr-dns-admission-controller-r6894   1/1     Running   0          157m

2 - Forcing the controller to clean up a leftover load balancer:

# Delete the svc while the kuryr-controller pod is being recreated, so the LB remains as a leftover for the new kuryr-controller.

$ date && oc delete pod -n openshift-kuryr $(oc get pods -n openshift-kuryr -o jsonpath='{.items[6].metadata.name}') &
[1] 578985
$ Tue Jun 16 10:04:10 UTC 2020
pod "kuryr-controller-659564446b-x7899" deleted
$ oc delete service/demo-1-4bscw && openstack loadbalancer list | grep demo
service "demo-1-4bscw" deleted
[1]+  Done                    date && oc delete pod -n openshift-kuryr $(oc get pods -n openshift-kuryr -o jsonpath='{.items[6].metadata.name}')
| 0727c5be-fe02-4976-a620-fc0e4916857d | test/demo-1-4bscw                                                           | 1022ce3801a445df869b61b032d08925 | 172.30.177.222 | ACTIVE              | ovn      |

# Leftover LB deleted:

$ openstack loadbalancer list | grep demo
$

# No restarts observed:

(overcloud) [stack@undercloud-0 ~]$ oc get pods -n openshift-kuryr
NAME                                   READY   STATUS    RESTARTS   AGE
kuryr-cni-9fldw                        1/1     Running   0          160m
kuryr-cni-bftbd                        1/1     Running   0          160m
kuryr-cni-dlwfk                        1/1     Running   0          160m
kuryr-cni-gvz86                        1/1     Running   0          160m
kuryr-cni-kwxfm                        1/1     Running   0          160m
kuryr-cni-n42tz                        1/1     Running   0          160m
kuryr-controller-659564446b-wngnw      0/1     Running   0          25s
kuryr-dns-admission-controller-4wjjs   1/1     Running   0          160m
kuryr-dns-admission-controller-gbjcs   1/1     Running   0          160m
kuryr-dns-admission-controller-r6894   1/1     Running   0          160m

# No errors observed: 

$ oc logs -n openshift-kuryr $(oc get pods -n openshift-kuryr -o jsonpath='{.items[6].metadata.name}') | grep ERROR
$

3 - Recreate svc to confirm stability:

$ oc expose pod/demo-1-4bscw --port 80 --target-port 8080
$ oc rsh pod/demo-caller-1-2z2ww curl 172.30.91.218
demo-1-4bscw: HELLO! I AM ALIVE!!!
$ oc logs -n openshift-kuryr kuryr-controller-659564446b-wngnw | grep ERROR
$ oc get pods -n openshift-kuryr
NAME                                   READY   STATUS    RESTARTS   AGE
kuryr-cni-9fldw                        1/1     Running   0          164m
kuryr-cni-bftbd                        1/1     Running   0          164m
kuryr-cni-dlwfk                        1/1     Running   0          164m
kuryr-cni-gvz86                        1/1     Running   0          164m
kuryr-cni-kwxfm                        1/1     Running   0          164m
kuryr-cni-n42tz                        1/1     Running   0          164m
kuryr-controller-659564446b-wngnw      1/1     Running   0          4m19s
kuryr-dns-admission-controller-4wjjs   1/1     Running   0          164m
kuryr-dns-admission-controller-gbjcs   1/1     Running   0          164m
kuryr-dns-admission-controller-r6894   1/1     Running   0          163m

#####################
OCP4.4.0-0.nightly-2020-06-14-142924 on OSP13 (2020-06-09.2) + OVS verification:

1- Set up environment:

$ oc new-project test && oc run --image kuryr/demo demo && oc run --image kuryr/demo demo-caller
$ oc expose pod/demo-1-dct27 --port 80 --target-port 8080
$ oc get all
NAME                       READY   STATUS      RESTARTS   AGE
pod/demo-1-dct27           1/1     Running     0          55s
pod/demo-1-deploy          0/1     Completed   0          79s
pod/demo-caller-1-deploy   0/1     Completed   0          79s
pod/demo-caller-1-h9bv9    1/1     Running     0          52s

NAME                                  DESIRED   CURRENT   READY   AGE
replicationcontroller/demo-1          1         1         1       80s
replicationcontroller/demo-caller-1   1         1         1       80s

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/demo-1-dct27   ClusterIP   172.30.148.220   <none>        80/TCP    4s

NAME                                             REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/demo          1          1         1         config
deploymentconfig.apps.openshift.io/demo-caller   1          1         1         config

$ oc rsh pod/demo-caller-1-h9bv9 curl 172.30.148.220
demo-1-dct27: HELLO! I AM ALIVE!!!

$ openstack loadbalancer list | grep demo
| 10726035-1eb7-48dd-9b82-11a4b12e5320 | test/demo-1-dct27                                                           | 14d5b40c78f04b689eed1f43bcc163d5 | 172.30.148.220 | ACTIVE              | octavia  |


2 - Forcing the controller to clean up a leftover load balancer:

# Delete the svc while the kuryr-controller pod is being recreated, so the LB remains as a leftover for the new kuryr-controller.

$ date && oc delete pod -n openshift-kuryr $(oc get pods -n openshift-kuryr -l app=kuryr-controller -o jsonpath='{.items[0].metadata.name}') &
[1] 24514
(overcloud) [stack@undercloud-0 ~]$ Tue Jun 16 06:35:53 EDT 2020
pod "kuryr-controller-78494d6fdd-796kw" deleted

$ oc delete service/demo-1-dct27 && openstack loadbalancer list | grep demo
service "demo-1-dct27" deleted
| 10726035-1eb7-48dd-9b82-11a4b12e5320 | test/demo-1-dct27                                                           | 14d5b40c78f04b689eed1f43bcc163d5 | 172.30.148.220 | ACTIVE              | octavia  |

# Leftover LB deleted:

$ openstack loadbalancer list | grep demo
$

# Neither errors nor restarts observed:

$ oc logs -n openshift-kuryr $(oc get pods -n openshift-kuryr -l app=kuryr-controller -o jsonpath='{.items[0].metadata.name}') | grep ERROR
$
$ oc get pods -n openshift-kuryr
NAME                                   READY   STATUS    RESTARTS   AGE
kuryr-cni-9jkck                        1/1     Running   0          19m
kuryr-cni-hl5gg                        1/1     Running   0          19m
kuryr-cni-j6lkb                        1/1     Running   0          19m
kuryr-cni-mzv6g                        1/1     Running   0          19m
kuryr-cni-q7n52                        1/1     Running   0          19m
kuryr-cni-z7cqv                        1/1     Running   0          19m
kuryr-controller-78494d6fdd-2rphp      1/1     Running   0          2m8s
kuryr-dns-admission-controller-28scg   1/1     Running   0          19m
kuryr-dns-admission-controller-bd97f   1/1     Running   0          18m
kuryr-dns-admission-controller-mjm4h   1/1     Running   0          19m

3 - Recreate svc to confirm stability:

$ oc expose pod/demo-1-dct27 --port 80 --target-port 8080
service/demo-1-dct27 exposed
$ oc get all
NAME                       READY   STATUS      RESTARTS   AGE
pod/demo-1-dct27           1/1     Running     0          6m38s
pod/demo-1-deploy          0/1     Completed   0          7m2s
pod/demo-caller-1-deploy   0/1     Completed   0          7m2s
pod/demo-caller-1-h9bv9    1/1     Running     0          6m35s

NAME                                  DESIRED   CURRENT   READY   AGE
replicationcontroller/demo-1          1         1         1       7m3s
replicationcontroller/demo-caller-1   1         1         1       7m3s

NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/demo-1-dct27   ClusterIP   172.30.60.71   <none>        80/TCP    28s

NAME                                             REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/demo          1          1         1         config
deploymentconfig.apps.openshift.io/demo-caller   1          1         1         config

$ oc rsh pod/demo-caller-1-h9bv9 curl 172.30.60.71
demo-1-dct27: HELLO! I AM ALIVE!!!
$ oc get pods -n openshift-kuryr
NAME                                   READY   STATUS    RESTARTS   AGE
kuryr-cni-9jkck                        1/1     Running   0          21m
kuryr-cni-hl5gg                        1/1     Running   0          21m
kuryr-cni-j6lkb                        1/1     Running   0          21m
kuryr-cni-mzv6g                        1/1     Running   0          21m
kuryr-cni-q7n52                        1/1     Running   0          21m
kuryr-cni-z7cqv                        1/1     Running   0          21m
kuryr-controller-78494d6fdd-2rphp      1/1     Running   0          4m12s
kuryr-dns-admission-controller-28scg   1/1     Running   0          21m
kuryr-dns-admission-controller-bd97f   1/1     Running   0          21m
kuryr-dns-admission-controller-mjm4h   1/1     Running   0          21m

Comment 5 errata-xmlrpc 2020-06-23 00:57:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2580

