Bug 1843674 - LBaaSLoadBalancer object has wrong default value for security_groups
Summary: LBaaSLoadBalancer object has wrong default value for security_groups
Keywords:
Status: VERIFIED
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.6.0
Assignee: Maysa Macedo
QA Contact: GenadiC
URL:
Whiteboard:
Depends On:
Blocks: 1843784
 
Reported: 2020-06-03 19:29 UTC by Maysa Macedo
Modified: 2020-08-04 11:38 UTC
CC List: 2 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:




Links:
GitHub: openshift/kuryr-kubernetes pull 257 (closed) - Bug 1843674: Ensure security_groups on LBaaSLoadBalancer defaults to empty list (last updated 2020-08-03 09:33:37 UTC)

Description Maysa Macedo 2020-06-03 19:29:13 UTC
Description of problem:

When no security groups are present on the LBaaSLoadBalancer oslo
versioned object, the security_groups field should default to an empty
list rather than None. Otherwise, iterating over or testing membership
in the security_groups field fails:

2020-05-29 10:10:02.892 1 ERROR kuryr_kubernetes.controller.drivers.lbaasv2 if sg.id in loadbalancer.security_groups:
2020-05-29 10:10:02.892 1 ERROR kuryr_kubernetes.controller.drivers.lbaasv2 TypeError: argument of type 'NoneType' is not iterable.
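
For illustration, here is a minimal sketch of the fix's idea, assuming plain oslo.versionedobjects semantics. This is not the actual kuryr-kubernetes class; the class name and field type are simplified stand-ins:

from oslo_versionedobjects import base as obj_base
from oslo_versionedobjects import fields as obj_fields

class SketchLoadBalancer(obj_base.VersionedObject):
    VERSION = '1.0'
    fields = {
        # default=[] lets obj_set_defaults() populate an empty list, so a
        # membership test like 'sg.id in loadbalancer.security_groups'
        # no longer raises "TypeError: argument of type 'NoneType' is
        # not iterable" when no security groups were ever set.
        'security_groups': obj_fields.ListOfStringsField(default=[]),
    }

lb = SketchLoadBalancer()
lb.obj_set_defaults()  # fills every field that declares a default
assert lb.security_groups == []  # empty list, not None
assert 'some-sg-id' not in lb.security_groups  # safe membership test

The linked pull request applies the same idea to the real LBaaSLoadBalancer object's security_groups field.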

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 3 rlobillo 2020-08-04 11:38:15 UTC
Verified on OCP4.6.0-0.nightly-2020-07-25-065959 over OSP16.1 (RHOS-16.1-RHEL-8-20200723.n.0) with ovn-octavia enabled.

The NetworkPolicy (NP) and Conformance test suites were run with the expected results.

Forcing the controller to clean up a leftover load balancer does not raise any error, and Kuryr keeps providing its service normally.

Steps:

#1. Set up the environment:

(overcloud) [stack@undercloud-0 ~]$ oc get all
NAME              READY   STATUS    RESTARTS   AGE
pod/demo          1/1     Running   0          20m
pod/demo-caller   1/1     Running   0          20m

NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/demo   ClusterIP   172.30.255.221   <none>        80/TCP    5s
(overcloud) [stack@undercloud-0 ~]$ oc rsh pod/demo-caller curl 172.30.255.221
demo: HELLO! I AM ALIVE!!!
(overcloud) [stack@undercloud-0 ~]$ . overcloudrc && openstack loadbalancer show test/demo
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| created_at          | 2020-08-04T11:31:04                  |
| description         |                                      |
| flavor_id           | None                                 |
| id                  | af2ed562-046c-4df5-895c-514c7ace0b9f |
| listeners           | 0cdc18b5-53ec-4096-8b56-e157cffedd54 |
| name                | test/demo                            |
| operating_status    | ONLINE                               |
| pools               | 6140584e-fcff-4702-8d61-e4bf56b7b443 |
| project_id          | 7c4025fbae4b439e8e184b2a2c8b8ee8     |
| provider            | ovn                                  |
| provisioning_status | ACTIVE                               |
| updated_at          | 2020-08-04T11:31:24                  |
| vip_address         | 172.30.255.221                       |
| vip_network_id      | 6242669a-434b-4d6b-8313-6147eb187c43 |
| vip_port_id         | 46b6abcd-2e4a-434b-a389-abcb48257934 |
| vip_qos_policy_id   | None                                 |
| vip_subnet_id       | 52d0290a-1a17-4620-a548-1d44b1f32274 |
+---------------------+--------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ 

#2. Delete the service and restart the controller, leaving the load balancer ACTIVE:

(overcloud) [stack@undercloud-0 ~]$ oc delete -n openshift-kuryr $(oc get pods -n openshift-kuryr -l app=kuryr-controller -o name) && \
> oc delete service/demo && \
> openstack loadbalancer show test/demo && \
> sleep 10 && openstack loadbalancer show test/demo
pod "kuryr-controller-546bd4db59-4k4kj" deleted
service "demo" deleted
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| created_at          | 2020-08-04T11:31:04                  |
| description         |                                      |
| flavor_id           | None                                 |
| id                  | af2ed562-046c-4df5-895c-514c7ace0b9f |
| listeners           | 0cdc18b5-53ec-4096-8b56-e157cffedd54 |
| name                | test/demo                            |
| operating_status    | ONLINE                               |
| pools               | 6140584e-fcff-4702-8d61-e4bf56b7b443 |
| project_id          | 7c4025fbae4b439e8e184b2a2c8b8ee8     |
| provider            | ovn                                  |
| provisioning_status | ACTIVE                               |
| updated_at          | 2020-08-04T11:31:24                  |
| vip_address         | 172.30.255.221                       |
| vip_network_id      | 6242669a-434b-4d6b-8313-6147eb187c43 |
| vip_port_id         | 46b6abcd-2e4a-434b-a389-abcb48257934 |
| vip_qos_policy_id   | None                                 |
| vip_subnet_id       | 52d0290a-1a17-4620-a548-1d44b1f32274 |
+---------------------+--------------------------------------+
Unable to locate test/demo in loadbalancers # After 10 seconds, the new kuryr-controller has deleted the lb as a leftover.

#3. No errors observed in the controller logs:

(overcloud) [stack@undercloud-0 ~]$ oc logs -n openshift-kuryr $(oc get pods -n openshift-kuryr -l app=kuryr-controller -o name) | grep -i error
(overcloud) [stack@undercloud-0 ~]$

#4. A new service can be created and works as expected:

[stack@undercloud-0 ~]$  oc expose pod/demo --port 80 --target-port 8080
service/demo exposed
(overcloud) [stack@undercloud-0 ~]$ oc get all
NAME              READY   STATUS    RESTARTS   AGE
pod/demo          1/1     Running   0          26m
pod/demo-caller   1/1     Running   0          26m

NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/demo   ClusterIP   172.30.230.171   <none>        80/TCP    6s
(overcloud) [stack@undercloud-0 ~]$ oc rsh pod/demo-caller curl 172.30.230.171
demo: HELLO! I AM ALIVE!!!

