Bug 1853893

Summary: After increasing memory and CPU for compute nodes, Octavia load balancer is in PENDING_UPDATE/ERROR and its amphora is in ERROR.
Product: Red Hat OpenStack
Component: openstack-octavia
Version: 16.1 (Train)
Reporter: Bruna Bonguardo <bbonguar>
Assignee: Assaf Muller <amuller>
QA Contact: Bruna Bonguardo <bbonguar>
CC: cgoncalves, gthiemon, ihrachys, lpeer, majopela, scohen
Status: CLOSED DUPLICATE
Severity: high
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2020-07-08 14:05:33 UTC

Description Bruna Bonguardo 2020-07-05 10:10:15 UTC
Description of problem:
After increasing the memory and CPU of the compute nodes, the Octavia load balancer goes to the PENDING_UPDATE (and later ERROR) provisioning status and its amphora goes to ERROR.


Version-Release number of selected component (if applicable):
$ cat /var/lib/rhos-release/latest-installed
16.1-trunk  -p RHOS-16.1-RHEL-8-20200625.n.0


How reproducible:
N/A

Steps to Reproduce:
1. Deploy OSPD with 2 compute nodes and 3 controller nodes
2. Create an Octavia load balancer (amphora provider, single topology) with one UDP listener, one UDP pool with 2 member servers, and one UDP-CONNECT health monitor (example creation commands are sketched after the virsh output below).
3. Increase the memory and CPU of the compute nodes with virsh, shutting each domain down, editing its XML, and recreating it (the XML elements typically changed are noted after the output below):

[root@titan89 ~]# virsh list --all
 Id   Name           State
------------------------------
 4    undercloud-0   running
 17   compute-0      running
 18   compute-1      running
 19   controller-0   running
 20   controller-2   running
 21   controller-1   running

[root@titan89 ~]# virsh shutdown compute-0
Domain compute-0 is being shutdown

[root@titan89 ~]# virsh edit compute-0
Domain compute-0 XML configuration edited.

[root@titan89 ~]# virsh create /etc/libvirt/qemu/compute-0.xml
Domain compute-0 created from /etc/libvirt/qemu/compute-0.xml

[root@titan89 ~]# virsh shutdown compute-1
Domain compute-1 is being shutdown

[root@titan89 ~]# virsh edit compute-1
Domain compute-1 XML configuration edited.
 
[root@titan89 ~]# virsh create /etc/libvirt/qemu/compute-1.xml
Domain compute-1 created from /etc/libvirt/qemu/compute-1.xml


Actual results:
The load balancer enters the PENDING_UPDATE provisioning status:
(tester) [stack@undercloud-0 ~]$ openstack loadbalancer status show udp-lb
{
    "loadbalancer": {
        "id": "259d9154-2aa0-4548-9fa9-c80df498926e",
        "name": "udp-lb",
        "operating_status": "ONLINE",
        "provisioning_status": "PENDING_UPDATE",
        "listeners": [
            {
                "id": "5009fb45-c0e9-41f6-b468-889b97fbf79e",
                "name": "http-listener",
                "operating_status": "ONLINE",
                "provisioning_status": "ACTIVE",
                "pools": []
            },
            {
                "id": "66e7b6d4-b2c3-46d2-88b3-860d747fcffe",
                "name": "udp-listener",
                "operating_status": "ONLINE",
                "provisioning_status": "ACTIVE",
                "pools": [
                    {
                        "id": "df8820b0-c641-41c6-914c-e69826ed7934",
                        "name": "udp-pool",
                        "provisioning_status": "ACTIVE",
                        "operating_status": "ONLINE",
                        "health_monitor": {
                            "id": "eb9fe4f1-afb2-4b10-880f-e130e624223f",
                            "name": "",
                            "type": "UDP-CONNECT",
                            "provisioning_status": "ACTIVE",
                            "operating_status": "ONLINE"
                        },
                        "members": [
                            {
                                "id": "7e31c725-6c2d-4a89-97ac-870715d8a54d",
                                "name": "dns-member-1",
                                "operating_status": "ONLINE",
                                "provisioning_status": "ACTIVE",
                                "address": "10.0.1.194",
                                "protocol_port": 12345
                            },
                            {
                                "id": "71be69eb-86bd-4bc7-a70b-1d06e4dc7855",
                                "name": "dns-member-2",
                                "operating_status": "ONLINE",
                                "provisioning_status": "ACTIVE",
                                "address": "10.0.1.89",
                                "protocol_port": 12345
                            }

Amphora is in ERROR status:
[2020-07-05 05:10:55] (overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer amphora list
+--------------------------------------+--------------------------------------+--------+------------+---------------+------------+
| id                                   | loadbalancer_id                      | status | role       | lb_network_ip | ha_ip      |
+--------------------------------------+--------------------------------------+--------+------------+---------------+------------+
| f2d98200-32d1-4e95-91f9-d50e260ab06b | 259d9154-2aa0-4548-9fa9-c80df498926e | ERROR  | STANDALONE | 172.24.3.134  | 10.0.1.101 |
+--------------------------------------+--------------------------------------+--------+------------+---------------+------------+
[2020-07-05 05:11:15] (overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer amphora show f2d98200-32d1-4e95-91f9-d50e260ab06b
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| id              | f2d98200-32d1-4e95-91f9-d50e260ab06b |
| loadbalancer_id | 259d9154-2aa0-4548-9fa9-c80df498926e |
| compute_id      | 959589e3-fb72-4453-9b2d-b3f5bba21227 |
| lb_network_ip   | 172.24.3.134                         |
| vrrp_ip         | 10.0.1.78                            |
| ha_ip           | 10.0.1.101                           |
| vrrp_port_id    | 03770d36-ad26-4c67-9be1-d4d5b8afe98e |
| ha_port_id      | 650875d5-9ffe-47b7-b858-d7f8ead9e076 |
| cert_expiration | 2020-08-01T10:28:12                  |
| cert_busy       | False                                |
| role            | STANDALONE                           |
| status          | ERROR                                |
| vrrp_interface  | None                                 |
| vrrp_id         | 1                                    |
| vrrp_priority   | None                                 |
| cached_zone     | nova                                 |
| created_at      | 2020-07-02T10:28:12                  |
| updated_at      | 2020-07-05T09:08:53                  |
| image_id        | b71aef86-6bee-41a6-9c8e-11dffb0e96c1 |
| compute_flavor  | 65                                   |
+-----------------+--------------------------------------+
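
For triage: the reason the amphora was flagged ERROR should be recorded by the Octavia worker/health manager on the controllers. Assuming the usual containerized log location for this release (the path is an assumption, not verified here), something like the following on each controller should show it:

[heat-admin@controller-0 ~]$ sudo grep -i f2d98200-32d1-4e95-91f9-d50e260ab06b /var/log/containers/octavia/*.log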


Expected results:
The load balancer should be in the ACTIVE provisioning status.

Additional info:
After several minutes, and after starting the member servers, the load balancer goes to the ERROR provisioning_status.
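
For completeness: once the load balancer is no longer in a PENDING_* state, the usual way to rebuild an ERROR amphora is an admin-initiated failover, e.g.:

(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer failover 259d9154-2aa0-4548-9fa9-c80df498926e

That is only a recovery path, not a fix for whatever put the amphora in ERROR in the first place.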

Will attach sosreports in next comment.

Comment 3 Carlos Goncalves 2020-07-06 08:36:43 UTC
This bug report reads mostly the same as bug #1723482, so it is probably a duplicate.

Comment 4 Bruna Bonguardo 2020-07-08 14:05:33 UTC

*** This bug has been marked as a duplicate of bug 1723482 ***