Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1796216

Summary: Even when all VMs on the compute node are deleted, the haproxy sidecar container is not torn down
Product: Red Hat OpenStack
Reporter: Sai Sindhur Malleni <smalleni>
Component: python-networking-ovn
Assignee: Jakub Libosvar <jlibosva>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Eran Kuris <ekuris>
Severity: high
Priority: high
Docs Contact:
Version: 16.0 (Train)
CC: apevec, beagles, bperkins, dalvarez, jlibosva, lhh, majopela, scohen, vkommadi
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-07-12 07:33:26 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Sai Sindhur Malleni 2020-01-29 21:30:07 UTC
Description of problem:

When the last VM on a compute node is deleted or migrated away, the haproxy sidecar container should also be terminated, per https://bugzilla.redhat.com/show_bug.cgi?id=1633594#c12.

However, the haproxy sidecar container is not terminated when all VMs on the hypervisor are deleted:
[root@overcloud-u1029compute-1 heat-admin]# podman ps
CONTAINER ID  IMAGE                                                                             COMMAND               CREATED         STATUS             PORTS  NAMES
7d8bd48a504a  192.168.0.1:8787/rh-osbs/rhosp16-openstack-neutron-metadata-agent-ovn:20200110.1  /bin/bash -c HAPR...  47 minutes ago  Up 47 minutes ago         neutron-haproxy-ovnmeta-515fcb28-827b-4127-a127-b8ab3234fba1
fbb3b5b584e3  192.168.0.1:8787/rh-osbs/rhosp16-openstack-nova-compute:20200110.1                kolla_start           12 days ago     Up 12 days ago            nova_compute
accda3b2e9f0  192.168.0.1:8787/rh-osbs/rhosp16-openstack-neutron-metadata-agent-ovn:20200110.1  kolla_start           12 days ago     Up 12 days ago            ovn_metadata_agent
8bd5cddad943  192.168.0.1:8787/rh-osbs/rhosp16-openstack-ovn-controller:20200110.1              kolla_start           12 days ago     Up 12 days ago            ovn_controller
1ba449eb4f27  192.168.0.1:8787/rh-osbs/rhosp16-openstack-nova-compute:20200110.1                kolla_start           12 days ago     Up 12 days ago            nova_migration_target
9fc58fd3e45c  192.168.0.1:8787/rh-osbs/rhosp16-openstack-cron:20200110.1                        kolla_start           12 days ago     Up 12 days ago            logrotate_crond
922d9204dd6b  192.168.0.1:8787/rh-osbs/rhosp16-openstack-iscsid:20200110.1                      kolla_start           12 days ago     Up 12 days ago            iscsid
0f6effc7a4e0  192.168.0.1:8787/rh-osbs/rhosp16-openstack-nova-libvirt:20200110.1                kolla_start           12 days ago     Up 12 days ago            nova_libvirt
a45043fbc2ec  192.168.0.1:8787/rh-osbs/rhosp16-openstack-nova-libvirt:20200110.1                kolla_start           12 days ago     Up 12 days ago            nova_virtlogd
[root@overcloud-u1029compute-1 heat-admin]# virsh list --all
 Id    Name                           State
----------------------------------------------------

[root@overcloud-u1029compute-1 heat-admin]# podman exec -it neutron-haproxy-ovnmeta-515fcb28-827b-4127-a127-b8ab3234fba1 bash
()[root@overcloud-u1029compute-1 /]# ls
bin  boot  dev  etc  home  lib  lib64  lost+found  media  mnt  openstack  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
()[root@overcloud-u1029compute-1 /]# vi /var/log/
btmp             dnf.log          hawkey.log       lastlog          openvswitch/     puppet/          rhsm/            
dnf.librepo.log  dnf.rpm.log      kolla/           neutron/         private/         README           wtmp             
()[root@overcloud-u1029compute-1 /]# vi /var/l
lib/   local/ lock/  log/   
()[root@overcloud-u1029compute-1 /]# exit
exit
[root@overcloud-u1029compute-1 heat-admin]# virsh list --all^C
[root@overcloud-u1029compute-1 heat-admin]# podman logs neutron-haproxy-ovnmeta-515fcb28-827b-4127-a127-b8ab3234fba1
[WARNING] 028/201028 (994712) : Exiting Master process...
[ALERT] 028/201028 (994712) : Current worker 994717 exited with code 143
[WARNING] 028/201028 (994712) : All workers exited. Exiting... (143)
[WARNING] 028/202546 (999281) : Exiting Master process...
[ALERT] 028/202546 (999281) : Current worker 999283 exited with code 143
[WARNING] 028/202546 (999281) : All workers exited. Exiting... (143)
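The orphan condition shown above (a running neutron-haproxy-ovnmeta-* container while virsh list is empty) can be checked mechanically. A minimal sketch in Python, assuming the text output of the two commands is captured as shown; the helper name is hypothetical and not part of the metadata agent:

```python
def orphaned_sidecars(podman_names: str, virsh_list: str) -> list:
    """Given the NAMES column of `podman ps` and the output of
    `virsh list --all`, return haproxy sidecars that are still up
    even though no domains exist on the hypervisor."""
    sidecars = [n for n in podman_names.split()
                if n.startswith("neutron-haproxy-ovnmeta-")]
    # virsh prints a header line and a separator; any remaining
    # non-blank line is a defined domain
    domains = [l for l in virsh_list.splitlines()[2:] if l.strip()]
    return sidecars if not domains else []
```

Running this against the outputs pasted above would report the single leaked neutron-haproxy-ovnmeta-515fcb28-... container.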


Version-Release number of selected component (if applicable):
16

How reproducible:
100%

Steps to Reproduce:
1. Launch a VM on the compute node (this spawns a haproxy metadata sidecar)
2. Delete the VM
3. Run podman ps on the compute node
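The steps above can be sketched as a shell check. The openstack commands are illustrative only (image, flavor, and network names are placeholders), and the helper just counts sidecar names in podman ps output:

```shell
#!/bin/sh
# Illustrative reproduction (placeholder image/flavor/network names):
#   openstack server create --image cirros --flavor m1.tiny --network net1 vm1
#   openstack server delete vm1
#
# On the compute node, count haproxy metadata sidecars still running;
# once the last VM is gone, this is expected to print 0.
count_haproxy_sidecars() {
    grep -c '^neutron-haproxy-ovnmeta-' || true
}
# usage: podman ps --format '{{.Names}}' | count_haproxy_sidecars
```

In the failure mode reported here, the count stays at 1 (or more) even after the last VM is deleted.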

Actual results:
The haproxy sidecar container keeps running.

Expected results:
The haproxy sidecar container is terminated.

Additional info:

Comment 1 Daniel Alvarez Sanchez 2020-01-30 08:37:59 UTC
@Brent, could this be related to the sidecar scripts and the issue you're working on?

On the other hand, it's hard to tell without logs. @Sai, do you have logs handy for this or can you reproduce somehow so that we can check it?

Comment 2 Sai Sindhur Malleni 2020-02-03 03:21:45 UTC
What logs specifically? Also, I think it is straightforward to reproduce as it happens whenever all VMs on the hypervisor are deleted.

Comment 3 Jakub Libosvar 2020-02-04 16:20:00 UTC
I tried to reproduce this but wasn't able to. I had two VMs, each in a different network on a compute node, i.e. two haproxy sidecars were spawned. After I deleted the VMs, both sidecars were removed. Sai, we need the metadata agent logs from the compute node where haproxy was not removed correctly, and perhaps the neutron server logs.

Comment 4 Jakub Libosvar 2020-04-14 07:50:44 UTC
As we have no logs and can't reproduce, it's hard to tell what the culprit is. There are patches related to metadata sidecar containers: https://review.opendev.org/#/c/713395/ and https://review.opendev.org/#/c/716664/

I'll flip the BZ to test-only after they are merged upstream and imported downstream.

Comment 5 Jakub Libosvar 2020-04-14 09:04:21 UTC
*** Bug 1822275 has been marked as a duplicate of this bug. ***

Comment 6 Sai Sindhur Malleni 2020-07-10 13:26:33 UTC
I couldn't reproduce this later, so I guess it's OK to close this?

Comment 7 Jakub Libosvar 2020-07-12 07:33:26 UTC
Closing as per comment 6.