Bug 1738303 - podman: containers may get stopped by systemd instead of pacemaker on shutdown [rhel-8.0.0.z]
Summary: podman: containers may get stopped by systemd instead of pacemaker on shutdown [rhel-8.0.0.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: resource-agents
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 8.1
Assignee: Oyvind Albrigtsen
QA Contact: pkomarov
URL:
Whiteboard:
Depends On: 1736746
Blocks:
 
Reported: 2019-08-06 17:39 UTC by Oneata Mircea Teodor
Modified: 2020-11-14 09:19 UTC
CC List: 9 users

Fixed In Version: resource-agents-4.1.1-17.el8_0.5
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1736746
Environment:
Last Closed: 2019-09-10 13:13:15 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2019:2700 (last updated 2019-09-10 13:13:19 UTC)

Comment 3 Damien Ciabrini 2019-08-07 14:15:31 UTC
Two ways of testing it:
A. if you run a standalone pcs cluster
B. if you run an OpenStack overcloud



A. Instructions to test on a standalone cluster.
-----------------------------------------------
0. On all cluster nodes, download a container image to work with

podman pull --tls-verify=false brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhosp15/openstack-mariadb:latest

1. Create a dummy resource in a bundle on a rhel8 cluster

pcs resource bundle create dummy-bundle container podman image=brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhosp15/openstack-mariadb:latest network=host options="--user=root --log-driver=journald" run-command="/usr/sbin/pacemaker_remoted" network control-port=3123 storage-map id=map0 source-dir=/dev/log target-dir=/dev/log storage-map id=map1 source-dir=/dev/zero target-dir=/etc/libqb/force-filesystem-sockets options=ro storage-map id=pcmk1 source-dir=/var/log/pacemaker target-dir=/var/log/pacemaker options=rw --disabled
pcs resource create dummy ocf:pacemaker:Dummy bundle dummy-bundle
pcs resource enable dummy-bundle
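
Optionally, confirm the bundle resource is up at the cluster level before inspecting the container itself (a quick sanity check; output will vary):

pcs status | grep dummy-bundle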

2. Verify that pacemaker created a podman container

[root@vm1 ~]# podman ps --filter name=dummy-bundle
CONTAINER ID  IMAGE                                                                                  COMMAND               CREATED         STATUS             PORTS  NAMES
06bbe9198803  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhosp15/openstack-mariadb:latest  dumb-init -- /usr...  36 seconds ago  Up 36 seconds ago         dummy-bundle-podman-0

3. Verify that runc and podman create additional transient scope files

[root@vm1 ~]# ls /run/systemd/transient/libpod*$(podman ps -q --filter name=dummy-bundle)*
/run/systemd/transient/libpod-06bbe9198803b3b934db87ee0fe6b723d9239a8a94ea01fc8cb3e86dca5f22bd.scope
/run/systemd/transient/libpod-conmon-06bbe9198803b3b934db87ee0fe6b723d9239a8a94ea01fc8cb3e86dca5f22bd.scope

4. Delete the container

pcs resource disable dummy-bundle

5. On all cluster nodes, enable the new drop-in feature

touch /etc/sysconfig/podman_drop_in
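
The flag file presumably acts as a simple existence check by the resource agent, so the feature should be switchable back off by deleting it (an assumption, not verified here):

rm -f /etc/sysconfig/podman_drop_in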

6. Restart the container

pcs resource enable dummy-bundle

7. Verify that the podman resource agent now creates additional drop-in directories and dependencies for the scopes.

[root@vm1 ~]# podman ps --filter name=dummy-bundle
CONTAINER ID  IMAGE                                                                                  COMMAND               CREATED        STATUS           PORTS  NAMES
dd8e31d229c7  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhosp15/openstack-mariadb:latest  dumb-init -- /usr...  2 seconds ago  Up 1 second ago         dummy-bundle-podman-0
[root@vm1 ~]#  ls /run/systemd/transient/libpod*$(podman ps -q --filter name=dummy-bundle)*
/run/systemd/transient/libpod-conmon-dd8e31d229c7e25e72992c05cb8804643143f2d9e6c7144ee521b2adb622a055.scope
/run/systemd/transient/libpod-dd8e31d229c7e25e72992c05cb8804643143f2d9e6c7144ee521b2adb622a055.scope

/run/systemd/transient/libpod-conmon-dd8e31d229c7e25e72992c05cb8804643143f2d9e6c7144ee521b2adb622a055.scope.d:
dep.conf

/run/systemd/transient/libpod-dd8e31d229c7e25e72992c05cb8804643143f2d9e6c7144ee521b2adb622a055.scope.d:
dep.conf

8. Verify that systemd has correctly taken the drop-in dependencies into account

[root@vm1 ~]# systemctl cat libpod*$(podman ps -q --filter name=dummy-bundle)*
# /run/systemd/transient/libpod-conmon-dd8e31d229c7e25e72992c05cb8804643143f2d9e6c7144ee521b2adb622a055.scope
# This is a transient unit file, created programmatically via the systemd API. Do not edit.
[Scope]
Slice=machine.slice
Delegate=yes

[Unit]
DefaultDependencies=no

# /run/systemd/transient/libpod-conmon-dd8e31d229c7e25e72992c05cb8804643143f2d9e6c7144ee521b2adb622a055.scope.d/dep.conf
[Unit]
Before=pacemaker.service

# /run/systemd/transient/libpod-dd8e31d229c7e25e72992c05cb8804643143f2d9e6c7144ee521b2adb622a055.scope
# This is a transient unit file, created programmatically via the systemd API. Do not edit.
[Unit]
Description=libcontainer container dd8e31d229c7e25e72992c05cb8804643143f2d9e6c7144ee521b2adb622a055

[Scope]
Slice=machine.slice
Delegate=yes
MemoryAccounting=yes
CPUAccounting=yes
BlockIOAccounting=yes

[Unit]
DefaultDependencies=no

# /run/systemd/transient/libpod-dd8e31d229c7e25e72992c05cb8804643143f2d9e6c7144ee521b2adb622a055.scope.d/dep.conf
[Unit]
Before=pacemaker.service
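
For reference, a minimal sketch of what the drop-in feature amounts to, judging from the files shown above (illustrative only; the real logic lives in the podman resource agent, and <id> stands in for the container ID):

# add an ordering drop-in for each libpod scope backing the container
for scope in libpod-<id>.scope libpod-conmon-<id>.scope; do
    mkdir -p /run/systemd/transient/${scope}.d
    printf '[Unit]\nBefore=pacemaker.service\n' > /run/systemd/transient/${scope}.d/dep.conf
done
# assumption: systemd has to re-read unit files to pick up the new drop-ins
systemctl daemon-reload

Because the scopes are now ordered Before=pacemaker.service, systemd stops pacemaker first at shutdown, which lets pacemaker stop the containers itself before the scopes are torn down.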

9. Verify that on reboot, systemd stops pacemaker before trying to stop any of the scopes.
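
One way to do that (a sketch; this assumes the journal survives the reboot, as with the journalctl -b -1 check in comment 4):

reboot
# once the node is back up, inspect the previous boot:
journalctl -b -1 | grep -i 'pacemaker\|libpod'

The excerpt below shows the expected ordering: pacemaker shuts down and stops the bundle resources first, and only then does systemd reap the libpod scopes.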

Aug  7 16:06:09 vm1 systemd[1]: Stopping Pacemaker High Availability Cluster Manager...
Aug  7 16:06:09 vm1 pacemakerd[14765]: notice: Shutting down Pacemaker
Aug  7 16:06:09 vm1 pacemakerd[14765]: notice: Stopping pacemaker-controld
Aug  7 16:06:09 vm1 pacemaker-controld[14771]: notice: Shutting down cluster resource manager
Aug  7 16:06:10 vm1 pacemaker-controld[14771]: notice: Result of stop operation for dummy on dummy-bundle-0: 0 (ok)
Aug  7 16:06:10 vm1 pacemaker-controld[14771]: notice: Node dummy-bundle-0 state is now lost
Aug  7 16:06:10 vm1 pacemaker-controld[14771]: notice: Result of stop operation for dummy-bundle-0 on vm1: 0 (ok)
Aug  7 16:06:10 vm1 pacemaker-attrd[14769]: notice: Removing all dummy-bundle-0 attributes for peer vm1
Aug  7 16:06:10 vm1 pacemaker-remoted[6]: notice: Cleaning up after remote client pacemaker-remote-vm1:3123 disconnected
Aug  7 16:06:10 vm1 pacemaker-fenced[14767]: notice: Node dummy-bundle-0 state is now lost
Aug  7 16:06:10 vm1 pacemaker-remoted[6]: notice: Caught 'Terminated' signal
Aug  7 16:06:10 vm1 systemd[1]: Stopped libcontainer container dd8e31d229c7e25e72992c05cb8804643143f2d9e6c7144ee521b2adb622a055.
Aug  7 16:06:10 vm1 systemd[1]: libpod-dd8e31d229c7e25e72992c05cb8804643143f2d9e6c7144ee521b2adb622a055.scope: Consumed 13.963s CPU time
Aug  7 16:06:10 vm1 systemd[1]: Unmounted /var/lib/containers/storage/overlay-containers/dd8e31d229c7e25e72992c05cb8804643143f2d9e6c7144ee521b2adb622a055/userdata/shm.
Aug  7 16:06:10 vm1 systemd[1]: Unmounted /var/lib/containers/storage/overlay/0c79452b983bed523d2b154f15a70747470b7e23becbfdd814c8f2ad48b3048c/merged.
Aug  7 16:06:10 vm1 podman(dummy-bundle-podman-0)[18790]: INFO: dd8e31d229c7e25e72992c05cb8804643143f2d9e6c7144ee521b2adb622a055
Aug  7 16:06:10 vm1 systemd[1]: Unmounted /var/lib/containers/storage/overlay.
Aug  7 16:06:10 vm1 systemd[1]: Stopped libpod-conmon-dd8e31d229c7e25e72992c05cb8804643143f2d9e6c7144ee521b2adb622a055.scope.


10. Verify that when you remove the container, the drop-in directories are removed automatically by systemd.

[root@vm2 ~]# podman ps -aq --filter name=dummy-bundle
cc8d6d0c9197
[root@vm2 ~]# ls -1d /run/systemd/transient/libpod-*cc8d6d0c9197*
/run/systemd/transient/libpod-cc8d6d0c9197e12c7908c1133a39ce4d56aa85d5e5b8220c11823abbcfc63a38.scope
/run/systemd/transient/libpod-cc8d6d0c9197e12c7908c1133a39ce4d56aa85d5e5b8220c11823abbcfc63a38.scope.d
/run/systemd/transient/libpod-conmon-cc8d6d0c9197e12c7908c1133a39ce4d56aa85d5e5b8220c11823abbcfc63a38.scope
/run/systemd/transient/libpod-conmon-cc8d6d0c9197e12c7908c1133a39ce4d56aa85d5e5b8220c11823abbcfc63a38.scope.d
[root@vm2 ~]# pcs resource disable dummy-bundle
[root@vm2 ~]# ls -1d /run/systemd/transient/libpod-*cc8d6d0c9197*
ls: cannot access '/run/systemd/transient/libpod-*cc8d6d0c9197*': No such file or directory


----------------------------------------

B. Instructions to test on an OpenStack cluster

The latest cluster should have the drop-in option enabled by default; look for the file /etc/sysconfig/podman_drop_in on the controller nodes.
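
To check this from the undercloud, something along these lines should work (mirroring the ansible invocations used in the verification below; inventory names are environment-specific):

[stack@undercloud-0 ~]$ ansible controller-0 -b -mshell -a'ls /etc/sysconfig/podman_drop_in'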

Reproduce the same tests as in A, but with another bundle, e.g. galera-bundle

Comment 4 pkomarov 2019-08-25 18:58:39 UTC
Verified,


[stack@undercloud-0 ~]$ ansible overcloud_nodes -b -mshell -a'rpm -qa|grep resource-agents'
 [WARNING]: Found both group and host with same name: undercloud

 [WARNING]: Consider using the yum, dnf or zypper module rather than running 'rpm'.  If you need to use command because yum, dnf or
zypper is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of
this message.

messaging-0 | CHANGED | rc=0 >>
resource-agents-4.1.1-17.el8_0.5.x86_64

compute-0 | CHANGED | rc=0 >>
resource-agents-4.1.1-17.el8_0.5.x86_64

controller-2 | CHANGED | rc=0 >>
resource-agents-4.1.1-17.el8_0.5.x86_64

controller-0 | CHANGED | rc=0 >>
resource-agents-4.1.1-17.el8_0.5.x86_64

controller-1 | CHANGED | rc=0 >>
resource-agents-4.1.1-17.el8_0.5.x86_64

messaging-1 | CHANGED | rc=0 >>
resource-agents-4.1.1-17.el8_0.5.x86_64

messaging-2 | CHANGED | rc=0 >>
resource-agents-4.1.1-17.el8_0.5.x86_64

database-0 | CHANGED | rc=0 >>
resource-agents-4.1.1-17.el8_0.5.x86_64

database-1 | CHANGED | rc=0 >>
resource-agents-4.1.1-17.el8_0.5.x86_64

database-2 | CHANGED | rc=0 >>
resource-agents-4.1.1-17.el8_0.5.x86_64


[stack@undercloud-0 ~]$ ansible database -mshell -b -a'podman ps --filter name=galera-bundle'
 [WARNING]: Found both group and host with same name: undercloud

database-1 | CHANGED | rc=0 >>
CONTAINER ID  IMAGE                                                   COMMAND               CREATED      STATUS          PORTS  NAMES
b30795498639  192.168.24.1:8787/rhosp15/openstack-mariadb:20190725.1  dumb-init -- /bin...  5 hours ago  Up 5 hours ago         galera-bundle-podman-0

database-0 | CHANGED | rc=0 >>
CONTAINER ID  IMAGE                                                   COMMAND               CREATED      STATUS          PORTS  NAMES
e1305bed957b  192.168.24.1:8787/rhosp15/openstack-mariadb:20190725.1  dumb-init -- /bin...  6 hours ago  Up 6 hours ago         galera-bundle-podman-2

database-2 | CHANGED | rc=0 >>
CONTAINER ID  IMAGE                                                   COMMAND               CREATED      STATUS          PORTS  NAMES
3f6c7f4471da  192.168.24.1:8787/rhosp15/openstack-mariadb:20190725.1  dumb-init -- /bin...  6 hours ago  Up 6 hours ago         galera-bundle-podman-1


[stack@undercloud-0 ~]$ ansible database -mshell -b -a'ls /run/systemd/transient/libpod*$(podman ps -q --filter name=galera-bundle)*'
 [WARNING]: Found both group and host with same name: undercloud

database-0 | CHANGED | rc=0 >>
/run/systemd/transient/libpod-conmon-e1305bed957bc84a9199e17ba63d030bf9017dac13ae5d5c87d36126ab4eb0b2.scope
/run/systemd/transient/libpod-e1305bed957bc84a9199e17ba63d030bf9017dac13ae5d5c87d36126ab4eb0b2.scope

database-1 | CHANGED | rc=0 >>
/run/systemd/transient/libpod-b3079549863931a8f750b818e12137bf3c00312cf7e6238162118b53ae6dba9f.scope
/run/systemd/transient/libpod-conmon-b3079549863931a8f750b818e12137bf3c00312cf7e6238162118b53ae6dba9f.scope

database-2 | CHANGED | rc=0 >>
/run/systemd/transient/libpod-3f6c7f4471da95fca398855526ceb7cb9441184ebe5e0c3567b88f5f7a56169c.scope
/run/systemd/transient/libpod-conmon-3f6c7f4471da95fca398855526ceb7cb9441184ebe5e0c3567b88f5f7a56169c.scope


[stack@undercloud-0 ~]$ ansible controller-0 -mshell -b -a'pcs resource disable galera-bundle'
 [WARNING]: Found both group and host with same name: undercloud

controller-0 | CHANGED | rc=0 >>


[stack@undercloud-0 ~]$ ansible controller-0 -mshell -b -a'pcs status|grep galera-bundle'
 [WARNING]: Found both group and host with same name: undercloud

controller-0 | CHANGED | rc=0 >>
 podman container set: galera-bundle [192.168.24.1:8787/rhosp15/openstack-mariadb:pcmklatest]
   galera-bundle-0	(ocf::heartbeat:galera):	Stopped (disabled)
   galera-bundle-1	(ocf::heartbeat:galera):	Stopped (disabled)
   galera-bundle-2	(ocf::heartbeat:galera):	Stopped (disabled)

[stack@undercloud-0 ~]$ ansible overcloud_nodes -mshell -b -a'touch /etc/sysconfig/podman_drop_in;ls /etc/sysconfig/podman_drop_in'
 [WARNING]: Found both group and host with same name: undercloud

 [WARNING]: Consider using the file module with state=touch rather than running 'touch'.  If you need to use command because file is
insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.

messaging-0 | CHANGED | rc=0 >>
/etc/sysconfig/podman_drop_in

compute-0 | CHANGED | rc=0 >>
/etc/sysconfig/podman_drop_in

controller-1 | CHANGED | rc=0 >>
/etc/sysconfig/podman_drop_in

controller-2 | CHANGED | rc=0 >>
/etc/sysconfig/podman_drop_in

controller-0 | CHANGED | rc=0 >>
/etc/sysconfig/podman_drop_in

messaging-2 | CHANGED | rc=0 >>
/etc/sysconfig/podman_drop_in

messaging-1 | CHANGED | rc=0 >>
/etc/sysconfig/podman_drop_in

database-0 | CHANGED | rc=0 >>
/etc/sysconfig/podman_drop_in

database-1 | CHANGED | rc=0 >>
/etc/sysconfig/podman_drop_in

database-2 | CHANGED | rc=0 >>
/etc/sysconfig/podman_drop_in



[stack@undercloud-0 ~]$ ansible controller-0 -mshell -b -a'pcs resource enable galera-bundle;sleep 5s;pcs status|grep galera-bundle'
 [WARNING]: Found both group and host with same name: undercloud

controller-0 | CHANGED | rc=0 >>
GuestOnline: [ galera-bundle-0@overcloud-controller-1 galera-bundle-1@overcloud-controller-1 galera-bundle-2@overcloud-controller-1 ovn-dbs-bundle-0@overcloud-controller-0 ovn-dbs-bundle-1@overcloud-controller-2 ovn-dbs-bundle-2@overcloud-controller-1 rabbitmq-bundle-0@overcloud-controller-1 rabbitmq-bundle-1@overcloud-controller-1 rabbitmq-bundle-2@overcloud-controller-1 redis-bundle-0@overcloud-controller-1 redis-bundle-1@overcloud-controller-0 redis-bundle-2@overcloud-controller-2 ]
 podman container set: galera-bundle [192.168.24.1:8787/rhosp15/openstack-mariadb:pcmklatest]
   galera-bundle-0	(ocf::heartbeat:galera):	Stopped overcloud-database-0
   galera-bundle-1	(ocf::heartbeat:galera):	Stopped overcloud-database-1
   galera-bundle-2	(ocf::heartbeat:galera):	Stopped overcloud-database-2

[stack@undercloud-0 ~]$ ansible controller-0 -mshell -b -a'pcs status|grep galera-bundle'
 [WARNING]: Found both group and host with same name: undercloud

controller-0 | CHANGED | rc=0 >>
GuestOnline: [ galera-bundle-0@overcloud-controller-1 galera-bundle-1@overcloud-controller-1 galera-bundle-2@overcloud-controller-1 ovn-dbs-bundle-0@overcloud-controller-0 ovn-dbs-bundle-1@overcloud-controller-2 ovn-dbs-bundle-2@overcloud-controller-1 rabbitmq-bundle-0@overcloud-controller-1 rabbitmq-bundle-1@overcloud-controller-1 rabbitmq-bundle-2@overcloud-controller-1 redis-bundle-0@overcloud-controller-1 redis-bundle-1@overcloud-controller-0 redis-bundle-2@overcloud-controller-2 ]
 podman container set: galera-bundle [192.168.24.1:8787/rhosp15/openstack-mariadb:pcmklatest]
   galera-bundle-0	(ocf::heartbeat:galera):	Promoting overcloud-database-0
   galera-bundle-1	(ocf::heartbeat:galera):	Master overcloud-database-1
   galera-bundle-2	(ocf::heartbeat:galera):	Master overcloud-database-2

[stack@undercloud-0 ~]$ ansible controller-0 -mshell -b -a'pcs status|grep galera-bundle'
 [WARNING]: Found both group and host with same name: undercloud

controller-0 | CHANGED | rc=0 >>
GuestOnline: [ galera-bundle-0@overcloud-controller-1 galera-bundle-1@overcloud-controller-1 galera-bundle-2@overcloud-controller-1 ovn-dbs-bundle-0@overcloud-controller-0 ovn-dbs-bundle-1@overcloud-controller-2 ovn-dbs-bundle-2@overcloud-controller-1 rabbitmq-bundle-0@overcloud-controller-1 rabbitmq-bundle-1@overcloud-controller-1 rabbitmq-bundle-2@overcloud-controller-1 redis-bundle-0@overcloud-controller-1 redis-bundle-1@overcloud-controller-0 redis-bundle-2@overcloud-controller-2 ]
 podman container set: galera-bundle [192.168.24.1:8787/rhosp15/openstack-mariadb:pcmklatest]
   galera-bundle-0	(ocf::heartbeat:galera):	Master overcloud-database-0
   galera-bundle-1	(ocf::heartbeat:galera):	Master overcloud-database-1
   galera-bundle-2	(ocf::heartbeat:galera):	Master overcloud-database-2

[stack@undercloud-0 ~]$ ansible database -mshell -b -a'podman ps --filter name=galera-bundle'
 [WARNING]: Found both group and host with same name: undercloud

database-2 | CHANGED | rc=0 >>
CONTAINER ID  IMAGE                                                   COMMAND               CREATED        STATUS            PORTS  NAMES
65aab860bd16  192.168.24.1:8787/rhosp15/openstack-mariadb:20190725.1  dumb-init -- /bin...  2 minutes ago  Up 2 minutes ago         galera-bundle-podman-2

database-0 | CHANGED | rc=0 >>
CONTAINER ID  IMAGE                                                   COMMAND               CREATED        STATUS            PORTS  NAMES
c7d560ce484a  192.168.24.1:8787/rhosp15/openstack-mariadb:20190725.1  dumb-init -- /bin...  2 minutes ago  Up 2 minutes ago         galera-bundle-podman-0

database-1 | CHANGED | rc=0 >>
CONTAINER ID  IMAGE                                                   COMMAND               CREATED        STATUS            PORTS  NAMES
324ee8f480f8  192.168.24.1:8787/rhosp15/openstack-mariadb:20190725.1  dumb-init -- /bin...  2 minutes ago  Up 2 minutes ago         galera-bundle-podman-1

[stack@undercloud-0 ~]$ ansible database -mshell -b -a'ls /run/systemd/transient/libpod*$(podman ps -q --filter name=galera-bundle)*'
 [WARNING]: Found both group and host with same name: undercloud

database-0 | CHANGED | rc=0 >>
/run/systemd/transient/libpod-c7d560ce484a019d286b23c8c0d608d10e9c9cd92d042e5a8467ca11f82d3b05.scope
/run/systemd/transient/libpod-conmon-c7d560ce484a019d286b23c8c0d608d10e9c9cd92d042e5a8467ca11f82d3b05.scope

/run/systemd/transient/libpod-c7d560ce484a019d286b23c8c0d608d10e9c9cd92d042e5a8467ca11f82d3b05.scope.d:
dep.conf

/run/systemd/transient/libpod-conmon-c7d560ce484a019d286b23c8c0d608d10e9c9cd92d042e5a8467ca11f82d3b05.scope.d:
dep.conf

database-1 | CHANGED | rc=0 >>
/run/systemd/transient/libpod-324ee8f480f81b69761c899115be0866bc325a3f8305d9d01f8e611217073c1e.scope
/run/systemd/transient/libpod-conmon-324ee8f480f81b69761c899115be0866bc325a3f8305d9d01f8e611217073c1e.scope

/run/systemd/transient/libpod-324ee8f480f81b69761c899115be0866bc325a3f8305d9d01f8e611217073c1e.scope.d:
dep.conf

/run/systemd/transient/libpod-conmon-324ee8f480f81b69761c899115be0866bc325a3f8305d9d01f8e611217073c1e.scope.d:
dep.conf

database-2 | CHANGED | rc=0 >>
/run/systemd/transient/libpod-65aab860bd16c16c1a949a62a5da5f62ca26a164d89bd5d343523068546ff49f.scope
/run/systemd/transient/libpod-conmon-65aab860bd16c16c1a949a62a5da5f62ca26a164d89bd5d343523068546ff49f.scope

/run/systemd/transient/libpod-65aab860bd16c16c1a949a62a5da5f62ca26a164d89bd5d343523068546ff49f.scope.d:
dep.conf

/run/systemd/transient/libpod-conmon-65aab860bd16c16c1a949a62a5da5f62ca26a164d89bd5d343523068546ff49f.scope.d:
dep.conf


[stack@undercloud-0 ~]$ ansible database -mshell -b -a'systemctl cat libpod*$(podman ps -q --filter name=galera-bundle)*'
 [WARNING]: Found both group and host with same name: undercloud

database-0 | CHANGED | rc=0 >>
# /run/systemd/transient/libpod-conmon-c7d560ce484a019d286b23c8c0d608d10e9c9cd92d042e5a8467ca11f82d3b05.scope
# This is a transient unit file, created programmatically via the systemd API. Do not edit.
[Scope]
Slice=machine.slice
Delegate=yes

[Unit]
DefaultDependencies=no

# /run/systemd/transient/libpod-conmon-c7d560ce484a019d286b23c8c0d608d10e9c9cd92d042e5a8467ca11f82d3b05.scope.d/dep.conf
[Unit]
Before=pacemaker.service

# /run/systemd/transient/libpod-c7d560ce484a019d286b23c8c0d608d10e9c9cd92d042e5a8467ca11f82d3b05.scope
# This is a transient unit file, created programmatically via the systemd API. Do not edit.
[Unit]
Description=libcontainer container c7d560ce484a019d286b23c8c0d608d10e9c9cd92d042e5a8467ca11f82d3b05

[Scope]
Slice=machine.slice
Delegate=yes
MemoryAccounting=yes
CPUAccounting=yes
BlockIOAccounting=yes

[Unit]
DefaultDependencies=yes

# /run/systemd/transient/libpod-c7d560ce484a019d286b23c8c0d608d10e9c9cd92d042e5a8467ca11f82d3b05.scope.d/dep.conf
[Unit]
Before=pacemaker.service

database-1 | CHANGED | rc=0 >>
# /run/systemd/transient/libpod-conmon-324ee8f480f81b69761c899115be0866bc325a3f8305d9d01f8e611217073c1e.scope
# This is a transient unit file, created programmatically via the systemd API. Do not edit.
[Scope]
Slice=machine.slice
Delegate=yes

[Unit]
DefaultDependencies=no

# /run/systemd/transient/libpod-conmon-324ee8f480f81b69761c899115be0866bc325a3f8305d9d01f8e611217073c1e.scope.d/dep.conf
[Unit]
Before=pacemaker.service

# /run/systemd/transient/libpod-324ee8f480f81b69761c899115be0866bc325a3f8305d9d01f8e611217073c1e.scope
# This is a transient unit file, created programmatically via the systemd API. Do not edit.
[Unit]
Description=libcontainer container 324ee8f480f81b69761c899115be0866bc325a3f8305d9d01f8e611217073c1e

[Scope]
Slice=machine.slice
Delegate=yes
MemoryAccounting=yes
CPUAccounting=yes
BlockIOAccounting=yes

[Unit]
DefaultDependencies=yes

# /run/systemd/transient/libpod-324ee8f480f81b69761c899115be0866bc325a3f8305d9d01f8e611217073c1e.scope.d/dep.conf
[Unit]
Before=pacemaker.service

database-2 | CHANGED | rc=0 >>
# /run/systemd/transient/libpod-conmon-65aab860bd16c16c1a949a62a5da5f62ca26a164d89bd5d343523068546ff49f.scope
# This is a transient unit file, created programmatically via the systemd API. Do not edit.
[Scope]
Slice=machine.slice
Delegate=yes

[Unit]
DefaultDependencies=no

# /run/systemd/transient/libpod-conmon-65aab860bd16c16c1a949a62a5da5f62ca26a164d89bd5d343523068546ff49f.scope.d/dep.conf
[Unit]
Before=pacemaker.service

# /run/systemd/transient/libpod-65aab860bd16c16c1a949a62a5da5f62ca26a164d89bd5d343523068546ff49f.scope
# This is a transient unit file, created programmatically via the systemd API. Do not edit.
[Unit]
Description=libcontainer container 65aab860bd16c16c1a949a62a5da5f62ca26a164d89bd5d343523068546ff49f

[Scope]
Slice=machine.slice
Delegate=yes
MemoryAccounting=yes
CPUAccounting=yes
BlockIOAccounting=yes

[Unit]
DefaultDependencies=yes

# /run/systemd/transient/libpod-65aab860bd16c16c1a949a62a5da5f62ca26a164d89bd5d343523068546ff49f.scope.d/dep.conf
[Unit]
Before=pacemaker.service


[root@overcloud-database-0 ~]# journalctl -b -1 |grep -A 99999 'Aug 25 18:30:29'|grep 'pacemaker\|galera'
Aug 25 18:30:29 overcloud-database-0 pacemaker-remoted[852298]: notice: Caught 'Terminated' signal
Aug 25 18:30:29 overcloud-database-0 pacemaker-remoted[852298]: notice: TLS server session ended
Aug 25 18:30:29 overcloud-database-0 pacemaker-remoted[403303]: notice: Caught 'Terminated' signal
Aug 25 18:30:29 overcloud-database-0 pacemaker-remoted[403303]: notice: TLS server session ended
Aug 25 18:30:29 overcloud-database-0 pacemaker-remoted[852298]: notice: Caught 'Terminated' signal
Aug 25 18:30:29 overcloud-database-0 pacemaker-remoted[852298]: notice: Waiting for cluster to stop resources before exiting
Aug 25 18:30:31 overcloud-database-0 galera(galera)[875725]: DEBUG: MySQL still hasn't stopped yet. Waiting...
Aug 25 18:30:32 overcloud-database-0 galera(galera)[875731]: DEBUG: MySQL still hasn't stopped yet. Waiting...
Aug 25 18:30:33 overcloud-database-0 galera(galera)[875741]: DEBUG: MySQL still hasn't stopped yet. Waiting...
Aug 25 18:30:33 overcloud-database-0 galera(galera)[875745]: INFO: MySQL stopped
Aug 25 18:30:34 overcloud-database-0 galera(galera)[875759]: INFO: attempting to read safe_to_bootstrap flag from /var/lib/mysql/grastate.dat
Aug 25 18:30:34 overcloud-database-0 galera(galera)[875766]: INFO: attempting to detect last commit version by reading /var/lib/mysql/grastate.dat
Aug 25 18:30:34 overcloud-database-0 galera(galera)[875773]: INFO: Last commit version found:  1805208
Aug 25 18:30:35 overcloud-database-0 galera(galera)[875829]: INFO: MySQL is not running
Aug 25 18:30:36 overcloud-database-0 pacemaker-remoted[852298]: notice: Cleaning up after remote client pacemaker-remote-overcloud-database-0:3123 disconnected
Aug 25 18:30:36 overcloud-database-0 podman(galera-bundle-podman-0)[875891]: NOTICE: Cleaning up inactive container, galera-bundle-podman-0.
Aug 25 18:30:36 overcloud-database-0 podman(galera-bundle-podman-0)[875911]: INFO: c7d560ce484a019d286b23c8c0d608d10e9c9cd92d042e5a8467ca11f82d3b05
Aug 25 18:30:36 overcloud-database-0 podman(galera-bundle-podman-0)[875915]: DEBUG: galera-bundle-podman-0 stop : 0
Aug 25 18:30:36 overcloud-database-0 pacemaker-remoted[403303]: notice: Cleaning up after remote client pacemaker-remote-172.17.1.132:3121 disconnected


[root@overcloud-database-0 ~]# cat /var/log/pacemaker/pacemaker.log|grep -i 'pacemaker\|galera'|grep '18:3'|head
Aug 25 18:30:29 overcloud-database-0 pacemaker-remoted   [403303] (crm_signal_dispatch) 	notice: Caught 'Terminated' signal | 15 (invoking handler)
Aug 25 18:30:29 overcloud-database-0 pacemaker-remoted   [403303] (lrmd_shutdown) 	info: Sending shutdown request to cluster
Aug 25 18:30:29 overcloud-database-0 pacemaker-remoted   [403303] (lrmd_remote_connection_destroy) 	notice: TLS server session ended
Aug 25 18:30:29 overcloud-database-0 pacemaker-remoted   [403303] (handle_shutdown_ack) 	info: Received shutdown ack
Aug 25 18:30:36 overcloud-database-0 pacemaker-remoted   [403303] (cancel_recurring_action) 	info: Cancelling ocf operation galera-bundle-podman-0_monitor_60000
Aug 25 18:30:36 overcloud-database-0 pacemaker-remoted   [403303] (log_execute) 	info: executing - rsc:galera-bundle-podman-0 action:stop call_id:1113
Aug 25 18:30:36 overcloud-database-0 pacemaker-remoted   [403303] (log_finished) 	info: finished - rsc:galera-bundle-podman-0 action:stop call_id:1113 pid:875851 exit-code:0 exec-time:188ms queue-time:1ms
Aug 25 18:30:36 overcloud-database-0 pacemaker-remoted   [403303] (lrmd_remote_client_msg) 	info: Remote client disconnected while reading from it
Aug 25 18:30:36 overcloud-database-0 pacemaker-remoted   [403303] (lrmd_remote_client_destroy) 	notice: Cleaning up after remote client pacemaker-remote-172.17.1.132:3121 disconnected | id=35ccc46b-c3ab-4adf-a849-82fad5b9cce6
Aug 25 18:30:36 overcloud-database-0 pacemaker-remoted   [403303] (lrmd_exit) 	info: Terminating with 0 clients

Comment 6 errata-xmlrpc 2019-09-10 13:13:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2700

