Description of problem:
Every 60s, pacemaker monitors the health of the bundle resources by probing whether the hosting containers are still running (e.g. "docker inspect") and whether a command can be exec'd into them (e.g. "docker exec").
The probe has had a default timeout of 20s since OSP 12, which has proved largely sufficient in both idle and loaded environments.
In OSP 15, however, when podman containers are hosted on a filesystem under IO load, some commands like "podman inspect" may take an unpredictably long (effectively unbounded) time, which makes the probe time out. In that case, pacemaker considers the container dead and tries to force-restart it, which breaks the control plane's service availability.
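The failure mode can be sketched with coreutils "timeout" standing in for pacemaker's internal probe deadline (this is an illustration, not the actual resource agent code; "monitor_probe" is a hypothetical helper):

```shell
# Hypothetical sketch: pacemaker's monitor is equivalent to running the
# probe command under a hard deadline. A "podman inspect" that stalls on
# IO past the deadline surfaces as exit code 124 from "timeout", which
# pacemaker then treats as a dead container.
monitor_probe() {
    # $1 = probe command, $2 = timeout in seconds (20s by default)
    timeout "$2" sh -c "$1"
}

# Simulate an inspect call that stalls for 3s against a 1s deadline:
rc=0
monitor_probe "sleep 3" 1 || rc=$?
echo "$rc"   # 124: probe timed out, so the monitor reports failure
```

A healthy probe that returns within the deadline exits 0 and the monitor passes; only the stall past the deadline triggers the forced restart described above.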
Version-Release number of selected component (if applicable):
How reproducible:
This has been reproduced on bare metal and VMs,
with either a slow SATA disk (bare metal) or a fast SSD (VM),
whenever some IO load is applied on the system.
Steps to Reproduce:
1. Deploy an OpenStack cloud with an HA control plane.
2. Run a simple IO load on one controller:
echo 2 > /proc/sys/vm/drop_caches
dd if=/dev/zero of=bigtestfile bs=1M
3. Wait for at most 60s until pacemaker starts probing a podman container.

Actual results:
Probes fail, pacemaker restarts the bundles, and the control plane service breaks on the node where the IO load runs.

Expected results:
Pacemaker successfully runs the probes and the cluster keeps running normally.
One of the root causes is the podman RA, which needlessly calls podman commands multiple times. In this BZ we'd like to track the optimizations that can be made in the podman resource agent.
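A minimal sketch of that optimization, assuming the goal is to replace several expensive podman invocations with a single cached one ("slow_inspect" below is a stand-in for "podman inspect", and the names are illustrative, not the actual RA code):

```shell
# Ask the expensive backend once, then answer every subsequent question
# from the cached output instead of re-invoking it. Under IO load, each
# avoided "podman inspect" is one fewer chance to blow the 20s timeout.
CALLS=0
slow_inspect() {
    CALLS=$((CALLS + 1))          # count real backend invocations
    INSPECT_OUT="running 1234"    # pretend result: container status + PID
}

slow_inspect                      # one real call...
status=${INSPECT_OUT%% *}         # ...answers "is it running?"
pid=${INSPECT_OUT##* }            # ...and "what is its PID?"

echo "$CALLS $status $pid"        # 1 running 1234
```

Two questions are answered with one backend call; an unoptimized agent would have invoked the backend once per question.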
This bug has been copied as the 8.0.0 z-stream bug #1734062 and must now be resolved in the current update release; setting the blocker flag.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.