Bug 1718219 - bundle resources restarting when probing podman containers takes too much time
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: resource-agents
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 8.0
Assignee: Oyvind Albrigtsen
QA Contact: pkomarov
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-06-07 09:57 UTC by Damien Ciabrini
Modified: 2020-11-14 10:59 UTC
CC: 12 users

Fixed In Version: resource-agents-4.1.1-28.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-05 20:34:25 UTC
Type: Bug
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:3307 0 None None None 2019-11-05 20:34:41 UTC

Description Damien Ciabrini 2019-06-07 09:57:54 UTC
Description of problem:
Every 60s, Pacemaker monitors the health of the bundle resources by probing whether the hosting containers are still running (i.e. "docker inspect") and whether a command can be executed inside them (i.e. "docker exec").
The probe has had a default timeout of 20s since OSP 12, which has proven largely sufficient in both idle and loaded environments.

In OSP 15, however, when podman containers are hosted on a filesystem under I/O load, commands like "podman inspect" may take an unbounded, unpredictable amount of time, causing the probe to time out. When that happens, Pacemaker considers the container dead and tries to force-restart it, which breaks the control plane's service availability.
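
For reference, a minimal sketch of what such a monitor probe boils down to (simplified; the actual resource-agent logic differs, and the container name is only an example):

CONTAINER=haproxy-bundle-podman-0   # example name

# 1. is the container still running?
podman inspect --format '{{.State.Running}}' "$CONTAINER" || exit 1

# 2. can a command be executed inside it?
podman exec "$CONTAINER" /bin/true || exit 1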

Version-Release number of selected component (if applicable):
resource-agents-4.2.0-2.fc30.x86_64

How reproducible:
Reproduced on both bare metal (with a slow SATA disk) and in a VM (with a fast SSD), whenever some I/O load is applied to the system.

Steps to Reproduce:
1. Deploy OpenStack with an HA control plane.

2. Run a simple I/O load on one controller:
echo 2 > /proc/sys/vm/drop_caches
sync
dd if=/dev/zero of=bigtestfile bs=1M

3. Wait (at most 60s) for Pacemaker to start probing a podman container:
pcs status
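
To see the slowdown directly, the probe commands can be timed by hand while the I/O load is running (illustrative; the container name is only an example):

time podman inspect --format '{{.State.Running}}' haproxy-bundle-podman-0
time podman exec haproxy-bundle-podman-0 /bin/true

Under load, either command can exceed the 20s probe timeout.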

Actual results:
Probes fail; Pacemaker restarts the bundles and breaks control plane services on the node where the I/O load runs.

Expected results:
Pacemaker runs the probes successfully and the cluster keeps running normally.


Additional info:
One of the root causes is the podman resource agent (RA), which needlessly calls podman commands multiple times. This BZ tracks the optimizations that can be made in the podman resource agent.
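
As an illustration of the optimization direction (a hypothetical sketch, not the actual patch): attributes that would otherwise require separate podman invocations can be fetched in a single inspect call and parsed once:

# hypothetical sketch: one podman call instead of one call per attribute
out=$(podman inspect --format '{{.State.Running}} {{.State.Pid}}' "$CONTAINER") || exit 1
read -r running pid <<<"$out"

Each podman invocation avoided reduces the probe's exposure to I/O-induced latency.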

Comment 9 Oneata Mircea Teodor 2019-07-29 14:38:27 UTC
This bug has been cloned as 8.0.0 z-stream bug #1734062 and must now be resolved in the current update release; the blocker flag has been set.

Comment 10 pkomarov 2019-09-03 19:06:21 UTC
Verified,
https://bugzilla.redhat.com/show_bug.cgi?id=1734062#c4

Comment 12 errata-xmlrpc 2019-11-05 20:34:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3307

