Bug 1652053 - Refreshing an unmanaged resource does not clean up its state entirely
Summary: Refreshing an unmanaged resource does not clean up its state entirely
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pacemaker
Version: 7.6
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 7.7
Assignee: Ken Gaillot
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard: PMApproved
Depends On:
Blocks: 1664242
 
Reported: 2018-11-21 13:37 UTC by Damien Ciabrini
Modified: 2019-08-06 12:54 UTC
CC List: 8 users

Fixed In Version: pacemaker-1.1.20-1.el7
Doc Type: Bug Fix
Doc Text:
Previously, the "pcs resource refresh" command or the "pcs resource cleanup" command with a failed resource sometimes failed to wait for results from all nodes. As a consequence, those resources were not cleaned on all nodes. With this update, the problem has been fixed, and Pacemaker now cleans resources on all nodes.
Clone Of:
Clones: 1664242 (view as bug list)
Environment:
Last Closed: 2019-08-06 12:53:44 UTC
Target Upstream Version:
Embargoed:


Attachments
crm_reports taken after running the idiom (422.08 KB, application/x-bzip), 2018-11-21 13:37 UTC, Damien Ciabrini
log of crm_resource --refresh (277.31 KB, text/plain), 2018-11-21 13:39 UTC, Damien Ciabrini
cat logs | grep -v -e get_xpath_object > logs2 (185.24 KB, text/plain), 2018-11-21 13:40 UTC, Damien Ciabrini
galera config for all cluster nodes (669 bytes, text/plain), 2018-11-21 13:41 UTC, Damien Ciabrini


Links
Red Hat Product Errata RHBA-2019:2129, last updated 2019-08-06 12:54:11 UTC

Description Damien Ciabrini 2018-11-21 13:37:34 UTC
Created attachment 1507690 [details]
crm_reports taken after running the idiom

Description of problem:

In OpenStack we have an idiomatic way to force a resource parameter
update without restarting the resource. This is used when we know we
don't require an immediate service restart and we want to avoid service
disruption.

We used to perform the following high-level steps, e.g. for galera:
  . unmanage the resource
  . update the resource's property
  . refresh the resource to force pacemaker to reprobe the
    resource's state and forget about past operations
  . manage the resource
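For reference, with a hypothetical resource name, the idiom maps onto plain pcs commands like the sketch below (the galera-bundle-specific commands used in this report are in step 3 of the reproducer):

# sketch only: "my-resource" and "some_param=value" are placeholder names
pcs resource unmanage my-resource
pcs resource update my-resource some_param=value
pcs resource refresh my-resource
pcs resource manage my-resource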

Currently in RHEL7.6, this no longer prevents the resource from
restarting. This causes unwanted resource restarts in our workload in
various scenarios (config update, cluster node replacement...).


Version-Release number of selected component (if applicable):
pacemaker-1.1.19-8.el7_6.1.x86_64

How reproducible:
Always

Steps to Reproduce:
0. create a 3-node cluster on three hosts called ra1, ra2, ra3

1. install the galera config provided in the attachment on all cluster nodes
mkdir /etc/my.cnf.d
# copy galera.cnf attachment in /etc/my.cnf.d/galera.cnf
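For example (not part of the original report; it assumes the attachment was saved locally as ./galera.cnf and that root SSH access to the nodes is available), the file can be pushed to all three nodes with:

# assumes ./galera.cnf exists locally and root SSH works to ra1, ra2, ra3
for node in ra1 ra2 ra3; do
    ssh root@$node mkdir -p /etc/my.cnf.d
    scp ./galera.cnf root@$node:/etc/my.cnf.d/galera.cnf
done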

2. create a bundled galera master/slave resource on a cluster
pcs resource bundle create galera-bundle container docker image=docker.io/tripleoqueens/centos-binary-mariadb:current-tripleo-rdo network=host options="--user=root --log-driver=journald" replicas=3 masters=3 run-command="/usr/sbin/pacemaker_remoted" network control-port=3123 storage-map id=map0 source-dir=/dev/log target-dir=/dev/log storage-map id=map1 source-dir=/dev/zero target-dir=/etc/libqb/force-filesystem-sockets options=ro storage-map id=map2 source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=map3 source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=map4 source-dir=/etc/my.cnf.d target-dir=/etc/my.cnf.d options=ro storage-map id=map5 source-dir=/var/lib/mysql target-dir=/var/lib/mysql options=rw storage-map id=map6 source-dir=/var/log/mysql target-dir=/var/log/mysql options=rw storage-map id=pcmk1 source-dir=/var/log/pacemaker target-dir=/var/log/pacemaker options=rw --disabled

pcs resource create galera ocf:heartbeat:galera wsrep_cluster_address='gcomm://ra1,ra2,ra3' op promote timeout=60 on-fail=block meta container-attribute-target=host notify=true bundle galera-bundle

pcs resource enable galera-bundle

3. run the resource update idiom
pcs resource unmanage galera-bundle
pcs resource update galera additional_parameters=--open-files-limit=2048
pcs resource refresh galera-bundle
pcs resource manage galera-bundle

Actual results:
After the refresh, the three galera replicas are restarted

Expected results:
The galera replicas should not restart

Additional info:
quoting beekhof after he had a quick look at the attached crm_reports:
"""
<beekhof> the basic problem appears to be that start_mainloop() is not actually starting mainloop
<beekhof> allowing crm_resource to terminate early and causing the crmd to drop any commands it sent
<beekhof> maybe crmd_replies_needed needs to be declared volatile
<beekhof> because the code looks correct
"""

In addition to the crm_reports grabbed after running the update idiom,
I'm attaching the verbose logs captured while running:
 '/usr/sbin/crm_resource --refresh --resource galera-bundle -VVVVVV'

Comment 2 Damien Ciabrini 2018-11-21 13:39:15 UTC
Created attachment 1507691 [details]
log of crm_resource --refresh

Comment 3 Damien Ciabrini 2018-11-21 13:40:09 UTC
Created attachment 1507692 [details]
cat logs | grep -v -e get_xpath_object > logs2

Comment 4 Damien Ciabrini 2018-11-21 13:41:20 UTC
Created attachment 1507697 [details]
galera config for all cluster nodes

Comment 5 Damien Ciabrini 2018-11-21 13:47:37 UTC
And... quick correction to the description: the replicas don't restart right after the "refresh", they restart after the "manage"... sorry for the confusion

Comment 6 Ken Gaillot 2018-12-21 18:29:00 UTC
This is a regression caused by a downstream-only patch whose effect changed between the pacemaker versions that 7.5 and 7.6 are based on. start_mainloop() isn't being called because it isn't there. The patch will need to be updated for the 7.6 base.

Comment 11 errata-xmlrpc 2019-08-06 12:53:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2129

