Bug 1191272 - pcs resource enable --wait not working with Filesystem clones
Summary: pcs resource enable --wait not working with Filesystem clones
Keywords:
Status: CLOSED DUPLICATE of bug 1188571
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Tomas Jelinek
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-02-10 22:00 UTC by Nate Straz
Modified: 2015-03-26 10:11 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The user starts a clone resource and instructs pcs to wait for the resource to start. Consequence: As soon as one resource instance starts on a node, pcs exits and reports success. Fix: Wait for all resource instances to start. Result: pcs waits for all resource instances to start and then reports success or an error.
Clone Of:
Environment:
Last Closed: 2015-03-26 10:11:33 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
Red Hat Bugzilla 1188571 (high, CLOSED): The --wait functionality implementation needs an overhaul (last updated 2021-02-22 00:41:40 UTC)

Internal Links: 1188571

Description Nate Straz 2015-02-10 22:00:48 UTC
Description of problem:

When starting a cloned Filesystem resource under pacemaker (GFS2 file system), the resource enable command does not wait for the file system to mount on all nodes.

Here is an instrumented test script which creates a GFS2 file system and adds it to pacemaker.  After the script finishes, pcs resource is run on a node.


making the filesystems on host-057...

Adding /dev/STSRHTS26642/skeet0 to pacemaker
pre-enable: Tue Feb 10 15:42:08 2015
post-enable: Tue Feb 10 15:42:12 2015
 Clone Set: dlm-clone [dlm]
     Started: [ host-057 host-058 host-059 host-060 host-061 ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ host-057 host-058 host-059 host-060 host-061 ]
 Clone Set: skeet0-clone [skeet0]
     Started: [ host-057 host-058 host-059 host-061 ]
     Stopped: [ host-060 ]

The commands from the script:

Feb 10 15:42:07 host-057 qarshd[32093]: Running cmdline: mktemp
Feb 10 15:42:07 host-057 qarshd[32098]: Running cmdline: pcs cluster cib > /tmp/tmp.8T01hzl3gP
Feb 10 15:42:07 host-057 qarshd[32104]: Running cmdline: pcs -f /tmp/tmp.8T01hzl3gP resource create skeet0 Filesystem device="/dev/STSRHTS26642/skeet0" directory="/mnt/skeet0" fstype="gfs2" options="errors=panic" op monitor interval=10s on-fail=fence clone interleave=true
Feb 10 15:42:07 host-057 qarshd[32130]: Running cmdline: pcs -f /tmp/tmp.8T01hzl3gP constraint order start clvmd-clone then skeet0-clone
Feb 10 15:42:08 host-057 qarshd[32137]: Running cmdline: pcs -f /tmp/tmp.8T01hzl3gP constraint colocation add skeet0-clone with clvmd-clone
Feb 10 15:42:08 host-057 qarshd[32143]: Running cmdline: pcs cluster cib-push /tmp/tmp.8T01hzl3gP
Feb 10 15:42:08 host-057 qarshd[32172]: Running cmdline: pcs resource enable skeet0-clone --wait=30
Feb 10 15:42:12 host-057 qarshd[32332]: Running cmdline: rm -f /tmp/tmp.8T01hzl3gP
Feb 10 15:42:12 host-057 qarshd[32363]: Running cmdline: pcs resource


Version-Release number of selected component (if applicable):
pcs-0.9.137-13.el7.x86_64

How reproducible:
Easily

Steps to Reproduce:
1. create and clone a Filesystem resource (see the example commands below)
2. pcs resource enable --wait=N
3. pcs resource
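
For reference, a minimal sketch of these steps, assuming a cluster where the dlm and clvmd clones are already running; the volume group, mount point, resource name, and timeout below are example values, not taken from the original report:

# 1. create the Filesystem resource as a disabled clone (example device and mount point)
pcs resource create fs0 Filesystem device="/dev/myvg/fs0" directory="/mnt/fs0" \
    fstype="gfs2" op monitor interval=10s on-fail=fence clone interleave=true --disabled
# 2. enable the clone and ask pcs to wait for it to start
pcs resource enable fs0-clone --wait=30
# 3. check the status; on an affected pcs build the clone may still show
#    "Stopped" on some nodes even though the enable command already returned
pcs resource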

Actual results:

The file system is not mounted on all nodes immediately after the script completes.

Expected results:

pcs resource enable --wait should wait until the resource has started on all nodes.

Additional info:

Comment 2 Tomas Jelinek 2015-03-17 14:54:22 UTC
Patch in upstream:
https://github.com/feist/pcs/commit/900274a059fe6dd6ffef1ac84d7f67c92a392f33
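
The following is not the pcs code itself, but a rough shell illustration of the semantics the fix provides: keep polling until every instance of the clone is reported as running, or give up when the timeout expires. It assumes crm_resource --locate prints one "is running on" line per active instance; the resource name, node count, and timeout are example values.

nodes=3      # number of cluster nodes expected to run an instance
timeout=30   # seconds, mirroring --wait=30
while [ "$timeout" -gt 0 ]; do
    # count how many instances of the clone are currently running
    running=$(crm_resource --resource delay0 --locate 2>/dev/null | grep -c "is running on")
    if [ "$running" -ge "$nodes" ]; then
        echo "all $nodes instances started"
        break
    fi
    sleep 1
    timeout=$((timeout - 1))
done
[ "$timeout" -eq 0 ] && echo "timed out waiting for all instances" >&2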


Test:
[root@rh70-node1:~]# pcs resource create delay0 delay startdelay=3 --clone --wait && pcs resource
Resource 'delay0' is running on nodes rh70-node1, rh70-node2, rh70-node3.
 Clone Set: delay0-clone [delay0]
     Started: [ rh70-node1 rh70-node2 rh70-node3 ]


[root@rh70-node1:~]# pcs resource create delay0 delay startdelay=3 --clone --disabled
[root@rh70-node1:~]# pcs resource
 Clone Set: delay0-clone [delay0]
     Stopped: [ rh70-node1 rh70-node2 rh70-node3 ]
[root@rh70-node1:~]# pcs resource enable delay0 --wait && pcs resource
Resource 'delay0' is running on nodes rh70-node1, rh70-node2, rh70-node3.
 Clone Set: delay0-clone [delay0]
     Started: [ rh70-node1 rh70-node2 rh70-node3 ]

