Bug 1303119 - 'pcs cluster cleanup' returns non-zero after a couple of iterations
Summary: 'pcs cluster cleanup' returns non-zero after a couple of iterations
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: pacemaker
Version: 6.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Ken Gaillot
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-01-29 15:41 UTC by michal novacek
Modified: 2023-09-14 03:16 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-10-09 19:48:13 UTC
Target Upstream Version:
Embargoed:


Attachments
pcs cluster report output (1.21 MB, application/x-bzip), 2016-01-29 15:41 UTC, michal novacek
pcs cluster cib (40.01 KB, text/plain), 2016-01-29 15:42 UTC, michal novacek
pcs cluster config (4.70 KB, text/plain), 2016-01-29 15:42 UTC, michal novacek

Description michal novacek 2016-01-29 15:41:27 UTC
Created attachment 1119455 [details]
pcs cluster report output

Description of problem:

I have a running cluster with no constraints and no resources.
In a loop, I create resources (and constraints), start all resources, run
pcs cleanup, and remove all resources again. After several iterations
(usually tens), 'cleanup' returns an error.
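
A minimal sketch of the loop, assuming the cluster is already running
(the resource and node names below are hypothetical, not the ones our
tests use):

import subprocess, sys

def pcs(*args):
    """Run a pcs subcommand and return its exit status."""
    return subprocess.call(["pcs"] + list(args))

for i in range(30):  # the failure usually appears within ~20 iterations
    pcs("resource", "create", "dummy1", "ocf:pacemaker:Dummy")     # hypothetical resource
    pcs("constraint", "location", "dummy1", "prefers", "node-01")  # hypothetical node
    rc = pcs("resource", "cleanup", "--wait")
    if rc != 0:
        sys.exit("iteration %d: 'pcs resource cleanup --wait' returned %d" % (i, rc))
    pcs("resource", "delete", "dummy1")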

Version-Release number of selected component (if applicable):
pacemaker-1.1.14-0.4_rc5.el6.x86_64
pcs-0.9.148-1.el6.x86_64

How reproducible: takes about 20 iterations to happen

Steps to Reproduce:
1. Run several tens of iterations of the regression test we use

Actual results: 'pcs resource cleanup --wait' returns non-zero

Expected results: 'pcs resource cleanup --wait' returns zero

Additional info:
I was not able to isolate the exact commands necessary, but it always
happens with our regression (Python) tests. I was hoping that you might be
able to look at the logs I'm attaching to see whether you can find out why
this is happening.

Comment 1 michal novacek 2016-01-29 15:42:00 UTC
Created attachment 1119456 [details]
pcs cluster cib

Comment 2 michal novacek 2016-01-29 15:42:29 UTC
Created attachment 1119457 [details]
pcs cluster config

Comment 4 Ken Gaillot 2016-02-08 17:52:56 UTC
Michal,

Some questions to help target what's going on:

Do your scripts run "pcs cleanup" without a specific resource name?

Do you have any timestamps from when the cleanup returned nonzero?

Do you know the exact value that was returned?

Do you know whether this occurs on other RHEL versions?
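
For reference, a minimal sketch of a wrapper that would capture the
timestamp and exact return value asked about above (the wrapper itself is
hypothetical, not part of the existing tests):

import datetime, subprocess

def logged_cleanup():
    """Run cleanup and log a UTC timestamp plus the exit status on failure."""
    rc = subprocess.call(["pcs", "resource", "cleanup", "--wait"])
    if rc != 0:
        print("%s: 'pcs resource cleanup --wait' returned %d"
              % (datetime.datetime.utcnow().isoformat(), rc))
    return rc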

Comment 5 Ken Gaillot 2016-09-26 21:24:29 UTC
Michal,

We have a fairly short time frame to get any fixes into 6.9. We will do a rebase to upstream 1.1.15 for 6.9, which fixes a number of bugs. If you can answer any of the questions in Comment 4, we can investigate further, otherwise we'll have to close this one.

Comment 8 Ken Gaillot 2017-10-09 19:48:13 UTC
Since RHEL 6 is beyond the "Production 2" phase and will only get critical bug fixes, I am closing this without a fix. However, there have been rebases since this report, so it may no longer be an issue.

Comment 9 Red Hat Bugzilla 2023-09-14 03:16:57 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

