Bug 1293959 - The displayed status of the resource does not match the target-role
Summary: The displayed status of the resource does not match the target-role
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pacemaker
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Ken Gaillot
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-12-23 17:24 UTC by Raoul Scarazzini
Modified: 2016-02-02 14:39 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-02-02 14:39:34 UTC
Target Upstream Version:
Embargoed:



Description Raoul Scarazzini 2015-12-23 17:24:42 UTC
While working with our ospd-ha 8 environment we ran into this strange problem. We set the nova-compute-clone resource to disabled, as can be seen from the following "target-role":

 Clone: nova-compute-clone
  Meta Attrs: interleave=true target-role=Stopped 
  Resource: nova-compute (class=ocf provider=openstack type=NovaCompute)
   Attributes: auth_url=http://172.20.0.10:5000/v2.0/ username=admin password=KAcGkxF6Nkw2AgEFJ8yUqEQu2 tenant_name=admin domain=localdomain 
   Operations: stop interval=0s timeout=300 (nova-compute-stop-interval-0s)
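
For reference, disabling the clone this way would normally be done with a pcs command along the lines of the one below, which is what sets the target-role=Stopped meta attribute shown above (a sketch; the exact invocation used is not recorded here):

 # Disable the clone; pcs records this as target-role=Stopped on the resource
 pcs resource disable nova-compute-clone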

Then we did a cleanup of nova-compute-clone to remove some failed actions, associated with the resource, that had occurred before the resource was disabled.
After the cleanup we were surprised to observe the following:

 Clone Set: nova-compute-clone [nova-compute]
     Started: [ overcloud-novacompute-2 ]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 overcloud-novacompute-0 overcloud-novacompute-1 overcloud-novacompute-3 ]
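
For reference, the cleanup mentioned above would typically be a pcs command along these lines (a sketch; the exact invocation is not recorded here):

 # Clear the failed-action history for the clone; this also makes the cluster
 # re-probe the current state of the resource on all nodes
 pcs resource cleanup nova-compute-clone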

Eventually, after a bit more time, all nova-compute-clone instances became stopped.
But in our opinion the cluster should never even have started one on novacompute-2,
because the nova-compute-clone resource was disabled.

We have uploaded corosync.log (http://file.rdu.redhat.com/~rscarazz/scale-lab-controller-2-corosync.20151223.log.bz2) from the DC of the 3-node OSP cluster (plus 4 remote nodes). We do not have very exact timing of the events, as we ran a lot of different tests today; the issue happened today after 08:50 EST.

We will try to add more details, should we spot this issue again.

Comment 4 Andrew Beekhof 2016-01-11 00:39:10 UTC
Highly unlikely that we would have started it.
More likely, we found it running (i.e. it hadn't been stopped prior to the cleanup being run).

Comment 5 Ken Gaillot 2016-01-18 22:42:45 UTC
It does look like nova-compute was somehow started outside cluster control.

I see target-role set to Stopped multiple times (04:17:54, 05:32:09, 05:36:27, 05:49:28, and 06:45:25).

After the last one, the cluster correctly ensures nova-compute is stopped on all compute nodes.

It appears that cleanup was run at 11:04:44. The cluster initiates probes (one-time monitor operations) on all 4 compute nodes, finds nova-compute running on overcloud-novacompute-1 and overcloud-novacompute-2, and stops them.

This implies that the service was started outside cluster control sometime between 06:45:25 and 11:04:44. (Pacemaker won't run the regular recurring monitor while the service is stopped, although it is possible to configure a recurring monitor for target-role=Stopped exactly for the purpose of catching cases like this sooner.)
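
For example, such a recurring monitor for the stopped role could be added with something like the following pcs command (a sketch only; the interval and timeout values here are arbitrary):

 # Monitor the resource even while its target-role is Stopped, so a start
 # outside cluster control is detected within one monitor interval
 pcs resource op add nova-compute monitor interval=60s role=Stopped timeout=300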

If you want me to investigate further on the pacemaker side, let me know, but I think the issue is likely elsewhere.

Comment 6 Raoul Scarazzini 2016-02-02 09:01:05 UTC
OK, further investigation of this issue also turned up additional problems with the resource agent, so I think this bug can be closed, especially since that lab is not available anymore.

