Bug 1524454 - when upgrading from 7.2 to 7.4 some resource agents will have NODENAME set empty [rhel-7.4.z]
Summary: when upgrading from 7.2 to 7.4 some resource agents will have NODENAME set em...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: resource-agents
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Oyvind Albrigtsen
QA Contact: pkomarov
URL:
Whiteboard:
Depends On: 1520574
Blocks:
 
Reported: 2017-12-11 14:48 UTC by Oneata Mircea Teodor
Modified: 2021-03-11 16:36 UTC
CC List: 20 users

Fixed In Version: resource-agents-3.9.5-105.el7_4.6
Doc Type: If docs needed, set a value
Doc Text:
Previously, the galera, redis, and rabbitmq-cluster resource agents were unable to start non-containerized resources when a recent version of the resource agent ran on a Pacemaker version that did not support bundles. With this update, a fallback path for non-containerized resources has been added, and, as a result, the described problem no longer occurs.
Clone Of: 1520574
Environment:
Last Closed: 2018-01-25 11:57:49 UTC
Target Upstream Version:
Embargoed:
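
The Doc Text above mentions a fallback path for non-containerized resources, fixed upstream by the linked "Fix fallback name for ocf_attribute_target" pull request. Below is a minimal sketch of that fallback idea only, not the shipped patch; the meta-attribute variable name is an assumption:

   # Illustrative sketch, not the actual upstream code.
   # Idea: if Pacemaker supplies a container/bundle target name, use it;
   # otherwise fall back so the node name is never empty.
   ocf_attribute_target() {
       if [ -n "$OCF_RESKEY_CRM_meta_physical_host" ]; then   # assumed variable name, bundle-aware Pacemaker
           echo "$OCF_RESKEY_CRM_meta_physical_host"
       elif [ -n "$1" ]; then
           echo "$1"                                           # name passed in by the agent
       else
           crm_node -n                                         # fallback: local cluster node name
       fi
   }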



Links
Github ClusterLabs resource-agents pull 1066 (closed): Fix fallback name for ocf_attribute_target. Last updated 2020-04-30 00:26:44 UTC
Red Hat Product Errata RHBA-2018:0154 (SHIPPED_LIVE): resource-agents bug fix update. Last updated 2018-01-25 16:19:11 UTC

Description Oneata Mircea Teodor 2017-12-11 14:48:40 UTC
This bug has been copied from bug #1520574 and has been proposed to be backported to 7.4 z-stream (EUS).

Comment 3 Dana Safford 2017-12-12 18:28:26 UTC
mburns confirmed that Fidelity will upgrade to RHEL 7.4z before the 13 JAN 18 OSP8 > 10 upgrade begins. So we don't need the hotfix for this issue (for Fidelity at least).

Comment 4 Fabio Massimo Di Nitto 2017-12-12 19:05:09 UTC
(In reply to Dana Safford from comment #3)
> mburns confirmed that Fidelity will upgrade to RHEL 7.4z before the 13 JAN
> 18 OSP8 > 10 upgrade begins. So we don't need the hotfix for this issue (for
> Fidelity at least).

This is exactly when they might hit the issue again. It's best to have this resource-agents package installed during the same upgrade from 7.2 to 7.4.z to avoid any potential problem.

Comment 5 Chris Feist 2017-12-12 19:57:56 UTC
(In reply to Dana Safford from comment #3)
> mburns confirmed that Fidelity will upgrade to RHEL 7.4z before the 13 JAN
> 18 OSP8 > 10 upgrade begins. So we don't need the hotfix for this issue (for
> Fidelity at least).

Dana, I just want to clarify your comment.

1.  Fidelity is going to upgrade to 7.4.z before the next z-stream release (13-Jan-18).  This means they're going to use the current resource-agents package which does have this issue.

2.  If Fidelity is going to upgrade before that date, we definitely need to look at providing them with a hotfix (otherwise the 7.2 -> 7.4 upgrade could break because of the old package).

Comment 6 Dana Safford 2017-12-12 20:21:58 UTC
Chris,

Thanks for the adjustment.

I just finished talking with the Fidelity folks. They have changed their position and now think they will not upgrade to RHEL 7.4z before 13 JAN 2018.

They would like to have the hotfix to test before the 13 JAN 2018 date.

Thanks,

Comment 14 Damien Ciabrini 2018-01-04 22:56:13 UTC
Instructions for testing:

. Deploy an OSP 11 HA environment, and make sure it has a recent enough version of resource-agents:
   resource-agents-3.9.5-105.el7_4.3.x86_64
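  One quick way to confirm the installed version on each controller (not part of the original procedure; it assumes the same heat-admin SSH access used in the later steps):
   for i in ctrl0 ctrl1 ctrl2; do ssh heat-admin@$i "rpm -q resource-agents"; done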

. On the undercloud, download all the pacemaker packages from an old enough version, say 7.2.z (one way to fetch them is sketched after the package list):
   pacemaker-cts-1.1.13-10.el7_2.4.x86_64
   pcs-0.9.143-15.el7_2.1.x86_64
   pacemaker-remote-1.1.13-10.el7_2.4.x86_64
   pacemaker-libs-1.1.13-10.el7_2.4.x86_64
   pacemaker-cluster-libs-1.1.13-10.el7_2.4.x86_64
   pacemaker-1.1.13-10.el7_2.4.x86_64
   pacemaker-cli-1.1.13-10.el7_2.4.x86_64
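
  One possible way to fetch these (a sketch only; it assumes the 7.2.z repositories are still reachable from the undercloud and that yum-utils provides yumdownloader):
   yumdownloader pacemaker-1.1.13-10.el7_2.4 pacemaker-libs-1.1.13-10.el7_2.4 \
       pacemaker-cluster-libs-1.1.13-10.el7_2.4 pacemaker-cli-1.1.13-10.el7_2.4 \
       pacemaker-remote-1.1.13-10.el7_2.4 pacemaker-cts-1.1.13-10.el7_2.4 \
       pcs-0.9.143-15.el7_2.1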

. Dump a copy of the CIB to record all the resources running. We'll use it to recreate a cluster with pacemaker 7.2.z packages. From controller-0:
   pcs cluster cib > /tmp/cib.xml

. Delete the cluster on all the overcloud nodes. From controller-0, do:
   pcs cluster stop --all
   pcs cluster destroy --all

. Downgrade the pacemaker packages on all the controller nodes. From the undercloud:
  for i in ctrl0 ctrl1 ctrl2; do scp *.rpm heat-admin@$i:/tmp; done
  for i in ctrl0 ctrl1 ctrl2; do ssh heat-admin@$i "sudo rpm -Uvh --oldpackage /tmp/*.rpm"; done

. Recreate an empty cluster. From the undercloud:
   pcs cluster setup --force --name tripleo_cluster overcloud-controller-0 overcloud-controller-1 overcloud-controller-2
   pcs cluster start --all

. Once the new cluster has settled, repopulate it with all the resource definitions from the original CIB. From controller-0:

pcs property set stonith-enabled=false
xmllint --xpath '//nodes' /tmp/cib.xml | /usr/sbin/cibadmin --replace -V --xml-pipe -o nodes
xmllint --xpath '//resources' /tmp/cib.xml | /usr/sbin/cibadmin --replace -V --xml-pipe -o resources

. Wait for all the resources to start, and notice that galera, rabbitmq, and redis won't start completely, with errors logged in the journal:
   Jan 04 18:04:10 overcloud-controller-0 lrmd[206343]:   notice: redis_monitor_60000:222172:stderr [ Could not map name=-l to a UUID ]
   Jan 04 18:05:58 overcloud-controller-0 lrmd[206343]:   notice: rabbitmq_monitor_10000:228814:stderr [ Could not map name=-l to a UUID ]

. Stop the non-working resources and clean up the bad state. From controller-0:

   pcs resource disable rabbitmq-clone
   pcs resource disable redis-master
   pcs resource disable galera-master
   pcs resource cleanup galera-master
     
. Install the new resource-agents-3.9.5-105.el7_4.6 package on all the overcloud nodes; one way to push it from the undercloud is sketched below.
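  A sketch only, mirroring the downgrade step above (it assumes the updated RPM has already been downloaded to the undercloud):
   for i in ctrl0 ctrl1 ctrl2; do scp resource-agents-3.9.5-105.el7_4.6.x86_64.rpm heat-admin@$i:/tmp; done
   for i in ctrl0 ctrl1 ctrl2; do ssh heat-admin@$i "sudo rpm -Uvh /tmp/resource-agents-3.9.5-105.el7_4.6.x86_64.rpm"; done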

. Restart the resources. From controller-0:
   pcs resource enable rabbitmq-clone
   pcs resource enable redis-master
   pcs resource enable galera-master

Galera and the other cloned resources should start properly, with no more such errors logged in the journal.
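
One way to double-check this from controller-0 (illustrative only; the grep pattern matches the journal errors shown earlier):
   pcs status
   journalctl --since "10 minutes ago" | grep "Could not map name"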

Comment 15 pkomarov 2018-01-08 20:07:50 UTC
Tested and verified.

Following the above procedure, after updating to resource-agents-3.9.5-105.el7_4.6 on the controllers, all resources (redis, galera, rabbitmq) were promoted to ACTIVE state.

Comment 20 errata-xmlrpc 2018-01-25 11:57:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0154

