Bug 1524454
| Summary: | when upgrading from 7.2 to 7.4 some resource agents will have NODENAME set empty [rhel-7.4.z] | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Oneata Mircea Teodor <toneata> |
| Component: | resource-agents | Assignee: | Oyvind Albrigtsen <oalbrigt> |
| Status: | CLOSED ERRATA | QA Contact: | pkomarov |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | ||
| Version: | 7.4 | CC: | agk, aherr, arkady_kanevsky, cfeist, chjones, cluster-maint, dciabrin, dsafford, fdinitto, jruemker, mbayer, mjuricek, mschuppe, msuchane, oalbrigt, rmccabe, ssigwald, toneata, ushkalim, whayutin |
| Target Milestone: | rc | Keywords: | Triaged, ZStream |
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | resource-agents-3.9.5-105.el7_4.6 | Doc Type: | If docs needed, set a value |
| Doc Text: | Previously, the galera, redis, and rabbitmq-cluster resource agents were unable to start non-containerized resources when a recent version of the resource agent ran on a Pacemaker version that did not support bundles. With this update, a fallback path for non-containerized resources has been added, and, as a result, the described problem no longer occurs. | Story Points: | --- |
| Clone Of: | 1520574 | Environment: | |
| Last Closed: | 2018-01-25 11:57:49 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | 1520574 | ||
| Bug Blocks: | |||
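The Doc Text above describes the fix as a fallback path for non-containerized resources. A minimal sketch of that idea, assuming helper and variable names such as ocf_attribute_target and NODENAME (illustrative only, not the agents' literal code):

```
# If the bundle-aware node-name lookup yields nothing (e.g. on a Pacemaker
# without bundle support), fall back to the plain local node name so NODENAME
# is never empty. Helper names here are assumptions, not the agents' literal code.
NODENAME=$(ocf_attribute_target 2>/dev/null)
if [ -z "$NODENAME" ]; then
    NODENAME=$(crm_node -n)
fi
```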
Description
Oneata Mircea Teodor
2017-12-11 14:48:40 UTC
mburns confirmed that Fidelity will upgrade to RHEL 7.4z before the 13 JAN 18 OSP8 > 10 upgrade begins. So we don't need the hotfix for this issue (for Fidelity at least).

(In reply to Dana Safford from comment #3)
> mburns confirmed that Fidelity will upgrade to RHEL 7.4z before the 13 JAN
> 18 OSP8 > 10 upgrade begins. So we don't need the hotfix for this issue (for
> Fidelity at least).

This is exactly when they might hit the issue again. It's best to have this resource-agents package installed during the same upgrade from 7.2 to 7.4.z to avoid any potential problem.

(In reply to Dana Safford from comment #3)
> mburns confirmed that Fidelity will upgrade to RHEL 7.4z before the 13 JAN
> 18 OSP8 > 10 upgrade begins. So we don't need the hotfix for this issue (for
> Fidelity at least).

Dana, I just want to clarify your comment.

1. Fidelity is going to upgrade to 7.4.z before the next z-stream release (13-Jan-18). This means they're going to use the current resource-agents package, which does have this issue.
2. If Fidelity is going to upgrade before that date, we definitely need to look at providing them with a hotfix (otherwise they could break going from 7.2 -> 7.4 with the old package).

Chris,

Thanks for the adjustment. I just finished talking with the Fidelity folks. They changed positions and think they will not upgrade to RHEL 7.4z before 13 JAN 2018. They would like to have the hotfix to test before the 13 JAN 2018 date.

Thanks,

Instructions for testing:
1. Deploy an OSP 11 HA environment and make sure it has a recent enough version of resource-agents (a quick version check is sketched after the package name):
resource-agents-3.9.5-105.el7_4.3.x86_64
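A quick way to confirm the installed version on each controller (the host aliases and SSH user mirror the loops used later in this procedure and are illustrative):

```
for i in ctrl0 ctrl1 ctrl2; do ssh heat-admin@$i "rpm -q resource-agents"; done
```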
2. Download to the undercloud all the pacemaker packages from an old enough version, say 7.2.z (one way to fetch them is sketched after this list):
pacemaker-cts-1.1.13-10.el7_2.4.x86_64
pcs-0.9.143-15.el7_2.1.x86_64
pacemaker-remote-1.1.13-10.el7_2.4.x86_64
pacemaker-libs-1.1.13-10.el7_2.4.x86_64
pacemaker-cluster-libs-1.1.13-10.el7_2.4.x86_64
pacemaker-1.1.13-10.el7_2.4.x86_64
pacemaker-cli-1.1.13-10.el7_2.4.x86_64
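One way to fetch these specific builds, assuming the 7.2.z repositories are still reachable from the undercloud, is yumdownloader from yum-utils (the package NVRs are copied from the list above):

```
yumdownloader pacemaker-1.1.13-10.el7_2.4 pacemaker-cli-1.1.13-10.el7_2.4 \
    pacemaker-libs-1.1.13-10.el7_2.4 pacemaker-cluster-libs-1.1.13-10.el7_2.4 \
    pacemaker-remote-1.1.13-10.el7_2.4 pacemaker-cts-1.1.13-10.el7_2.4 \
    pcs-0.9.143-15.el7_2.1
```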
3. Dump a copy of the CIB to record all the resources that are running. We'll use it to recreate a cluster with the pacemaker 7.2.z packages (an optional sanity check is sketched below). From controller-0:
pcs cluster cib > /tmp/cib.xml
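As an optional sanity check (not part of the original procedure), confirm the dump actually contains the resource definitions:

```
xmllint --xpath 'count(//resources/*)' /tmp/cib.xml
```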
4. Delete the cluster on all the overcloud nodes. From controller-0, do:
pcs cluster stop --all
pcs cluster destroy --all
5. Downgrade the pacemaker packages on all the controller nodes. From the undercloud:
for i in ctrl0 ctrl1 ctrl2; do scp *.rpm heat-admin@$i:/tmp; done
for i in ctrl0 ctrl1 ctrl2; do ssh heat-admin@$i "sudo rpm -Uvh --oldpackage /tmp/*.rpm"; done
6. Recreate an empty cluster. From the undercloud:
pcs cluster setup --force --name tripleo_cluster overcloud-controller-0 overcloud-controller-1 overcloud-controller-2
pcs cluster start --all
7. Once the new cluster has settled, repopulate it with all the resource definitions from the original CIB. From controller-0:
pcs property set stonith-enabled=false
xmllint --xpath '//nodes' /tmp/cib.xml | /usr/sbin/cibadmin --replace -V --xml-pipe -o nodes
xmllint --xpath '//resources' /tmp/cib.xml | /usr/sbin/cibadmin --replace -V --xml-pipe -o resources
8. Wait for all the resources to start, and notice that galera, rabbitmq and redis won't start completely, with errors logged in the journal (a way to watch for these is sketched after the log excerpts):
Jan 04 18:04:10 overcloud-controller-0 lrmd[206343]: notice: redis_monitor_60000:222172:stderr [ Could not map name=-l to a UUID ]
Jan 04 18:05:58 overcloud-controller-0 lrmd[206343]: notice: rabbitmq_monitor_10000:228814:stderr [ Could not map name=-l to a UUID ]
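One way to watch the cluster and catch these agent errors while waiting (the grep pattern comes from the messages above; the exact filtering is illustrative):

```
watch -n5 pcs status
journalctl -f | grep "Could not map name"
```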
9. Stop the non-working resources and clean up the bad state. From controller-0:
pcs resource disable rabbitmq-clone
pcs resource disable redis-master
pcs resource disable galera-master
pcs resource cleanup galera-master
10. Install the new resource-agents-3.9.5-105.el7_4.6 on all the overcloud nodes (one way to do this is sketched below).
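A minimal sketch, mirroring the downgrade loop from step 5 (the RPM filename and host aliases are assumptions):

```
for i in ctrl0 ctrl1 ctrl2; do scp resource-agents-3.9.5-105.el7_4.6.x86_64.rpm heat-admin@$i:/tmp; done
for i in ctrl0 ctrl1 ctrl2; do ssh heat-admin@$i "sudo rpm -Uvh /tmp/resource-agents-3.9.5-105.el7_4.6.x86_64.rpm"; done
```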
11. Restart the resources. From controller-0:
pcs resource enable rabbitmq-clone
pcs resource enable redis-master
pcs resource enable galera-master
Galera and the other cloned resources should start properly, with no more of these errors logged in the journal.
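To confirm, a check such as the following (illustrative) should show the galera, redis and rabbitmq clones running on all controllers:

```
pcs status
```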
Tested and verified. Following the above procedure, after updating to resource-agents-3.9.5-105.el7_4.6 on the controllers, all resources (redis, galera, rabbitmq) were promoted to the ACTIVE state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0154