Bug 1641945
| Summary: | [UPGRADES][10]RMQ resource-agent should handle stopped node [rhel-7.5.z] | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Oneata Mircea Teodor <toneata> |
| Component: | resource-agents | Assignee: | Oyvind Albrigtsen <oalbrigt> |
| Status: | CLOSED ERRATA | QA Contact: | Marian Krcmarik <mkrcmari> |
| Severity: | urgent | Docs Contact: | Marek Suchánek <msuchane> |
| Priority: | urgent | ||
| Version: | 7.5 | CC: | agk, aherr, apevec, augol, ccamacho, cchen, cfeist, cluster-maint, ctowsley, fdinitto, jeckersb, lhh, lmiccini, michele, mjuricek, mkrcmari, morazi, oalbrigt, pkomarov, rscarazz, sbradley, sgolovat, srevivo, toneata, yprokule |
| Target Milestone: | rc | Keywords: | ReleaseNotes, Triaged, ZStream |
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | resource-agents-3.9.5-124.el7_5.1 | Doc Type: | Bug Fix |
| Doc Text: | Previously, the `rabbitmqctl cluster_status` command read cached cluster status from disk and returned 0 even when the mnesia service was not running, for example after `rabbitmqctl stop_app` was called or after the service paused during a network partition because of the pause_minority strategy. As a consequence, the resource agent could act on stale cluster status read from disk. With this update, the agent reads the cluster status directly from mnesia during the monitor action, and the described problem no longer occurs. | Story Points: | --- |
| Clone Of: | 1595753 | Environment: | |
| Last Closed: | 2018-11-06 16:16:37 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | 1595753 | ||
| Bug Blocks: | | | |
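The Doc Text above can be illustrated with a sketch of the two status checks involved. The command names come from this report; how the rabbitmq-cluster resource agent invokes them internally is an assumption, and both commands require a live RabbitMQ node, so this is a CLI fragment rather than a runnable script:

```shell
# Old behaviour: cluster_status may serve cached state from disk and
# exit 0 even when the mnesia application is stopped (e.g. after
# `rabbitmqctl stop_app`, or after a pause_minority partition pause).
rabbitmqctl cluster_status

# Fixed behaviour during the monitor action: query mnesia directly, so
# a node whose rabbit application is stopped is reported as stopped
# instead of appearing healthy from stale on-disk state.
rabbitmqctl eval 'rabbit_mnesia:cluster_status_from_mnesia().'
```

The second command is the same check the verifier runs across all controllers in the comment below.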
Description
Oneata Mircea Teodor, 2018-10-23 08:30:37 UTC
Verified, tested on an upgrade from OSP 9 to OSP 10.

```shell
[stack@undercloud-0 ~]$ rhos-release -L
Installed repositories (rhel-7.5):
  10
  ceph-2
  ceph-osd-2
  rhel-7.5

[stack@undercloud-0 ~]$ tail convergence.log
2018-10-25 14:30:01Z [overcloud-UpdateWorkflow-ygrirsvvspks-CephMonUpgradeDeployment-z2orrnzhs2q3.0]: DELETE_IN_PROGRESS state changed
2018-10-25 14:30:01Z [overcloud-UpdateWorkflow-ygrirsvvspks-CephMonUpgradeDeployment-z2orrnzhs2q3.2]: DELETE_IN_PROGRESS state changed
2018-10-25 14:30:02Z [overcloud-UpdateWorkflow-ygrirsvvspks.ControllerPacemakerUpgradeConfig_Step0]: DELETE_COMPLETE state changed
2018-10-25 14:30:03Z [overcloud-UpdateWorkflow-ygrirsvvspks-CephMonUpgradeDeployment-z2orrnzhs2q3.1]: DELETE_COMPLETE state changed
2018-10-25 14:30:22Z [overcloud]: UPDATE_COMPLETE Stack UPDATE completed successfully
Stack overcloud UPDATE_COMPLETE
Overcloud Endpoint: http://10.0.0.101:5000/v2.0
Overcloud Deployed
```

Validated the test from BZ 1595753:

```shell
[stack@undercloud-0 ~]$ ansible controller -mshell -b -a'rabbitmqctl eval "rabbit_mnesia:cluster_status_from_mnesia()."'
 [WARNING]: Found both group and host with same name: undercloud
controller-0 | SUCCESS | rc=0 >>
{ok,{['rabbit@controller-1','rabbit@controller-2','rabbit@controller-0'],
     ['rabbit@controller-0','rabbit@controller-1','rabbit@controller-2'],
     ['rabbit@controller-1','rabbit@controller-2','rabbit@controller-0']}}
controller-2 | SUCCESS | rc=0 >>
{ok,{['rabbit@controller-1','rabbit@controller-2','rabbit@controller-0'],
     ['rabbit@controller-0','rabbit@controller-1','rabbit@controller-2'],
     ['rabbit@controller-1','rabbit@controller-0','rabbit@controller-2']}}
controller-1 | SUCCESS | rc=0 >>
{ok,{['rabbit@controller-1','rabbit@controller-2','rabbit@controller-0'],
     ['rabbit@controller-0','rabbit@controller-1','rabbit@controller-2'],
     ['rabbit@controller-0','rabbit@controller-2','rabbit@controller-1']}}

[stack@undercloud-0 ~]$ ansible controller -mshell -b -a'rpm -qa|grep resource-agents'
 [WARNING]: Found both group and host with same name: undercloud
 [WARNING]: Consider using yum, dnf or zypper module rather than running rpm
controller-0 | SUCCESS | rc=0 >>
resource-agents-3.9.5-124.el7_5.1.x86_64
controller-2 | SUCCESS | rc=0 >>
resource-agents-3.9.5-124.el7_5.1.x86_64
controller-1 | SUCCESS | rc=0 >>
resource-agents-3.9.5-124.el7_5.1.x86_64
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3513