Bug 1692782
| Summary: | Peers are not disconnecting after gluster cleanup using playbook. | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Mugdha Soni <musoni> |
| Component: | rhhi | Assignee: | Sahina Bose <sabose> |
| Status: | CLOSED ERRATA | QA Contact: | SATHEESARAN <sasundar> |
| Severity: | low | Docs Contact: | |
| Priority: | medium | | |
| Version: | rhhiv-1.6 | CC: | bshetty, godas, pasik, rhs-bugs |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHHI-V 1.6.z Async Update | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | Previously, the gluster_peer module did not detach peers correctly during cleanup, so peers remained connected and the cleanup playbook failed. The shell module is now used to collect the list of peers and detach them correctly, so that cleanup succeeds. | | |
| Story Points: | --- | | |
| Clone Of: | | | |
| : | 1692786 (view as bug list) | Environment: | |
| Last Closed: | 2019-10-03 12:23:57 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1692786 | | |
| Bug Blocks: | | | |
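The fix the Doc Text describes replaces the gluster_peer module with shell tasks. A minimal sketch of what those tasks could look like, reusing the task names that appear in the QA log below; the exact commands, the `peer_list` variable, and the `when` condition are assumptions, not the shipped playbook code:

```yaml
# Illustrative sketch only -- the real gluster_cleanup.yml may differ.
- name: Get the list of hosts to be detached
  # 'gluster pool list' prints UUID, hostname, and state columns; keep the
  # hostname column and drop the header line. (Assumed implementation.)
  shell: gluster pool list | awk 'NR>1 {print $2}'
  register: peer_list

- name: Delete a node from the trusted storage pool
  # --mode=script auto-confirms the prompt that 'gluster peer detach'
  # gained in RHGS 3.4.2, which otherwise leaves the task hanging.
  shell: gluster --mode=script peer detach {{ item }}
  with_items: "{{ peer_list.stdout_lines }}"
  # Skip the node the play runs on, matching the 'skipping' line in the log.
  when: item != inventory_hostname
```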
Description
Mugdha Soni
2019-03-26 12:20:41 UTC
Moving the bug to ON_QA based on the status of the base bug.

Update based on base bug
------------------------
FailQA'ing the bug since the playbook was stuck in 'TASK [Delete a node from the trusted storage pool]' for an hour or so. Since a warning was added to gluster peer detach in RHGS 3.4.2, the ansible script should run it with --mode=script. Thanks Sas for the info.

Components:
==========
glusterfs-6.0-2.el7rhgs.x86_64
gluster-ansible-roles-1.0.5-1.el7rhgs.noarch
gluster-ansible-infra-1.0.4-1.el7rhgs.noarch
gluster-ansible-repositories-1.0.1-1.el7rhgs.noarch
gluster-ansible-features-1.0.5-1.el7rhgs.noarch

Steps:
=====
1. Complete the gluster deployment.
2. Run gluster_cleanup.yml.
3. The ansible run gets stuck for a long time in 'TASK [Delete a node from the trusted storage pool]'.

Tested with RHVH 4.3.5 + RHEL 7.7 + RHGS 3.4.4 (interim build - glusterfs-6.0-6) and ansible 2.8.1-1, with:
gluster-ansible-features-1.0.5-2.el7rhgs.noarch
gluster-ansible-roles-1.0.5-2.el7rhgs.noarch
gluster-ansible-infra-1.0.4-3.el7rhgs.noarch

After running the cleanup playbook, it was observed that all the peers were detached from that node:

<snip>
TASK [Get the list of hosts to be detached] ***********************************************************
changed: [host2.lab.eng.blr.redhat.com]

TASK [Delete a node from the trusted storage pool] ****************************************************
changed: [host2.lab.eng.blr.redhat.com] => (item=host1.lab.eng.blr.redhat.com)
skipping: [host2.lab.eng.blr.redhat.com] => (item=host2.lab.eng.blr.redhat.com)
changed: [host2.lab.eng.blr.redhat.com] => (item=host3.lab.eng.blr.redhat.com)
</snip>

[root@host2 ~]# gluster pe s
Number of Peers: 0

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2963
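For reference, the hang described in the FailQA comment above comes from the confirmation prompt that gluster peer detach gained in RHGS 3.4.2: a task that invokes it without script mode blocks indefinitely waiting for a y/n answer. A hypothetical side-by-side, using one of the QA hostnames:

```yaml
# Hypothetical demonstration of the hang -- not part of the real playbook.
- name: Detach without script mode
  # Blocks on the "... do you want to proceed? (y/n)" prompt.
  shell: gluster peer detach host1.lab.eng.blr.redhat.com

- name: Detach in script mode
  # The prompt is auto-confirmed, so the task completes.
  shell: gluster --mode=script peer detach host1.lab.eng.blr.redhat.com
```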