Bug 1692782 - Peers are not disconnecting after gluster cleanup using playbook.
Summary: Peers are not disconnecting after gluster cleanup using playbook.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.6
Hardware: x86_64
OS: Linux
Priority: medium
Severity: low
Target Milestone: ---
Target Release: RHHI-V 1.6.z Async Update
Assignee: Sahina Bose
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1692786
Blocks:
 
Reported: 2019-03-26 12:20 UTC by Mugdha Soni
Modified: 2019-10-03 12:24 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously the gluster_peer module did not detach peers correctly during cleanup, so peers remained connected and the cleanup playbook failed as a result. The shell module is now used to collect the list of peers and detach them correctly so that cleanup succeeds.
Clone Of:
: 1692786 (view as bug list)
Environment:
Last Closed: 2019-10-03 12:23:57 UTC
Embargoed:




Links
Red Hat Product Errata RHBA-2019:2963 (last updated 2019-10-03 12:24:06 UTC)

Description Mugdha Soni 2019-03-26 12:20:41 UTC
Description of problem:
-------------------------
The gluster_cleanup.yml playbook does not detach the gluster peers that were probed during gluster deployment.

Version-Release number of selected component:
----------------------------------------------
gluster-ansible-repositories-1.0-1.el7rhgs.noarch
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-features-1.0.4-5.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch
gluster-ansible-roles-1.0.4-4.el7rhgs.noarch
gluster-ansible-infra-1.0.3-3.el7rhgs.noarch

rhvh-4.3.0.5-0.20190313

glusterfs-server-3.12.2-47.el7rhgs

How reproducible:
-------------------
Every time

Steps to Reproduce:
--------------------
1. After a successful gluster deployment, clean up the cluster via the gluster_cleanup.yml playbook.
2. On the CLI, run "gluster peer status"; the previously probed peers are still listed.


Actual results:
------------------
The peers are not detached.

Expected results:
--------------------
The peers should be detached after the cleanup playbook runs. Leftover peers sometimes lead to failures in subsequent deployments.
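
For context, here is a minimal sketch of a module-based detach task of the kind the doc text above says was in use (the gluster_peer Ansible module); the variable name and exact parameters are illustrative assumptions, not the contents of the shipped playbook.

<snip>
# Illustration only: a gluster_peer-module-based detach of the kind that
# fails to remove the peers in this bug. The gluster_peers variable holding
# the list of peer hostnames is hypothetical.
- name: Detach peers from the trusted storage pool
  gluster_peer:
    state: absent
    nodes: "{{ gluster_peers }}"
</snip>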

Comment 1 bipin 2019-05-17 05:15:42 UTC
Moving the bug to ON_QA based on the status of the base bug.

Comment 2 bipin 2019-05-17 07:13:46 UTC
Update based on the base bug
------------------------

FailQA'ing the bug since the playbook was stuck in 'TASK [Delete a node from the trusted storage pool]' for an hour or so.
Since a confirmation warning was added to 'gluster peer detach' in RHGS 3.4.2, the ansible task should run the command with --mode=script (a sketch follows the steps below). Thanks Sas for the info.

Components:
==========
glusterfs-6.0-2.el7rhgs.x86_64
gluster-ansible-roles-1.0.5-1.el7rhgs.noarch
gluster-ansible-infra-1.0.4-1.el7rhgs.noarch
gluster-ansible-repositories-1.0.1-1.el7rhgs.noarch
gluster-ansible-features-1.0.5-1.el7rhgs.noarch

Steps:
=====
1. Complete the gluster deployment
2. Run the gluster_cleanup.yml
3. The ansible run gets stuck for a long time in 'TASK [Delete a node from the trusted storage pool]'.
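
A minimal sketch of the shell-module approach described in the doc text, using the non-interactive --mode=script flag mentioned above; the task names mirror the verification run in comment 5 below, but the exact commands, the awk parsing and the ansible_fqdn guard are assumptions, not the actual gluster-ansible change.

<snip>
# Sketch only, assuming the peer list is read from 'gluster pool list' and
# that gathered facts provide ansible_fqdn for the node running the cleanup.
- name: Get the list of hosts to be detached
  shell: gluster pool list | awk 'NR>1 {print $2}'
  register: peer_list
  changed_when: false

- name: Delete a node from the trusted storage pool
  # --mode=script suppresses the confirmation prompt added in RHGS 3.4.2
  shell: gluster --mode=script peer detach {{ item }}
  loop: "{{ peer_list.stdout_lines }}"
  when: item not in [ansible_fqdn, 'localhost']
</snip>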

Comment 5 SATHEESARAN 2019-06-26 07:21:54 UTC
Tested with RHVH 4.3.5 + RHEL 7.7 + RHGS 3.4.4 ( interim build - glusterfs-6.0-6 ) with ansible 2.8.1-1
with:
gluster-ansible-features-1.0.5-2.el7rhgs.noarch
gluster-ansible-roles-1.0.5-2.el7rhgs.noarch
gluster-ansible-infra-1.0.4-3.el7rhgs.noarch

After running the cleanup playbook, it was observed that all the peers were detached from that node:

<snip>
TASK [Get the list of hosts to be detached] ***********************************************************************************************************************************************************************
changed: [host2.lab.eng.blr.redhat.com]

TASK [Delete a node from the trusted storage pool] ****************************************************************************************************************************************************************
changed: [host2.lab.eng.blr.redhat.com] => (item=host1.lab.eng.blr.redhat.com)
skipping: [host2.lab.eng.blr.redhat.com] => (item=host2.lab.eng.blr.redhat.com) 
changed: [host2.lab.eng.blr.redhat.com] => (item=host3.lab.eng.blr.redhat.com)
</snip>

[root@host2 ~]# gluster pe s
Number of Peers: 0

Comment 7 errata-xmlrpc 2019-10-03 12:23:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2963

