Bug 1692786 - Peers are not disconnecting after gluster cleanup using playbook.
Summary: Peers are not disconnecting after gluster cleanup using playbook.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-ansible
Version: rhgs-3.4
Hardware: x86_64
OS: Linux
Priority: high
Severity: low
Target Milestone: ---
Target Release: RHGS 3.4.z Async Update
Assignee: Gobinda Das
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: 1692782
 
Reported: 2019-03-26 12:23 UTC by Mugdha Soni
Modified: 2019-12-05 08:45 UTC (History)
8 users

Fixed In Version: gluster-ansible-roles-1.0.5-2
Doc Type: Bug Fix
Doc Text:
Previously, the gluster_peer module did not detach peers correctly during cleanup, so peers remained connected and the cleanup playbook failed as a result. The shell module is now used to collect the list of peers and detach them, so that cleanup succeeds.
Clone Of: 1692782
Environment:
Last Closed: 2019-10-03 07:58:12 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:2557 0 None None None 2019-10-03 07:58:34 UTC

Description Mugdha Soni 2019-03-26 12:23:06 UTC
+++ This bug was initially created as a clone of Bug #1692782 +++

Description of problem:
-------------------------
The gluster_cleanup.yml playbook does not detach the gluster peers that were probed during gluster deployment.

Version-Release number of selected component:
----------------------------------------------
gluster-ansible-repositories-1.0-1.el7rhgs.noarch
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-features-1.0.4-5.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch
gluster-ansible-roles-1.0.4-4.el7rhgs.noarch
gluster-ansible-infra-1.0.3-3.el7rhgs.noarch

rhvh-4.3.0.5-0.20190313

glusterfs-server-3.12.2-47.el7rhgs

How reproducible:
-------------------
Every time

Steps to Reproduce:
--------------------
1. After a successful gluster deployment, clean up the cluster via the gluster_cleanup.yml playbook.
2. On the CLI, run the command "gluster peer status"; the previously probed peers are still listed.


Actual results:
------------------
The peers are not detached.

Expected results:
--------------------
The peers should be detached after the cleanup playbook runs. The stale peers sometimes cause subsequent deployments to fail.

Comment 2 SATHEESARAN 2019-03-27 07:18:03 UTC
The cleanup playbook provided by gluster-ansible-roles needs to include steps to dismantle the gluster trusted storage pool as well.

Comment 3 Gobinda Das 2019-03-28 12:45:44 UTC
Raised PR: https://github.com/gluster/gluster-ansible/pull/66

Comment 7 bipin 2019-05-17 07:15:04 UTC
TASK [Delete a node from the trusted storage pool] ****************************************************************************************************************************************************************
task path: /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/gluster_cleanup.yml:76
<rhsqa-grafton2.lab.eng.blr.redhat.com> ESTABLISH SSH CONNECTION FOR USER: root
<rhsqa-grafton2.lab.eng.blr.redhat.com> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/db560400cd rhsqa-grafton2.lab.eng.blr.redhat.com '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
<rhsqa-grafton1.lab.eng.blr.redhat.com> ESTABLISH SSH CONNECTION FOR USER: root
<rhsqa-grafton1.lab.eng.blr.redhat.com> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/be6e0acc96 rhsqa-grafton1.lab.eng.blr.redhat.com '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
<rhsqa-grafton3.lab.eng.blr.redhat.com> ESTABLISH SSH CONNECTION FOR USER: root
<rhsqa-grafton3.lab.eng.blr.redhat.com> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/9849af7c22 rhsqa-grafton3.lab.eng.blr.redhat.com '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
<rhsqa-grafton3.lab.eng.blr.redhat.com> (0, '/root\n', '')
<rhsqa-grafton3.lab.eng.blr.redhat.com> ESTABLISH SSH CONNECTION FOR USER: root
<rhsqa-grafton1.lab.eng.blr.redhat.com> (0, '/root\n', '')
<rhsqa-grafton3.lab.eng.blr.redhat.com> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/9849af7c22 rhsqa-grafton3.lab.eng.blr.redhat.com '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1558069928.39-272588063885669 `" && echo ansible-tmp-1558069928.39-272588063885669="` echo /root/.ansible/tmp/ansible-tmp-1558069928.39-272588063885669 `" ) && sleep 0'"'"''
<rhsqa-grafton1.lab.eng.blr.redhat.com> ESTABLISH SSH CONNECTION FOR USER: root
<rhsqa-grafton1.lab.eng.blr.redhat.com> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/be6e0acc96 rhsqa-grafton1.lab.eng.blr.redhat.com '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1558069928.38-233831883893890 `" && echo ansible-tmp-1558069928.38-233831883893890="` echo /root/.ansible/tmp/ansible-tmp-1558069928.38-233831883893890 `" ) && sleep 0'"'"''
<rhsqa-grafton2.lab.eng.blr.redhat.com> (0, '/root\n', '')
<rhsqa-grafton2.lab.eng.blr.redhat.com> ESTABLISH SSH CONNECTION FOR USER: root
<rhsqa-grafton2.lab.eng.blr.redhat.com> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/db560400cd rhsqa-grafton2.lab.eng.blr.redhat.com '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1558069928.38-2639785096577 `" && echo ansible-tmp-1558069928.38-2639785096577="` echo /root/.ansible/tmp/ansible-tmp-1558069928.38-2639785096577 `" ) && sleep 0'"'"''
<rhsqa-grafton1.lab.eng.blr.redhat.com> (0, 'ansible-tmp-1558069928.38-233831883893890=/root/.ansible/tmp/ansible-tmp-1558069928.38-233831883893890\n', '')
<rhsqa-grafton3.lab.eng.blr.redhat.com> (0, 'ansible-tmp-1558069928.39-272588063885669=/root/.ansible/tmp/ansible-tmp-1558069928.39-272588063885669\n', '')
<rhsqa-grafton2.lab.eng.blr.redhat.com> (0, 'ansible-tmp-1558069928.38-2639785096577=/root/.ansible/tmp/ansible-tmp-1558069928.38-2639785096577\n', '')
Using module file /usr/lib/python2.7/site-packages/ansible/modules/storage/glusterfs/gluster_peer.py
Using module file /usr/lib/python2.7/site-packages/ansible/modules/storage/glusterfs/gluster_peer.py
Using module file /usr/lib/python2.7/site-packages/ansible/modules/storage/glusterfs/gluster_peer.py
<rhsqa-grafton1.lab.eng.blr.redhat.com> PUT /root/.ansible/tmp/ansible-local-31711IPCwRq/tmp4xnSjt TO /root/.ansible/tmp/ansible-tmp-1558069928.38-233831883893890/AnsiballZ_gluster_peer.py
<rhsqa-grafton2.lab.eng.blr.redhat.com> PUT /root/.ansible/tmp/ansible-local-31711IPCwRq/tmpVWMokp TO /root/.ansible/tmp/ansible-tmp-1558069928.38-2639785096577/AnsiballZ_gluster_peer.py
<rhsqa-grafton3.lab.eng.blr.redhat.com> PUT /root/.ansible/tmp/ansible-local-31711IPCwRq/tmpiBHwLX TO /root/.ansible/tmp/ansible-tmp-1558069928.39-272588063885669/AnsiballZ_gluster_peer.py
<rhsqa-grafton1.lab.eng.blr.redhat.com> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/be6e0acc96 '[rhsqa-grafton1.lab.eng.blr.redhat.com]'
<rhsqa-grafton2.lab.eng.blr.redhat.com> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/db560400cd '[rhsqa-grafton2.lab.eng.blr.redhat.com]'
<rhsqa-grafton3.lab.eng.blr.redhat.com> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/9849af7c22 '[rhsqa-grafton3.lab.eng.blr.redhat.com]'
<rhsqa-grafton1.lab.eng.blr.redhat.com> (0, 'sftp> put /root/.ansible/tmp/ansible-local-31711IPCwRq/tmp4xnSjt /root/.ansible/tmp/ansible-tmp-1558069928.38-233831883893890/AnsiballZ_gluster_peer.py\n', '')
<rhsqa-grafton1.lab.eng.blr.redhat.com> ESTABLISH SSH CONNECTION FOR USER: root
<rhsqa-grafton1.lab.eng.blr.redhat.com> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/be6e0acc96 rhsqa-grafton1.lab.eng.blr.redhat.com '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1558069928.38-233831883893890/ /root/.ansible/tmp/ansible-tmp-1558069928.38-233831883893890/AnsiballZ_gluster_peer.py && sleep 0'"'"''
<rhsqa-grafton3.lab.eng.blr.redhat.com> (0, 'sftp> put /root/.ansible/tmp/ansible-local-31711IPCwRq/tmpiBHwLX /root/.ansible/tmp/ansible-tmp-1558069928.39-272588063885669/AnsiballZ_gluster_peer.py\n', '')
<rhsqa-grafton3.lab.eng.blr.redhat.com> ESTABLISH SSH CONNECTION FOR USER: root
<rhsqa-grafton3.lab.eng.blr.redhat.com> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/9849af7c22 rhsqa-grafton3.lab.eng.blr.redhat.com '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1558069928.39-272588063885669/ /root/.ansible/tmp/ansible-tmp-1558069928.39-272588063885669/AnsiballZ_gluster_peer.py && sleep 0'"'"''
<rhsqa-grafton2.lab.eng.blr.redhat.com> (0, 'sftp> put /root/.ansible/tmp/ansible-local-31711IPCwRq/tmpVWMokp /root/.ansible/tmp/ansible-tmp-1558069928.38-2639785096577/AnsiballZ_gluster_peer.py\n', '')
<rhsqa-grafton2.lab.eng.blr.redhat.com> ESTABLISH SSH CONNECTION FOR USER: root
<rhsqa-grafton2.lab.eng.blr.redhat.com> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/db560400cd rhsqa-grafton2.lab.eng.blr.redhat.com '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1558069928.38-2639785096577/ /root/.ansible/tmp/ansible-tmp-1558069928.38-2639785096577/AnsiballZ_gluster_peer.py && sleep 0'"'"''
<rhsqa-grafton1.lab.eng.blr.redhat.com> (0, '', '')
<rhsqa-grafton1.lab.eng.blr.redhat.com> ESTABLISH SSH CONNECTION FOR USER: root
<rhsqa-grafton1.lab.eng.blr.redhat.com> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/be6e0acc96 -tt rhsqa-grafton1.lab.eng.blr.redhat.com '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1558069928.38-233831883893890/AnsiballZ_gluster_peer.py && sleep 0'"'"''
<rhsqa-grafton3.lab.eng.blr.redhat.com> (0, '', '')
<rhsqa-grafton3.lab.eng.blr.redhat.com> ESTABLISH SSH CONNECTION FOR USER: root
<rhsqa-grafton3.lab.eng.blr.redhat.com> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/9849af7c22 -tt rhsqa-grafton3.lab.eng.blr.redhat.com '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1558069928.39-272588063885669/AnsiballZ_gluster_peer.py && sleep 0'"'"''
<rhsqa-grafton2.lab.eng.blr.redhat.com> (0, '', '')
<rhsqa-grafton2.lab.eng.blr.redhat.com> ESTABLISH SSH CONNECTION FOR USER: root
<rhsqa-grafton2.lab.eng.blr.redhat.com> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/db560400cd -tt rhsqa-grafton2.lab.eng.blr.redhat.com '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1558069928.38-2639785096577/AnsiballZ_gluster_peer.py && sleep 0'"'"''

Comment 8 Sachidananda Urs 2019-05-17 09:24:51 UTC
Bipin, it looks like something is wrong with the environment; from the details you have posted I can't make out what the problem is.
Can you please check whether passwordless SSH works? Also, can you try with Ansible 2.8?

Comment 10 Sachidananda Urs 2019-05-21 07:11:20 UTC
(In reply to Sachidananda Urs from comment #9)
> PR: https://github.com/gluster/gluster-ansible/pull/72
> Commit:
> https://github.com/gluster/gluster-ansible/pull/72/commits/0552b0f118e71

Ignore the above PR, new PR:
https://github.com/gluster/gluster-ansible/pull/67

Comment 12 SATHEESARAN 2019-06-26 07:21:05 UTC
Tested with RHVH 4.3.5 + RHEL 7.7 + RHGS 3.4.4 (interim build - glusterfs-6.0-6) with ansible 2.8.1-1,
with:
gluster-ansible-features-1.0.5-2.el7rhgs.noarch
gluster-ansible-roles-1.0.5-2.el7rhgs.noarch
gluster-ansible-infra-1.0.4-3.el7rhgs.noarch

After running the cleanup playbook, it was observed that all the peers from that node were detached.

<snip>
TASK [Get the list of hosts to be detached] ***********************************************************************************************************************************************************************
changed: [host2.lab.eng.blr.redhat.com]

TASK [Delete a node from the trusted storage pool] ****************************************************************************************************************************************************************
changed: [host2.lab.eng.blr.redhat.com] => (item=host1.lab.eng.blr.redhat.com)
skipping: [host2.lab.eng.blr.redhat.com] => (item=host2.lab.eng.blr.redhat.com) 
changed: [host2.lab.eng.blr.redhat.com] => (item=host3.lab.eng.blr.redhat.com)
</snip>

[root@host2 ~]# gluster pe s
Number of Peers: 0
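
The fixed behavior described in the Doc Text and verified above (collect the peer list with the shell module, then detach each peer) can be sketched roughly as the following Ansible tasks. This is an illustrative sketch only, not the actual contents of gluster_cleanup.yml; the task names mirror the verified output, but the variable names and exact commands are assumptions:

```yaml
# Hypothetical sketch of the fixed cleanup logic; not the actual
# gluster_cleanup.yml. Run against one node of the storage pool.

- name: Get the list of hosts to be detached
  # Use the shell module (instead of gluster_peer) to collect the
  # hostnames of all peers in the pool; skip the header line.
  shell: gluster pool list | awk 'NR>1 {print $2}'
  register: peer_list

- name: Delete a node from the trusted storage pool
  shell: "gluster peer detach {{ item }}"
  with_items: "{{ peer_list.stdout_lines }}"
  # Skip the node the task runs on, matching the "skipping" item
  # in the verified output above.
  when: item not in ['localhost', inventory_hostname]
  ignore_errors: true
```

This matches the verified run, where the "Delete a node from the trusted storage pool" task iterates over the collected peers and skips the host it is executing on.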

Comment 16 errata-xmlrpc 2019-10-03 07:58:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2557

