Bug 1651311

Summary: OCS uninstall removes the /etc/glusterfs and /var/lib/glusterd directories, causing reinstall to fail
Product: OpenShift Container Platform
Reporter: Manisha Saini <msaini>
Component: Installer
Assignee: Jose A. Rivera <jrivera>
Installer sub component: openshift-ansible
QA Contact: Manisha Saini <msaini>
Status: CLOSED CURRENTRELEASE
Docs Contact:
Severity: medium
Priority: unspecified
CC: gpei, jrivera
Version: 3.9.0
Target Milestone: ---
Target Release: 3.9.z
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Cloned As: 1652809 (view as bug list)
Environment:
Last Closed: 2019-08-07 19:16:09 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On:
Bug Blocks: 1652809

Description Manisha Saini 2018-11-19 16:27:55 UTC
Description of problem:

While running the ansible openshift-glusterfs uninstall playbook, the OCS uninstall removes gluster-related files and directories such as:

/etc/glusterfs/glusterd.vol
/var/lib/glusterd

As a result, a subsequent openshift-glusterfs reinstall fails because of these missing files and directories. If the user then wants to reinstall OCS on these nodes, or use them as a standalone gluster cluster, they have to copy these files manually from other nodes; otherwise the gluster setup fails.

As part of the OCS install, gluster packages are installed on all the nodes, which creates these working directories that the gluster cluster needs in order to work.

[root@dhcp47-67 glusterfs]# ls /etc/glusterfs
ls: cannot access /etc/glusterfs: No such file or directory

[root@dhcp47-67 glusterfs]# ls /var/lib/glusterd
ls: cannot access /var/lib/glusterd: No such file or directory
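The manual recovery the report alludes to (copying the working directories back from a surviving node) can be sketched locally. This is only an illustration: `$peer` stands in for data fetched from a healthy peer (e.g. via scp or rsync), `$node` stands in for the broken node's filesystem, and the file contents are placeholders.

```shell
# Simulate restoring the removed gluster directories from a copy taken
# off another node. All paths and contents here are illustrative.
set -eu
peer=$(mktemp -d); node=$(mktemp -d)

# Pretend $peer holds the intact config from a healthy node:
mkdir -p "$peer/etc/glusterfs" "$peer/var/lib/glusterd"
echo "volume management" > "$peer/etc/glusterfs/glusterd.vol"

# Restore the directories on the broken node, preserving attributes:
mkdir -p "$node/etc" "$node/var/lib"
cp -a "$peer/etc/glusterfs" "$node/etc/"
cp -a "$peer/var/lib/glusterd" "$node/var/lib/"

ls "$node/etc/glusterfs"    # glusterd.vol is back in place
```

In the real scenario the copy would come over the network (scp/rsync from a peer), which is exactly the manual step the bug argues users should not be forced into.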


Version-Release number of selected component (if applicable):

# rpm -q openshift-ansible
openshift-ansible-3.9.43-1.git.0.d0bc600.el7.noarch

# rpm -qa | grep ansible
openshift-ansible-playbooks-3.9.43-1.git.0.d0bc600.el7.noarch
openshift-ansible-3.9.43-1.git.0.d0bc600.el7.noarch
openshift-ansible-roles-3.9.43-1.git.0.d0bc600.el7.noarch
ansible-2.4.6.0-1.el7ae.noarch
openshift-ansible-docs-3.9.43-1.git.0.d0bc600.el7.noarch


# oc version
oc v3.9.43
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://dhcp46-169.lab.eng.blr.redhat.com:8443
openshift v3.9.43
kubernetes v1.9.1+a0ce1bc657

# rpm -qa|grep openshift
openshift-ansible-playbooks-3.9.43-1.git.0.d0bc600.el7.noarch
atomic-openshift-master-3.9.43-1.git.0.7ad1066.el7.x86_64
atomic-openshift-utils-3.9.43-1.git.0.d0bc600.el7.noarch
atomic-openshift-docker-excluder-3.9.43-1.git.0.7ad1066.el7.noarch
atomic-openshift-sdn-ovs-3.9.43-1.git.0.7ad1066.el7.x86_64
openshift-ansible-3.9.43-1.git.0.d0bc600.el7.noarch
openshift-ansible-roles-3.9.43-1.git.0.d0bc600.el7.noarch
atomic-openshift-clients-3.9.43-1.git.0.7ad1066.el7.x86_64
atomic-openshift-node-3.9.43-1.git.0.7ad1066.el7.x86_64
openshift-ansible-docs-3.9.43-1.git.0.d0bc600.el7.noarch
atomic-openshift-excluder-3.9.43-1.git.0.7ad1066.el7.noarch
atomic-openshift-3.9.43-1.git.0.7ad1066.el7.x86_64



How reproducible:
2/2


Steps to Reproduce:
1. Install OCP and OCS using ansible playbooks.
2. Check that pods are up and running post-installation.
3. Uninstall openshift-glusterfs using ansible playbooks:

ansible-playbook -vvv -i independent_ocp_ocs_inv /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/uninstall.yml

Actual results:

Uninstalling openshift-glusterfs removes gluster-related directories such as:

/etc/glusterfs/
/var/lib/glusterd


Expected results:

The uninstall should not remove the gluster-related directories themselves; it should only clean up the contents inside them.
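The expected behavior can be sketched as follows. This is not the actual playbook fix (see the PR in comment 3); it is a minimal local demonstration, using a scratch directory to stand in for /etc/glusterfs, of emptying a directory while leaving the directory itself in place.

```shell
# Sketch of "clean up the contents, keep the directory":
set -eu
dir=$(mktemp -d)                 # stands in for /etc/glusterfs
touch "$dir/glusterd.vol"        # illustrative content
mkdir -p "$dir/some/subdir"

# Delete everything inside the directory, but not the directory itself:
find "$dir" -mindepth 1 -delete

ls -A "$dir"    # directory still exists, now empty
```

With `-mindepth 1`, `find` never matches the top-level directory, so `-delete` removes only its contents; a subsequent reinstall then finds the directory it expects.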

Additional info:



Comment 3 Jose A. Rivera 2018-11-19 19:46:20 UTC
Use of the uninstall playbook for external GlusterFS clusters is currently unsupported. That said, I've submitted a PR to try to address this, but I don't have the ability to test it right now. If you could run it through its paces in 3.11, that would help move it along.

https://github.com/openshift/openshift-ansible/pull/10724

Comment 4 Yaniv Kaul 2019-04-02 14:16:49 UTC
The PR above is merged; why is this bug on POST?
What's the next step?

Comment 5 Jose A. Rivera 2019-08-07 19:16:09 UTC
This should already be fixed, closing.