Bug 1634835 - Glusterfs-registry pods also get removed while only glusterfs is being uninstalled
Summary: Glusterfs-registry pods also get removed while only glusterfs is being uninstalled
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.11.z
Assignee: Jose A. Rivera
QA Contact: Neha Berry
URL:
Whiteboard:
Depends On:
Blocks: 1634837
 
Reported: 2018-10-01 18:48 UTC by Neha Berry
Modified: 2018-11-20 03:11 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1634837 (view as bug list)
Environment:
Last Closed: 2018-11-20 03:10:46 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2018:3537 0 None None None 2018-11-20 03:11:30 UTC

Description Neha Berry 2018-10-01 18:48:05 UTC
Description of problem:
+++++++++++++++++++++++

In an OCP setup, both glusterfs and glusterfs_registry pods are deployed in their corresponding namespaces. There was a need to uninstall the OCS pods from the glusterfs project while keeping the pods in glusterfs_registry intact. Hence, the complete [glusterfs_registry] section was hashed out in the inventory and the OCS uninstall playbook was run.
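
For reference, a minimal sketch of the inventory edit described above (host names and device paths are placeholders, not the actual entries from the attached inventory): the [glusterfs] group is left active and the whole [glusterfs_registry] section is hashed out.

[glusterfs]
node1.example.com glusterfs_devices='[ "/dev/sdd" ]'
node2.example.com glusterfs_devices='[ "/dev/sdd" ]'
node3.example.com glusterfs_devices='[ "/dev/sdd" ]'

#[glusterfs_registry]
#infra1.example.com glusterfs_devices='[ "/dev/sdd" ]'
#infra2.example.com glusterfs_devices='[ "/dev/sdd" ]'
#infra3.example.com glusterfs_devices='[ "/dev/sdd" ]'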

Observations:
++++++++++++++

1. The complete set of pods is uninstalled from the glusterfs project ---> expected
2. Even the glusterfs-registry pods from the glusterfs_registry project are terminated ---> not expected
3. Only the heketi and glusterblock provisioner pods remained in the glusterfs_registry namespace

Some details of pods after uninstall
++++++++++++++++++++++++++++++++++++


glusterfs_registry_namespace=infra-storage  -->supposed to remain intact
----------------------------------------

Pods before uninstall
-------------

# oc get pods -o wide -n infra-storage -w
NAME                                           READY     STATUS    RESTARTS   AGE      
glusterblock-registry-provisioner-dc-1-wszcx   1/1       Running   0          2d        
glusterfs-registry-7xzpt                       1/1       Running   0          2d       
glusterfs-registry-8l8nk                       1/1       Running   0          2d
glusterfs-registry-lfnrt                       1/1       Running   0          2d    
heketi-registry-1-nrdvx                        1/1       Running   0          2d   


Pods after uninstall
-------------------

# oc get pods  -n infra-storage 
NAME                                           READY     STATUS    RESTARTS   AGE
glusterblock-registry-provisioner-dc-1-wszcx   1/1       Running   0          2d
heketi-registry-1-nrdvx                        1/1       Running   0          2d
[root@dhcp47-135 ~]# 




Version-Release number of the following components:
+++++++++++++++++++++++++++++++++++++++++
# rpm -q openshift-ansible
openshift-ansible-3.11.16-1.git.0.4ac6f81.el7.noarch

# rpm -q ansible
ansible-2.6.4-1.el7ae.noarch

# ansible --version
ansible 2.6.4
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, May 31 2018, 09:41:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]

# oc version
oc v3.11.16
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://dhcp47-135.lab.eng.blr.redhat.com:8443
openshift v3.11.16
kubernetes v1.11.0+d4cacc0

# rpm -qa|grep openshift
openshift-ansible-roles-3.11.16-1.git.0.4ac6f81.el7.noarch
atomic-openshift-docker-excluder-3.11.16-1.git.0.b48b8f8.el7.noarch
atomic-openshift-3.11.16-1.git.0.b48b8f8.el7.x86_64
openshift-ansible-playbooks-3.11.16-1.git.0.4ac6f81.el7.noarch
openshift-ansible-3.11.16-1.git.0.4ac6f81.el7.noarch
atomic-openshift-clients-3.11.16-1.git.0.b48b8f8.el7.x86_64
openshift-ansible-docs-3.11.16-1.git.0.4ac6f81.el7.noarch
atomic-openshift-excluder-3.11.16-1.git.0.b48b8f8.el7.noarch
atomic-openshift-hyperkube-3.11.16-1.git.0.b48b8f8.el7.x86_64
atomic-openshift-node-3.11.16-1.git.0.b48b8f8.el7.x86_64
# 


How reproducible:
++++++++++++
2*2

Steps to Reproduce:
1. Install OCP 3.11 with OCS 3.11 (both app-storage and glusterfs-registry)
2. In order to uninstall only one component, say glusterfs, and keep glusterfs_registry intact, hash out the entries under [glusterfs_registry]
3. Run the uninstall playbook for OCS: ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/uninstall.yml -e "openshift_storage_glusterfs_wipe=True"
4. Check the pod status in both storage namespaces (a quick check is sketched below). The glusterfs pods are terminated from the glusterfs_registry namespace as well.
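
For step 4, the pods in both storage namespaces can be listed (namespace names as used in this setup: app-storage for glusterfs, infra-storage for glusterfs_registry):

# oc get pods -n app-storage
# oc get pods -n infra-storage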



Actual results:
+++++++++++++++

Even though [glusterfs_registry] was hashed out
---------------

# oc get pods -o wide -n infra-storage -w
NAME                                           READY     STATUS    RESTARTS   AGE       
glusterblock-registry-provisioner-dc-1-wszcx   1/1       Running   0          2d       
glusterfs-registry-7xzpt                       1/1       Running   0          2d       
glusterfs-registry-8l8nk                       1/1       Running   0          2d       
glusterfs-registry-lfnrt                       1/1       Running   0          2d       
heketi-registry-1-nrdvx                        1/1       Running   0          2d       
glusterfs-registry-lfnrt   1/1       Terminating   0         2d      
glusterfs-registry-lfnrt   0/1       Terminating   0         2d      
glusterfs-registry-lfnrt   0/1       Terminating   0         2d      
glusterfs-registry-lfnrt   0/1       Terminating   0         2d      
glusterfs-registry-lfnrt   0/1       Terminating   0         2d      
glusterfs-registry-8l8nk   1/1       Terminating   0         2d      
glusterfs-registry-7xzpt   1/1       Terminating   0         2d      
glusterfs-registry-8l8nk   0/1       Terminating   0         2d      
glusterfs-registry-7xzpt   0/1       Terminating   0         2d      



Expected results:
++++++++++++

Only the pods within the app-storage namespace should have been uninstalled.

Additional info:
Please attach logs from ansible-playbook with the -vvv flag - Attached
The inventory file is also attached

Comment 3 Jose A. Rivera 2018-10-02 15:47:32 UTC
PR submitted: https://github.com/openshift/openshift-ansible/pull/10300

This is not a release blocker.

Comment 4 N. Harrison Ripps 2018-10-03 18:31:35 UTC
Not a 3.11.0 release blocker; moving to 3.11.z.

Comment 5 Jose A. Rivera 2018-10-09 21:43:21 UTC
PR merged.

Comment 7 Wei Sun 2018-11-06 05:47:37 UTC
Please help check if this bug could be verified, thanks!

Comment 10 errata-xmlrpc 2018-11-20 03:10:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3537

