Description of problem:
+++++++++++++++++++++++
In an OCP setup, both glusterfs and glusterfs_registry pods are deployed in their corresponding namespaces. There was a need to uninstall the OCS pods from the glusterfs project while keeping the pods in glusterfs_registry intact. Hence, the complete [glusterfs_registry] section was hashed out (commented) and the OCS uninstall playbook was run.

Observations:
++++++++++++++
1. The complete set of pods is uninstalled from the glusterfs project ---> expected
2. Even the glusterfs-registry pods from the glusterfs_registry project are terminated ----> not expected
3. Only the heketi and glusterblock provisioner pods remained in the glusterfs_registry namespace

Some details of pods after uninstall
++++++++++++++++++++++++++++++++++++
glusterfs_registry_namespace=infra-storage --> supposed to remain intact

Pods before uninstall
---------------------
# oc get pods -o wide -n infra-storage -w
NAME                                           READY     STATUS    RESTARTS   AGE
glusterblock-registry-provisioner-dc-1-wszcx   1/1       Running   0          2d
glusterfs-registry-7xzpt                       1/1       Running   0          2d
glusterfs-registry-8l8nk                       1/1       Running   0          2d
glusterfs-registry-lfnrt                       1/1       Running   0          2d
heketi-registry-1-nrdvx                        1/1       Running   0          2d

Pods after uninstall
--------------------
# oc get pods -n infra-storage
NAME                                           READY     STATUS    RESTARTS   AGE
glusterblock-registry-provisioner-dc-1-wszcx   1/1       Running   0          2d
heketi-registry-1-nrdvx                        1/1       Running   0          2d
[root@dhcp47-135 ~]#

Version-Release number of the following components:
+++++++++++++++++++++++++++++++++++++++++++++++++++
# rpm -q openshift-ansible
openshift-ansible-3.11.16-1.git.0.4ac6f81.el7.noarch

# rpm -q ansible
ansible-2.6.4-1.el7ae.noarch

# ansible --version
ansible 2.6.4
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, May 31 2018, 09:41:32)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]

# oc version
oc v3.11.16
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://dhcp47-135.lab.eng.blr.redhat.com:8443
openshift v3.11.16
kubernetes v1.11.0+d4cacc0

# rpm -qa | grep openshift
openshift-ansible-roles-3.11.16-1.git.0.4ac6f81.el7.noarch
atomic-openshift-docker-excluder-3.11.16-1.git.0.b48b8f8.el7.noarch
atomic-openshift-3.11.16-1.git.0.b48b8f8.el7.x86_64
openshift-ansible-playbooks-3.11.16-1.git.0.4ac6f81.el7.noarch
openshift-ansible-3.11.16-1.git.0.4ac6f81.el7.noarch
atomic-openshift-clients-3.11.16-1.git.0.b48b8f8.el7.x86_64
openshift-ansible-docs-3.11.16-1.git.0.4ac6f81.el7.noarch
atomic-openshift-excluder-3.11.16-1.git.0.b48b8f8.el7.noarch
atomic-openshift-hyperkube-3.11.16-1.git.0.b48b8f8.el7.x86_64
atomic-openshift-node-3.11.16-1.git.0.b48b8f8.el7.x86_64

How reproducible:
+++++++++++++++++
2*2

Steps to Reproduce:
1. Install OCP 3.11 with OCS 3.11 (both app-storage and glusterfs-registry).
2. In order to uninstall only one component, say glusterfs, and keep glusterfs_registry intact, hash out the entries under [glusterfs_registry].
3. Run the uninstall playbook for OCS:
   ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/uninstall.yml -e "openshift_storage_glusterfs_wipe=True"
4. Check the pod status in both storage namespaces. The glusterfs pods are terminated from the glusterfs_registry namespace as well.
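For reference, step 2 above amounts to commenting out the registry group in the Ansible inventory before running the uninstall playbook. A minimal sketch of what that looks like (hostnames and device paths are illustrative placeholders, not taken from the attached inventory):

```ini
[OSEv3:children]
masters
nodes
glusterfs
# glusterfs_registry    <-- hashed out so the registry storage cluster is skipped

[glusterfs]
app-node1.example.com glusterfs_devices='["/dev/sdb"]'
app-node2.example.com glusterfs_devices='["/dev/sdb"]'
app-node3.example.com glusterfs_devices='["/dev/sdb"]'

# [glusterfs_registry]
# infra-node1.example.com glusterfs_devices='["/dev/sdb"]'
# infra-node2.example.com glusterfs_devices='["/dev/sdb"]'
# infra-node3.example.com glusterfs_devices='["/dev/sdb"]'
```

With this inventory, only the app-storage cluster should be touched by the uninstall playbook; the bug is that the glusterfs-registry daemonset pods in infra-storage are removed as well.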
Actual results:
+++++++++++++++
Even though [glusterfs_registry] was hashed out:

# oc get pods -o wide -n infra-storage -w
NAME                                           READY     STATUS        RESTARTS   AGE
glusterblock-registry-provisioner-dc-1-wszcx   1/1       Running       0          2d
glusterfs-registry-7xzpt                       1/1       Running       0          2d
glusterfs-registry-8l8nk                       1/1       Running       0          2d
glusterfs-registry-lfnrt                       1/1       Running       0          2d
heketi-registry-1-nrdvx                        1/1       Running       0          2d
glusterfs-registry-lfnrt                       1/1       Terminating   0          2d
glusterfs-registry-lfnrt                       0/1       Terminating   0          2d
glusterfs-registry-lfnrt                       0/1       Terminating   0          2d
glusterfs-registry-lfnrt                       0/1       Terminating   0          2d
glusterfs-registry-lfnrt                       0/1       Terminating   0          2d
glusterfs-registry-8l8nk                       1/1       Terminating   0          2d
glusterfs-registry-7xzpt                       1/1       Terminating   0          2d
glusterfs-registry-8l8nk                       0/1       Terminating   0          2d
glusterfs-registry-7xzpt                       0/1       Terminating   0          2d

Expected results:
+++++++++++++++++
Only the pods within the app-storage namespace should have been uninstalled.

Additional info:
Please attach logs from ansible-playbook with the -vvv flag - Attached
Attached the inventory file.
PR submitted: https://github.com/openshift/openshift-ansible/pull/10300 This is not a release blocker.
Not a 3.11.0 release blocker; moving to 3.11.z.
PR merged.
Please help check if this bug could be verified, thanks!
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:3537