Description of problem:
By default, not specifying deletion options (such as propagation_policy="Foreground") orphans Pods and ReplicationControllers when DeploymentConfigs are deleted. This is problematic because our APBs delete DeploymentConfigs on deprovision.

How reproducible: Always

Steps to Reproduce (from https://bugzilla.redhat.com/show_bug.cgi?id=1503523):
1. Set up a cluster with the service catalog and ASB installed.
2. Provision MySQL (APB) through the web UI.
3. Deprovision it once the provision has finished.

Actual results (from https://bugzilla.redhat.com/show_bug.cgi?id=1503523):
The resources created by provision are not deleted:

```
# oc get all -n wmeng1
NAME                      REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfigs/mysql   1          1         1         config

NAME               READY     STATUS    RESTARTS   AGE
po/mysql-1-xxpzz   1/1       Running   1          2h

NAME         DESIRED   CURRENT   READY     AGE
rc/mysql-1   1         1         1         2h

NAME        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
svc/mysql   172.30.61.92   <none>        3306/TCP   2h

# oc get serviceinstance -n wmeng1
No resources found.
```

Expected results (from https://bugzilla.redhat.com/show_bug.cgi?id=1503523):
The resources created by provision should be deleted.
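For illustration, a minimal sketch of the DeleteOptions request body that a cascading (foreground) delete sends to the API server. This is a hand-built dict, not the client library's code; the field names follow the Kubernetes meta/v1 DeleteOptions schema, which is what `V1DeleteOptions(propagation_policy='Foreground')` serializes to. Without such a body, the server's default policy for DeploymentConfigs leaves the dependent ReplicationControllers and Pods behind.

```python
import json

def delete_options(propagation_policy="Foreground"):
    """Build a DeleteOptions body equivalent to what
    V1DeleteOptions(propagation_policy=...) serializes to.
    With "Foreground", the API server deletes dependents
    (RCs, Pods) before removing the owning object."""
    return {
        "apiVersion": "v1",
        "kind": "DeleteOptions",
        "propagationPolicy": propagation_policy,
    }

# The body sent alongside the DELETE request on deprovision:
print(json.dumps(delete_options()))
```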
https://github.com/openshift/openshift-restclient-python/pull/114
*** Bug 1503523 has been marked as a duplicate of this bug. ***
*** Bug 1507368 has been marked as a duplicate of this bug. ***
*** Bug 1495503 has been marked as a duplicate of this bug. ***
@David We found that the fix in PR #114 above has not been picked up by the apb-base image yet. Could you help update it? Thanks a lot! :)

Debug details:

```
[root@localhost jian]# docker images | grep apb-base
docker.io/ansibleplaybookbundle/apb-base   latest   377ef3cc1271   3 days ago   650.7 MB
[root@localhost jian]# docker run -it --entrypoint=/bin/bash docker.io/ansibleplaybookbundle/apb-base
bash-4.2# cd /usr/lib/python2.7/site-packages
bash-4.2# vi openshift/helper/base.py +278
```
@jiazha, if you are looking at the upstream Docker images, you'll need to use the canary tag until we cut another release for all of our images. I don't see our mysql-apb image in registry.access.stage.redhat.com (and the images I do see there are not updated). What I do see is that the latest downstream images we have built have the changes:

```
$ docker run -it --entrypoint /bin/bash brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/mysql-apb:v3.7
bash-4.2$ cd /usr/lib/python2.7/site-packages/openshift/helper/
bash-4.2$ cat base.py
    def delete_object(self, name, namespace):
        self.logger.debug('Starting delete object {0} {1} {2}'.format(self.kind, name, namespace))
        delete_method = self.lookup_method('delete', namespace)
        if not namespace:
            try:
                if 'body' in inspect.getargspec(delete_method).args:
                    status_obj = delete_method(name, body=V1DeleteOptions(propagation_policy='Foreground'))
                ...
        else:
            try:
                if 'body' in inspect.getargspec(delete_method).args:
                    status_obj = delete_method(name, namespace, body=V1DeleteOptions(propagation_policy='Foreground'))
                ...
        self._wait_for_response(name, namespace, 'delete')
```
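The pattern in the fix above (introspecting the generated delete method and passing a DeleteOptions body only if the method accepts one) can be sketched in isolation. This is a stand-alone illustration, not the library's exact code: the two `delete_*` stub functions are hypothetical stand-ins for generated client methods, and it uses `inspect.signature` (the Python 3 equivalent of the `inspect.getargspec` call in the Python 2.7 code shown above).

```python
import inspect

# Hypothetical stand-ins for generated client delete methods: one
# accepts a DeleteOptions body, one does not.
def delete_with_body(name, body=None):
    return ("deleted", name, body)

def delete_without_body(name):
    return ("deleted", name, None)

def safe_delete(delete_method, name, options):
    """Pass delete options only when the target method accepts a
    `body` parameter, mirroring the fix's getargspec check."""
    if "body" in inspect.signature(delete_method).parameters:
        return delete_method(name, body=options)
    return delete_method(name)

opts = {"propagationPolicy": "Foreground"}
print(safe_delete(delete_with_body, "mysql", opts))
print(safe_delete(delete_without_body, "mysql", opts))
```

The introspection keeps the call compatible with older generated clients whose delete methods take no body argument, while still requesting cascading deletion wherever it is supported.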
jian zhang, we need to wait for the RHCC image (not the Docker Hub one) to be ready so it can be tested through the formal test process. Changing the bug status to MODIFIED while we wait for the image.
@jiazha
1) I'll keep an eye on this and move it back to ON_QA when the stage registry images are updated.
2) For the brew registry, you'll have to add it as an insecure registry, most likely in /etc/sysconfig/docker, and restart Docker before you can pull those images. I can see that our images have been updated (output below), but I don't have control over when those images are synced with the stage registry. I'll keep an eye out.

```
$ docker run -it --entrypoint /bin/bash brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/mysql-apb:v3.7
bash-4.2$ cat /usr/lib/python2.7/site-packages/openshift/helper/base.py | grep 'V1D'
from kubernetes.client.models import V1DeleteOptions
                    status_obj = delete_method(name, body=V1DeleteOptions(propagation_policy='Foreground'))
                    status_obj = delete_method(name, namespace, body=V1DeleteOptions(propagation_policy='Foreground'))

$ docker run -it --entrypoint /bin/bash brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/postgresql-apb:v3.7
bash-4.2$ cat /usr/lib/python2.7/site-packages/openshift/helper/base.py | grep 'V1D'
from kubernetes.client.models import V1DeleteOptions
                    status_obj = delete_method(name, body=V1DeleteOptions(propagation_policy='Foreground'))
                    status_obj = delete_method(name, namespace, body=V1DeleteOptions(propagation_policy='Foreground'))

$ docker run -it --entrypoint /bin/bash brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/mediawiki-apb:v3.7
bash-4.2$ cat /usr/lib/python2.7/site-packages/openshift/helper/base.py | grep 'V1D'
from kubernetes.client.models import V1DeleteOptions
                    status_obj = delete_method(name, body=V1DeleteOptions(propagation_policy='Foreground'))
                    status_obj = delete_method(name, namespace, body=V1DeleteOptions(propagation_policy='Foreground'))

$ docker run -it --entrypoint /bin/bash brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/mariadb-apb:v3.7
bash-4.2$ cat /usr/lib/python2.7/site-packages/openshift/helper/base.py | grep 'V1D'
from kubernetes.client.models import V1DeleteOptions
                    status_obj = delete_method(name, body=V1DeleteOptions(propagation_policy='Foreground'))
                    status_obj = delete_method(name, namespace, body=V1DeleteOptions(propagation_policy='Foreground'))
```
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2017:3188