Bug 1684368 - `oc adm prune deployments` could not delete the deployer pod
Summary: `oc adm prune deployments` could not delete the deployer pod
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oc
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Target Release: 4.1.0
Assignee: Maciej Szulik
QA Contact: zhou ying
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-03-01 05:57 UTC by zhou ying
Modified: 2019-06-04 10:44 UTC
3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: With the new GC deletion mechanism, the `oc adm prune deployments` command did not set the deletion propagation policy properly. Consequence: The deployer pod was not removed. Fix: Set the appropriate deletion options. Result: `oc adm prune deployments` correctly removes all of its dependents.
Clone Of:
Environment:
Last Closed: 2019-06-04 10:44:51 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 None None None 2019-06-04 10:44:57 UTC

Description zhou ying 2019-03-01 05:57:31 UTC
Description of problem:
Running the command `oc adm prune deployments` deletes only the RC; the related deployer pod is not deleted.

Version-Release number of selected component (if applicable):
Client Version: v4.0.6
Server Version: v1.12.4+0cbcfc5afe

How reproducible:
Always

Steps to Reproduce:
1. Create a project, and create an app;
   `oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git`
2. Roll out more than 5 deployments:
   `oc rollout latest dc/ruby-ex`
3. Use the command: 
   `oc adm prune deployments --keep-complete=1 --keep-younger-than=10m  --loglevel=6 --confirm`

Actual results:
3. Only the RCs are deleted; the deployer pods still exist:
[root@preserve-master-yinzhou ~]# oc adm prune deployments --keep-complete=1 --keep-younger-than=10m  --loglevel=6 --confirm
I0301 00:21:12.723659   16584 loader.go:359] Config loaded from file /root/0221/zhouy/auth/kubeconfig
I0301 00:21:12.934121   16584 round_trippers.go:405] GET https://api.qe-yinzhou.qe.devcluster.openshift.com:6443/apis/apps.openshift.io/v1/deploymentconfigs 200 OK in 209 milliseconds
I0301 00:21:12.969963   16584 round_trippers.go:405] GET https://api.qe-yinzhou.qe.devcluster.openshift.com:6443/api/v1/replicationcontrollers 200 OK in 28 milliseconds
I0301 00:21:12.997500   16584 prune.go:54] Creating deployment pruner with keepYoungerThan=10m0s, orphans=false, keepComplete=1, keepFailed=1
I0301 00:21:12.997593   16584 prune.go:113] Deleting deployment "ruby-ex-9"
I0301 00:21:13.030478   16584 round_trippers.go:405] DELETE https://api.qe-yinzhou.qe.devcluster.openshift.com:6443/api/v1/namespaces/zhouy/replicationcontrollers/ruby-ex-9 200 OK in 32 milliseconds
I0301 00:21:13.030880   16584 prune.go:113] Deleting deployment "ruby-ex-7"
I0301 00:21:13.063955   16584 round_trippers.go:405] DELETE https://api.qe-yinzhou.qe.devcluster.openshift.com:6443/api/v1/namespaces/zhouy/replicationcontrollers/ruby-ex-7 200 OK in 33 milliseconds
NAMESPACE   NAME
zhouy       ruby-ex-9
zhouy       ruby-ex-7

[root@dhcp-140-138 ~]# oc get po 
NAME                READY   STATUS             RESTARTS   AGE
ruby-ex-1-build     0/1     Completed          0          146m
ruby-ex-10-deploy   0/1     DeadlineExceeded   0          120m
ruby-ex-11-deploy   0/1     Completed          0          29m
ruby-ex-12-4w9sb    1/1     Running            0          25m
ruby-ex-12-deploy   0/1     Completed          0          26m
ruby-ex-3-deploy    0/1     Completed          0          138m
ruby-ex-4-deploy    0/1     Completed          0          136m
ruby-ex-5-deploy    0/1     Completed          0          129m
ruby-ex-6-deploy    0/1     Completed          0          122m
ruby-ex-7-deploy    0/1     Completed          0          122m
ruby-ex-9-deploy    0/1     Completed          0          121m
[root@dhcp-140-138 ~]# oc get rc
NAME         DESIRED   CURRENT   READY   AGE
ruby-ex-10   0         0         0       121m
ruby-ex-11   0         0         0       29m
ruby-ex-12   1         1         1       26m


Expected results:
3. The RC and its related deployer pod should be deleted at the same time.

Additional info:
When using `oc adm prune builds`, the build and its builder pod are deleted at the same time.
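The behavior reported here matches what happens when a delete request effectively orphans dependents: the RC is removed, but the deployer pod it owns survives. A toy, stdlib-only Python model of owner-reference garbage collection (the `prune` helper and the in-memory `cluster` list are illustrative assumptions, not OpenShift code; the real GC runs server-side):

```python
# Toy model of Kubernetes garbage collection via ownerReferences.
# Illustration only; the real cascade is performed by the API server's
# garbage collector based on the propagationPolicy of the delete request.

def prune(objects, name, propagation="Orphan"):
    """Delete the object called `name` from the list.

    With a cascading policy ("Background" or "Foreground"), dependents
    (objects whose owner is `name`) are deleted too. With "Orphan" they
    are left behind, which mirrors the buggy behavior in this report.
    """
    survivors = [o for o in objects if o["name"] != name]
    if propagation in ("Background", "Foreground"):
        survivors = [o for o in survivors if o.get("owner") != name]
    return survivors

cluster = [
    {"name": "ruby-ex-9"},                              # the RC
    {"name": "ruby-ex-9-deploy", "owner": "ruby-ex-9"}, # its deployer pod
]
```

Pruning with the orphaning behavior leaves `ruby-ex-9-deploy` behind, exactly as shown in the `oc get po` output above; a cascading policy removes both objects.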

Comment 1 Maciej Szulik 2019-03-01 20:57:41 UTC
Fix in https://github.com/openshift/origin/pull/22211
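The knob such a fix turns is the `DeleteOptions` body sent with the DELETE request: the Kubernetes API accepts a `propagationPolicy` of `Orphan`, `Background`, or `Foreground` there. A minimal stdlib-only sketch of building that body (the helper name and the `Background` default are illustrative assumptions, not the actual code from the linked PR):

```python
import json

def delete_options(propagation="Background"):
    """Build the DeleteOptions body for a Kubernetes DELETE request.

    If propagationPolicy is left unset (or set to Orphan), dependents
    such as deployer pods can be left behind after the owner is deleted.
    """
    return json.dumps({
        "kind": "DeleteOptions",
        "apiVersion": "v1",
        "propagationPolicy": propagation,
    })
```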

Comment 2 zhou ying 2019-04-02 02:11:56 UTC
Confirmed with the latest OCP; the issue has been fixed:
[zhouying@dhcp-140-138 extended]$ oc version --short
Client Version: v4.0.22
Server Version: v1.12.4+87e98f4
Payload: 4.0.0-0.nightly-2019-03-28-030453

[zhouying@dhcp-140-138 extended]$ oc adm prune deployments --keep-complete=1 --keep-younger-than=1m   --confirm
NAMESPACE   NAME
zhouyt      ruby-ex-5
zhouyt      ruby-ex-4
zhouyt      ruby-ex-3
zhouyt      ruby-ex-2
zhouyt      ruby-ex-1
[zhouying@dhcp-140-138 extended]$ oc get po 
NAME               READY   STATUS      RESTARTS   AGE
ruby-ex-3-build    0/1     Completed   0          25m
ruby-ex-4-build    0/1     Completed   0          23m
ruby-ex-5-build    0/1     Completed   0          19m
ruby-ex-6-build    0/1     Completed   0          18m
ruby-ex-6-deploy   0/1     Completed   0          16m
ruby-ex-7-build    0/1     Completed   0          14m
ruby-ex-7-deploy   0/1     Completed   0          13m
ruby-ex-7-ggrdx    1/1     Running     0          13m
[zhouying@dhcp-140-138 extended]$ oc get rc
NAME        DESIRED   CURRENT   READY   AGE
ruby-ex-6   0         0         0       16m
ruby-ex-7   1         1         1       13m

Comment 4 errata-xmlrpc 2019-06-04 10:44:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

