Bug 2039359 - `oc adm prune deployments` can't prune the RS where the associated Deployment no longer exists
Summary: `oc adm prune deployments` can't prune the RS where the associated Deployment no longer exists
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oc
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.10.0
Assignee: Ross Peoples
QA Contact: zhou ying
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-01-11 15:22 UTC by zhou ying
Modified: 2022-03-11 18:15 UTC
CC: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-03-11 18:15:11 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
Github openshift/oc pull 1030 (open): Bug 2039359: Fix adm prune rs orphans, last updated 2022-01-19 19:07:17 UTC

Description zhou ying 2022-01-11 15:22:27 UTC
Description of problem:
`oc adm prune deployments` can't prune the RS where the associated Deployment no longer exists

Version-Release number of selected component (if applicable):
[root@localhost ~]# oc version --client
Client Version: 4.10.0-202201102246.p0.g293b52e.assembly.stream-293b52e

How reproducible:
always

Steps to Reproduce:
1. Create a Deployment;
2. Trigger the Deployment more than 7 times, with both successful and failed rollouts (see the loop sketch after these steps);
3. List the RSs:

4. Delete the Deployment with --cascade='orphan', leaving the RSs behind as orphaned objects:
[root@localhost ~]# oc delete deploy/hello-openshift --cascade='orphan'
deployment.apps "hello-openshift" deleted

[root@localhost ~]# oc get rs --sort-by='{.metadata.creationTimestamp}'
NAME                         DESIRED   CURRENT   READY   AGE
hello-openshift-b97f9584b    0         0         0       15m
hello-openshift-64746b5c7b   0         0         0       11m
hello-openshift-f58c964df    0         0         0       9m20s
hello-openshift-54c7d9b5fd   0         0         0       7m21s
hello-openshift-768995fdcf   0         0         0       6m12s
hello-openshift-9cc9fd576    15        15        15      4m48s
hello-openshift-67b855889f   0         0         0       3m34s
hello-openshift-668b4c596    10        10        0       3m8s
[root@localhost ~]# oc get deploy
No resources found in zhout namespace.

5. Prune the orphaned RSs with --orphans=true;
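
For step 2, one way to force repeated rollouts so that several ReplicaSets accumulate (illustrative sketch only; the TRIGGER variable and iteration count are assumptions, not taken from the original report):

# Each env change creates a new ReplicaSet; '|| true' keeps the loop going
# even when a rollout fails, so both succeeded and failed revisions pile up.
for i in $(seq 1 8); do
  oc set env deploy/hello-openshift TRIGGER="run-${i}"
  oc rollout status deploy/hello-openshift --timeout=60s || true
done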

Actual results:
5. Can't prune the orphaned RSs:
[root@localhost ~]# oc adm prune deployments  --replica-sets=true  --orphans=true --keep-younger-than=1m  --keep-complete=0
Dry run enabled - no modifications will be made. Add --confirm to remove deployments
NAMESPACE              NAME
openshift-monitoring   prometheus-adapter-7d544b4c9c
[root@localhost ~]# oc get rs
NAME                         DESIRED   CURRENT   READY   AGE
hello-openshift-54c7d9b5fd   0         0         0       9m51s
hello-openshift-64746b5c7b   0         0         0       14m
hello-openshift-668b4c596    10        10        0       5m38s
hello-openshift-67b855889f   0         0         0       6m4s
hello-openshift-768995fdcf   0         0         0       8m42s
hello-openshift-9cc9fd576    15        15        15      7m18s
hello-openshift-b97f9584b    0         0         0       17m
hello-openshift-f58c964df    0         0         0       11m


Expected results:
5. Should prune the orphaned RSs, in the same way orphaned RCs are pruned.

Additional info:
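For context only (this is not the actual oc implementation): when a Deployment is deleted with --cascade='orphan', the garbage collector strips the Deployment ownerReferences from its ReplicaSets, so an orphaned RS can be recognized as one that no longer has an owning Deployment. A rough jq sketch of that check (namespace taken from the reproduction above; requires jq):

oc get rs -n zhout -o json | jq -r '
  .items[]
  | select(([.metadata.ownerReferences[]? | select(.kind == "Deployment")] | length) == 0)
  | .metadata.name'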

Comment 1 Ross Peoples 2022-01-14 14:52:23 UTC
Can you try this again with the latest oc? There was a recent PR that fixes many prune issues.

Comment 3 zhou ying 2022-01-19 09:12:51 UTC
Still reproducible with the latest oc version:

[root@localhost out]# oc version --client
Client Version: 4.10.0-202201190323.p0.g05f6cd4.assembly.stream-05f6cd4

[root@localhost out]# oc adm prune deployments  --replica-sets=true  --orphans=true --keep-younger-than=1m  --keep-complete=0 --confirm
NAMESPACE                   NAME
openshift-monitoring        prometheus-adapter-7b9ff4d8c
openshift-monitoring        prometheus-adapter-865f7f4486
openshift-monitoring        prometheus-adapter-7845f596d4
openshift-monitoring        prometheus-adapter-84d8844484
openshift-monitoring        prometheus-adapter-7d588b76cb
openshift-monitoring        prometheus-adapter-7c95648fc6
openshift-monitoring        prometheus-adapter-bcd746c4
openshift-monitoring        prometheus-adapter-79798985dc
openshift-monitoring        prometheus-adapter-6fc78c4b5
openshift-monitoring        prometheus-adapter-68ffcbfb56
openshift-oauth-apiserver   apiserver-7dc577f4df
openshift-oauth-apiserver   apiserver-78fffb5c5b
openshift-image-registry    image-registry-677f7cddd4
openshift-image-registry    image-registry-84ff564788
openshift-apiserver         apiserver-6bc7749666
openshift-apiserver         apiserver-7bc5bd5bb6
openshift-apiserver         apiserver-6dcd4699c9
openshift-apiserver         apiserver-cd7796f7b
openshift-apiserver         apiserver-644b8845c
openshift-console           console-96bc8bf7
openshift-authentication    oauth-openshift-77b8c84f84
openshift-authentication    oauth-openshift-7ddfcb8c5d
openshift-authentication    oauth-openshift-65ff554f56
[root@localhost out]# oc get rs
NAME               DESIRED   CURRENT   READY   AGE
mydep-577c59f8d    0         0         0       25m
mydep-589c48545c   0         0         0       24m
mydep-75fd7468d4   0         0         0       18m
mydep-789855b6cb   0         0         0       26m
mydep-7f7f5f9bf7   0         0         0       18m
mydep-98584d9cf    15        15        15      24m
mydep-c4844b4b6    0         0         0       24m
mydep-d7bd599f7    10        10        0       18m

Comment 7 zhou ying 2022-01-30 03:59:41 UTC
The issue has been fixed:

[root@localhost autoinfo]# oc version --client
Client Version: 4.10.0-202201281850.p0.g7c299f1.assembly.stream-7c299f1


[root@localhost autoinfo]# oc delete  deploy/mytest --cascade='orphan'
deployment.apps "mytest" deleted

[root@localhost autoinfo]# oc get rs --sort-by='{.metadata.creationTimestamp}'
NAME                DESIRED   CURRENT   READY   AGE
mytest-6d5fdcfd9f   0         0         0       8m1s
mytest-6d5b995b5    0         0         0       7m24s
mytest-76976fc7dd   0         0         0       7m20s
mytest-7b87847f5f   0         0         0       7m18s
mytest-6bd76f6d78   0         0         0       7m14s
mytest-6cb7d4fd7c   15        15        15      7m11s
mytest-7dcc47f77f   10        10        0       6m34s

[root@localhost openshift-tests-private]# oc adm prune deployments  --replica-sets=true  --orphans=true --keep-younger-than=1m  --keep-complete=0 --confirm
NAMESPACE              NAME
zhouyt12               mytest-6bd76f6d78
zhouyt12               mytest-6d5b995b5
zhouyt12               mytest-6d5fdcfd9f
zhouyt12               mytest-76976fc7dd
zhouyt12               mytest-7b87847f5f
openshift-monitoring   prometheus-adapter-68b989fcc4
openshift-monitoring   prometheus-adapter-689898487c
openshift-monitoring   prometheus-adapter-7968b6fb4c


[root@localhost autoinfo]# oc get rs 
NAME                DESIRED   CURRENT   READY   AGE
mytest-6cb7d4fd7c   15        15        15      14m
mytest-7dcc47f77f   10        10        0       13m

