Bug 2039359
Summary: | `oc adm prune deployments` can't prune the RS where the associated Deployment no longer exists | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | zhou ying <yinzhou>
Component: | oc | Assignee: | Ross Peoples <rpeoples>
oc sub component: | oc | QA Contact: | zhou ying <yinzhou>
Status: | CLOSED ERRATA | Docs Contact: |
Severity: | high | |
Priority: | high | CC: | aos-bugs, mfojtik
Version: | 4.10 | |
Target Milestone: | --- | |
Target Release: | 4.10.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2022-03-11 18:15:11 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
zhou ying 2022-01-11 15:22:27 UTC
Can you try this again with the latest oc? There was a recent PR that fixes many prune issues.

The issue could still be reproduced with the latest oc at the time:

```
[root@localhost out]# oc version --client
Client Version: 4.10.0-202201190323.p0.g05f6cd4.assembly.stream-05f6cd4
[root@localhost out]# oc adm prune deployments --replica-sets=true --orphans=true --keep-younger-than=1m --keep-complete=0 --confirm
NAMESPACE                   NAME
openshift-monitoring        prometheus-adapter-7b9ff4d8c
openshift-monitoring        prometheus-adapter-865f7f4486
openshift-monitoring        prometheus-adapter-7845f596d4
openshift-monitoring        prometheus-adapter-84d8844484
openshift-monitoring        prometheus-adapter-7d588b76cb
openshift-monitoring        prometheus-adapter-7c95648fc6
openshift-monitoring        prometheus-adapter-bcd746c4
openshift-monitoring        prometheus-adapter-79798985dc
openshift-monitoring        prometheus-adapter-6fc78c4b5
openshift-monitoring        prometheus-adapter-68ffcbfb56
openshift-oauth-apiserver   apiserver-7dc577f4df
openshift-oauth-apiserver   apiserver-78fffb5c5b
openshift-image-registry    image-registry-677f7cddd4
openshift-image-registry    image-registry-84ff564788
openshift-apiserver         apiserver-6bc7749666
openshift-apiserver         apiserver-7bc5bd5bb6
openshift-apiserver         apiserver-6dcd4699c9
openshift-apiserver         apiserver-cd7796f7b
openshift-apiserver         apiserver-644b8845c
openshift-console           console-96bc8bf7
openshift-authentication    oauth-openshift-77b8c84f84
openshift-authentication    oauth-openshift-7ddfcb8c5d
openshift-authentication    oauth-openshift-65ff554f56
[root@localhost out]# oc get rs
NAME               DESIRED   CURRENT   READY   AGE
mydep-577c59f8d    0         0         0       25m
mydep-589c48545c   0         0         0       24m
mydep-75fd7468d4   0         0         0       18m
mydep-789855b6cb   0         0         0       26m
mydep-7f7f5f9bf7   0         0         0       18m
mydep-98584d9cf    15        15        15      24m
mydep-c4844b4b6    0         0         0       24m
mydep-d7bd599f7    10        10        0       18m
```

Note that the `mydep` ReplicaSets are still present after pruning.

The issue has been fixed:

```
[root@localhost autoinfo]# oc version --client
Client Version: 4.10.0-202201281850.p0.g7c299f1.assembly.stream-7c299f1
[root@localhost autoinfo]# oc delete deploy/mytest --cascade='orphan'
deployment.apps "mytest" deleted
[root@localhost autoinfo]# oc get rs --sort-by='{.metadata.creationTimestamp}'
NAME                DESIRED   CURRENT   READY   AGE
mytest-6d5fdcfd9f   0         0         0       8m1s
mytest-6d5b995b5    0         0         0       7m24s
mytest-76976fc7dd   0         0         0       7m20s
mytest-7b87847f5f   0         0         0       7m18s
mytest-6bd76f6d78   0         0         0       7m14s
mytest-6cb7d4fd7c   15        15        15      7m11s
mytest-7dcc47f77f   10        10        0       6m34s
[root@localhost openshift-tests-private]# oc adm prune deployments --replica-sets=true --orphans=true --keep-younger-than=1m --keep-complete=0 --confirm
NAMESPACE              NAME
zhouyt12               mytest-6bd76f6d78
zhouyt12               mytest-6d5b995b5
zhouyt12               mytest-6d5fdcfd9f
zhouyt12               mytest-76976fc7dd
zhouyt12               mytest-7b87847f5f
openshift-monitoring   prometheus-adapter-68b989fcc4
openshift-monitoring   prometheus-adapter-689898487c
openshift-monitoring   prometheus-adapter-7968b6fb4c
[root@localhost autoinfo]# oc get rs
NAME                DESIRED   CURRENT   READY   AGE
mytest-6cb7d4fd7c   15        15        15      14m
mytest-7dcc47f77f   10        10        0       13m
```

With the fixed client, the orphaned `mytest` ReplicaSets (whose owning Deployment was deleted with `--cascade='orphan'`) are pruned, and only the ReplicaSets that still have replicas remain.
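For context, the orphaned-ReplicaSet state verified above can be set up with a short session along these lines. This is a hedged sketch, not taken verbatim from the bug report: the deployment name `mydep`, the image, and the three-rollout loop are illustrative assumptions (the pods may crash-loop because no long-running command is set, which does not affect pruning); only the `--cascade='orphan'` delete and the prune invocation come from the transcripts above.

```shell
# Illustrative reproduction sketch; requires a logged-in oc client
# against a disposable namespace. Names and image are assumptions.

# 1. Create a Deployment and roll it a few times so old ReplicaSets
#    accumulate at 0 desired replicas.
oc create deployment mydep --image=registry.access.redhat.com/ubi8/ubi
for rev in 1 2 3; do
  # Each env change triggers a new rollout and a new ReplicaSet.
  oc set env deployment/mydep REV="$rev"
done

# 2. Delete the Deployment while orphaning its ReplicaSets,
#    as in the verification transcript above.
oc delete deployment mydep --cascade='orphan'

# 3. Before the fix, these orphaned ReplicaSets survived pruning;
#    with the fixed client they are removed (subject to --keep-younger-than).
oc adm prune deployments --replica-sets=true --orphans=true \
  --keep-younger-than=1m --keep-complete=0 --confirm
```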