Bug 1915661
Summary: | Can't run the 'oc adm prune' command in a pod | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | zhou ying <yinzhou>
Component: | openshift-apiserver | Assignee: | Maciej Szulik <maszulik>
Status: | CLOSED ERRATA | QA Contact: | zhou ying <yinzhou>
Severity: | high | Docs Contact: |
Priority: | high | |
Version: | 4.7 | CC: | aos-bugs, apjagtap, cblecker, ckoep, cshereme, esimard, jokerman, mfojtik, nelluri, pamoedom, rbohne, rheinzma, trees, wking, wzheng
Target Milestone: | --- | Keywords: | Regression, ServiceDeliveryImpact
Target Release: | 4.7.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | Cause: Changes to pruning in the oc command, specifically the addition of support for monitoring jobs, cronjobs, and daemon sets. Consequence: The pruner did not have the access rights needed to list jobs, cronjobs, and daemon sets. Fix: The system:image-pruner role was changed to include the necessary access rights (see the sketch after this table). Result: Pruning works as expected. | |
Story Points: | --- | |
Clone Of: | | Environment: |
Last Closed: | 2021-02-24 15:52:40 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
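
The exact role change that shipped with the fix is not quoted in this report. As a rough sketch only, assuming the affected resources are the ones named in the Doc Text and in the Forbidden error below (jobs, cronjobs, daemon sets, and stateful sets), an equivalent manual grant to the pruner service account could look like the following; the cluster role name image-pruner-workloads is purely illustrative and not part of the shipped fix:

    # Sketch: grant list access on the workload resources the 4.7 pruner inspects.
    # "image-pruner-workloads" is a hypothetical name used only for this example.
    oc create clusterrole image-pruner-workloads \
        --verb=list \
        --resource=jobs.batch,cronjobs.batch,daemonsets.apps,statefulsets.apps

    # Bind it to the pruner service account used by the image-pruner cron job
    # (service account name taken from the error message quoted below).
    oc adm policy add-cluster-role-to-user image-pruner-workloads \
        -z pruner -n openshift-image-registry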
Description zhou ying 2021-01-13 07:59:08 UTC
*** Bug 1915242 has been marked as a duplicate of this bug. ***

*** Bug 1915902 has been marked as a duplicate of this bug. ***

Confirmed with the payload below; the issue has been fixed:

[root@dhcp-140-138 roottest]# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2021-01-14-211319   True        False         91m     Cluster version is 4.7.0-0.nightly-2021-01-14-211319

[root@dhcp-140-138 roottest]# oc get po
NAME   READY   STATUS    RESTARTS   AGE
cli    1/1     Running   0          4s

[root@dhcp-140-138 roottest]# oc logs -f po/cli
Only API objects will be removed. No modifications to the image registry will be made.
Dry run enabled - no modifications will be made. Add --confirm to remove images
Summary: deleted 0 objects

[root@dhcp-140-138 roottest]# oc get po
NAME   READY   STATUS      RESTARTS   AGE
cli    0/1     Completed   0          16s

Seeing the same thing during an update from 4.6.9 -> 4.7.0-fc.2, blocking the update:

NAMESPACE                   NAME                            READY   STATUS   RESTARTS   AGE
openshift-image-registry    image-pruner-1610841600-zvp8x   0/1     Error    0          8h

[stack@osp16amd ocp-test1]$ oc logs image-pruner-1610841600-zvp8x -n openshift-image-registry
Error from server (Forbidden): statefulsets.apps is forbidden: User "system:serviceaccount:openshift-image-registry:pruner" cannot list resource "statefulsets" in API group "apps" at the cluster scope

NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.9     True        True          10h     Unable to apply 4.7.0-fc.2: an unknown error has occurred: MultipleErrors

Clusteroperator:
NAME                                       VERSION      AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.7.0-fc.2   True        False         True       122m
baremetal                                  4.7.0-fc.2   True        False         False      9h
cloud-credential                           4.7.0-fc.2   True        False         False      11h
cluster-autoscaler                         4.7.0-fc.2   True        False         False      11h
config-operator                            4.7.0-fc.2   True        False         False      11h
console                                    4.7.0-fc.2   True        False         False      123m
csi-snapshot-controller                    4.7.0-fc.2   True        False         False      9h
dns                                        4.6.9        True        False         False      11h
etcd                                       4.7.0-fc.2   True        False         False      11h
image-registry                             4.7.0-fc.2   True        False         True       10h
ingress                                    4.7.0-fc.2   True        False         False      10h
insights                                   4.7.0-fc.2   True        False         False      11h
kube-apiserver                             4.7.0-fc.2   True        False         False      11h
kube-controller-manager                    4.7.0-fc.2   True        False         False      11h
kube-scheduler                             4.7.0-fc.2   True        False         False      11h
kube-storage-version-migrator              4.7.0-fc.2   True        False         False      10h
machine-api                                4.7.0-fc.2   True        False         False      11h
machine-approver                           4.7.0-fc.2   True        False         False      11h
machine-config                             4.6.9        True        False         False      11h
marketplace                                4.7.0-fc.2   True        False         False      9h
monitoring                                 4.7.0-fc.2   True        False         False      3h7m
network                                    4.7.0-fc.2   True        False         False      9h
node-tuning                                4.7.0-fc.2   True        False         False      9h
openshift-apiserver                        4.7.0-fc.2   True        False         False      123m
openshift-controller-manager               4.7.0-fc.2   True        False         False      11h
openshift-samples                          4.7.0-fc.2   True        False         False      9h
operator-lifecycle-manager                 4.7.0-fc.2   True        False         False      11h
operator-lifecycle-manager-catalog         4.7.0-fc.2   True        False         False      11h
operator-lifecycle-manager-packageserver   4.7.0-fc.2   True        False         False      123m
service-ca                                 4.7.0-fc.2   True        False         False      11h
storage                                    4.7.0-fc.2   True        False         False      9h

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633
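
A quick way to check the access rights discussed in this bug on a given cluster, either after updating to a payload that contains the fix or after a manual grant like the sketch under the metadata table, is to impersonate the pruner service account. This is only a sketch; the resource names are taken from the Doc Text and the Forbidden error above:

    # Each check should print "yes" once the role includes the required rules.
    for resource in jobs.batch cronjobs.batch daemonsets.apps statefulsets.apps; do
        oc auth can-i list "$resource" \
            --as=system:serviceaccount:openshift-image-registry:pruner
    done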