Description of problem:
Can't run the 'oc adm prune' command in a pod.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1) Bind the default service account to the system:image-pruner cluster role:
   `oc adm policy add-cluster-role-to-user system:image-pruner -z default`
2) Get the latest oc cli image from:
   `oc get imagestreams cli --output=yaml -n openshift`
3) Create a pod with the oc cli image:

apiVersion: v1
kind: Pod
metadata:
  name: cli
  labels:
    name: cli
spec:
  restartPolicy: OnFailure
  containers:
  - name: cli
    image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1184805bb6d858ff190c98400123b057edc15ca010a401301bb02f386fa3c0b
    command: [ "oc" ]
    args:
    - "adm"
    - "prune"
    - "images"
    - "--force-insecure=true"
    - "--prune-registry=false"

Actual results:
The pod always fails with an error:

[root@dhcp-140-138 ~]# oc logs -f po/cli
Error from server (Forbidden): statefulsets.apps is forbidden: User "system:serviceaccount:zhouy2:default" cannot list resource "statefulsets" in API group "apps" at the cluster scope

Expected results:
No error.

Additional info:
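The Forbidden error suggests that the system:image-pruner cluster role in this build does not grant list access on statefulsets.apps, which the 4.7 pruner now reads. A minimal sketch of a possible interim workaround, until a fixed role ships, would be to bind an extra cluster role to the same service account; the role name image-pruner-workloads below is illustrative, not part of the product, and further workload kinds beyond statefulsets may also be needed:

```yaml
# Illustrative workaround only: grant read access to the resource named in the
# Forbidden error. The role name "image-pruner-workloads" is an assumption.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: image-pruner-workloads
rules:
- apiGroups: ["apps"]
  resources: ["statefulsets"]
  verbs: ["get", "list"]
```

Binding it would mirror step 1, e.g. `oc adm policy add-cluster-role-to-user image-pruner-workloads -z default`, and the resulting access can be checked with `oc auth can-i list statefulsets.apps --as=system:serviceaccount:zhouy2:default`.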
*** Bug 1915242 has been marked as a duplicate of this bug. ***
*** Bug 1915902 has been marked as a duplicate of this bug. ***
Confirmed with payload: , the issue has been fixed:

[root@dhcp-140-138 roottest]# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2021-01-14-211319   True        False         91m     Cluster version is 4.7.0-0.nightly-2021-01-14-211319

[root@dhcp-140-138 roottest]# oc get po
NAME   READY   STATUS    RESTARTS   AGE
cli    1/1     Running   0          4s

[root@dhcp-140-138 roottest]# oc logs -f po/cli
Only API objects will be removed. No modifications to the image registry will be made.
Dry run enabled - no modifications will be made. Add --confirm to remove images
Summary: deleted 0 objects

[root@dhcp-140-138 roottest]# oc get po
NAME   READY   STATUS      RESTARTS   AGE
cli    0/1     Completed   0          16s
Seeing the same thing during an update from 4.6.9 -> 4.7.0-fc.2; it is blocking the update.

NAMESPACE                  NAME                            READY   STATUS   RESTARTS   AGE
openshift-image-registry   image-pruner-1610841600-zvp8x   0/1     Error    0          8h

[stack@osp16amd ocp-test1]$ oc logs image-pruner-1610841600-zvp8x -n openshift-image-registry
Error from server (Forbidden): statefulsets.apps is forbidden: User "system:serviceaccount:openshift-image-registry:pruner" cannot list resource "statefulsets" in API group "apps" at the cluster scope

NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.9     True        True          10h     Unable to apply 4.7.0-fc.2: an unknown error has occurred: MultipleErrors

Clusteroperators:
NAME                                       VERSION      AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.7.0-fc.2   True        False         True       122m
baremetal                                  4.7.0-fc.2   True        False         False      9h
cloud-credential                           4.7.0-fc.2   True        False         False      11h
cluster-autoscaler                         4.7.0-fc.2   True        False         False      11h
config-operator                            4.7.0-fc.2   True        False         False      11h
console                                    4.7.0-fc.2   True        False         False      123m
csi-snapshot-controller                    4.7.0-fc.2   True        False         False      9h
dns                                        4.6.9        True        False         False      11h
etcd                                       4.7.0-fc.2   True        False         False      11h
image-registry                             4.7.0-fc.2   True        False         True       10h
ingress                                    4.7.0-fc.2   True        False         False      10h
insights                                   4.7.0-fc.2   True        False         False      11h
kube-apiserver                             4.7.0-fc.2   True        False         False      11h
kube-controller-manager                    4.7.0-fc.2   True        False         False      11h
kube-scheduler                             4.7.0-fc.2   True        False         False      11h
kube-storage-version-migrator              4.7.0-fc.2   True        False         False      10h
machine-api                                4.7.0-fc.2   True        False         False      11h
machine-approver                           4.7.0-fc.2   True        False         False      11h
machine-config                             4.6.9        True        False         False      11h
marketplace                                4.7.0-fc.2   True        False         False      9h
monitoring                                 4.7.0-fc.2   True        False         False      3h7m
network                                    4.7.0-fc.2   True        False         False      9h
node-tuning                                4.7.0-fc.2   True        False         False      9h
openshift-apiserver                        4.7.0-fc.2   True        False         False      123m
openshift-controller-manager               4.7.0-fc.2   True        False         False      11h
openshift-samples                          4.7.0-fc.2   True        False         False      9h
operator-lifecycle-manager                 4.7.0-fc.2   True        False         False      11h
operator-lifecycle-manager-catalog         4.7.0-fc.2   True        False         False      11h
operator-lifecycle-manager-packageserver   4.7.0-fc.2   True        False         False      123m
service-ca                                 4.7.0-fc.2   True        False         False      11h
storage                                    4.7.0-fc.2   True        False         False      9h
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633