Bug 1915661 - Can't run the 'oc adm prune' command in a pod
Summary: Can't run the 'oc adm prune' command in a pod
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: openshift-apiserver
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.7.0
Assignee: Maciej Szulik
QA Contact: zhou ying
URL:
Whiteboard:
Duplicates: 1915242 1915902
Depends On:
Blocks:
Reported: 2021-01-13 07:59 UTC by zhou ying
Modified: 2021-02-24 15:52 UTC
CC: 15 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: Pruning in the oc command was changed to add support for monitoring jobs, cron jobs, and daemon sets. Consequence: The pruner did not have the access rights necessary to list those resources. Fix: The system:image-pruner role was changed to include the necessary access rights. Result: Pruning works as expected.
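Conceptually, the fix amounts to adding RBAC rules like the following to the system:image-pruner cluster role. This is a hedged sketch, not the actual manifest from the fix: the exact rule layout and verbs are assumptions, and the resource list is inferred from the linked pull request title and the error messages in this report.

```yaml
# Hypothetical fragment of the system:image-pruner ClusterRole after the fix.
# Exact rules are an assumption; see openshift/openshift-apiserver pull 177.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:image-pruner
rules:
- apiGroups:
  - batch
  resources:
  - jobs
  - cronjobs
  verbs:
  - get
  - list
- apiGroups:
  - apps
  resources:
  - daemonsets
  - statefulsets
  verbs:
  - get
  - list
```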
Clone Of:
Environment:
Last Closed: 2021-02-24 15:52:40 UTC
Target Upstream Version:
Embargoed:




Links:
- Github openshift/openshift-apiserver pull 177 (closed): Bug 1915661: update image-pruner role to include jobs, cronjobs and statefulsets (last updated 2021-02-18 16:03:02 UTC)
- Red Hat Product Errata RHSA-2020:5633 (last updated 2021-02-24 15:52:56 UTC)

Description zhou ying 2021-01-13 07:59:08 UTC
Description of problem:
Can't run the 'oc adm prune' command in a pod

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1) Grant the default service account the system:image-pruner cluster role:
`oc adm policy add-cluster-role-to-user system:image-pruner -z default`

2) Get the latest oc cli image from:
`oc get imagestreams cli --output=yaml -n openshift`

3) Create a pod with the oc cli image:
apiVersion: v1
kind: Pod
metadata:
  name: cli
  labels:
    name: cli
spec:
  restartPolicy: OnFailure
  containers:
  - name: cli
    image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1184805bb6d858ff190c98400123b057edc15ca010a401301bb02f386fa3c0b
    command: [ "oc" ]
    args: 
      - "adm"
      - "prune"
      - "images"
      - "--force-insecure=true"
      - "--prune-registry=false"
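Before running the pruner pod, the service account's access can be checked directly with `oc auth can-i`. This is a sketch that must run against a live cluster; the namespace `zhouy2` is taken from the error message in the actual results below, and the resource list is an assumption based on what the pruner inspects.

```shell
# Check whether the default service account can list each resource type
# the pruner walks; each line prints "yes" or "no".
for res in statefulsets.apps daemonsets.apps jobs.batch cronjobs.batch; do
  oc auth can-i list "$res" --as=system:serviceaccount:zhouy2:default
done
```

On a build affected by this bug, the `statefulsets.apps` check returns "no" for a service account bound only to system:image-pruner.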

Actual results:
The pod always fails with an error:
[root@dhcp-140-138 ~]# oc logs -f po/cli
Error from server (Forbidden): statefulsets.apps is forbidden: User "system:serviceaccount:zhouy2:default" cannot list resource "statefulsets" in API group "apps" at the cluster scope

Expected results:
No error.

Additional info:

Comment 1 Ricardo Maraschini 2021-01-13 12:09:35 UTC
*** Bug 1915242 has been marked as a duplicate of this bug. ***

Comment 3 Oleg Bulatov 2021-01-14 11:36:05 UTC
*** Bug 1915902 has been marked as a duplicate of this bug. ***

Comment 5 zhou ying 2021-01-15 05:32:22 UTC
Confirmed with the following payload; the issue is fixed:
[root@dhcp-140-138 roottest]# oc get clusterversion 
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2021-01-14-211319   True        False         91m     Cluster version is 4.7.0-0.nightly-2021-01-14-211319

[root@dhcp-140-138 roottest]# oc get po 
NAME   READY   STATUS    RESTARTS   AGE
cli    1/1     Running   0          4s
[root@dhcp-140-138 roottest]# oc logs -f po/cli
Only API objects will be removed.  No modifications to the image registry will be made.
Dry run enabled - no modifications will be made. Add --confirm to remove images
Summary: deleted 0 objects
[root@dhcp-140-138 roottest]# oc get po 
NAME   READY   STATUS      RESTARTS   AGE
cli    0/1     Completed   0          16s

Comment 6 Robert Heinzmann 2021-01-17 08:14:30 UTC
Seeing the same thing during an update from 4.6.9 -> 4.7.0-fc.2; it is blocking the update:

NAMESPACE                                          NAME                                                      READY   STATUS      RESTARTS   AGE
openshift-image-registry                           image-pruner-1610841600-zvp8x                             0/1     Error       0          8h

[stack@osp16amd ocp-test1]$ oc logs image-pruner-1610841600-zvp8x -n openshift-image-registry
Error from server (Forbidden): statefulsets.apps is forbidden: User "system:serviceaccount:openshift-image-registry:pruner" cannot list resource "statefulsets" in API group "apps" at the cluster scope

NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.9     True        True          10h     Unable to apply 4.7.0-fc.2: an unknown error has occurred: MultipleErrors

Cluster operators:
NAME                                       VERSION      AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.7.0-fc.2   True        False         True       122m
baremetal                                  4.7.0-fc.2   True        False         False      9h
cloud-credential                           4.7.0-fc.2   True        False         False      11h
cluster-autoscaler                         4.7.0-fc.2   True        False         False      11h
config-operator                            4.7.0-fc.2   True        False         False      11h
console                                    4.7.0-fc.2   True        False         False      123m
csi-snapshot-controller                    4.7.0-fc.2   True        False         False      9h
dns                                        4.6.9        True        False         False      11h
etcd                                       4.7.0-fc.2   True        False         False      11h
image-registry                             4.7.0-fc.2   True        False         True       10h
ingress                                    4.7.0-fc.2   True        False         False      10h
insights                                   4.7.0-fc.2   True        False         False      11h
kube-apiserver                             4.7.0-fc.2   True        False         False      11h
kube-controller-manager                    4.7.0-fc.2   True        False         False      11h
kube-scheduler                             4.7.0-fc.2   True        False         False      11h
kube-storage-version-migrator              4.7.0-fc.2   True        False         False      10h
machine-api                                4.7.0-fc.2   True        False         False      11h
machine-approver                           4.7.0-fc.2   True        False         False      11h
machine-config                             4.6.9        True        False         False      11h
marketplace                                4.7.0-fc.2   True        False         False      9h
monitoring                                 4.7.0-fc.2   True        False         False      3h7m
network                                    4.7.0-fc.2   True        False         False      9h
node-tuning                                4.7.0-fc.2   True        False         False      9h
openshift-apiserver                        4.7.0-fc.2   True        False         False      123m
openshift-controller-manager               4.7.0-fc.2   True        False         False      11h
openshift-samples                          4.7.0-fc.2   True        False         False      9h
operator-lifecycle-manager                 4.7.0-fc.2   True        False         False      11h
operator-lifecycle-manager-catalog         4.7.0-fc.2   True        False         False      11h
operator-lifecycle-manager-packageserver   4.7.0-fc.2   True        False         False      123m
service-ca                                 4.7.0-fc.2   True        False         False      11h
storage                                    4.7.0-fc.2   True        False         False      9h

Comment 11 errata-xmlrpc 2021-02-24 15:52:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

