Bug 2063938 - using the hard coded rest-mapper in library-go
Summary: using the hard coded rest-mapper in library-go
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-apiserver
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.11.0
Assignee: Abu Kashem
QA Contact: jmekkatt
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-03-14 16:36 UTC by Abu Kashem
Modified: 2022-08-10 10:54 UTC
CC: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-08-10 10:54:06 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift kubernetes pull 1215 0 None open Bug 2063938: UPSTREAM: <carry>: use hardcoded rest mapper from library-go 2022-03-15 13:55:45 UTC
Red Hat Product Errata RHSA-2022:5069 0 None None None 2022-08-10 10:54:21 UTC

Description Abu Kashem 2022-03-14 16:36:18 UTC
Start using the hard coded rest-mapper in library-go [1].

https://github.com/openshift/library-go/blob/master/pkg/client/openshiftrestmapper/hardcoded_restmapper.go#L10-L14

Currently, openshift kube-apiserver uses its own hard coded version: https://github.com/openshift/kubernetes/blob/f0177609ff6f2d7a95aa319e4c5a8e32eaec6184/pkg/kubeapiserver/admission/patch_restmapper.go#L10-L77. We should start using the default rest mappings from library-go.
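
For illustration, a hard coded rest-mapper pre-seeds a fixed set of kind-to-resource mappings and lets everything else fall through to the regular discovery-based mapper. The Go sketch below only illustrates that idea with apimachinery types; the helper name, the wrapping strategy, and the exact mappings in library-go's hardcoded_restmapper.go may differ.

package restmapperexample

import (
	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// newHardcodedRESTMapper is a hypothetical helper for illustration only.
// It seeds a few well-known mappings and falls back to the delegate
// (typically the discovery-based mapper) for everything else.
func newHardcodedRESTMapper(delegate meta.RESTMapper) meta.RESTMapper {
	hardcoded := meta.NewDefaultRESTMapper(nil)
	// Mappings that must resolve even when discovery is unavailable, e.g.
	// the kinds that admission plugins such as
	// OwnerReferencesPermissionEnforcement need to translate into resources.
	hardcoded.Add(schema.GroupVersionKind{Group: "apps", Version: "v1", Kind: "ReplicaSet"}, meta.RESTScopeNamespace)
	hardcoded.Add(schema.GroupVersionKind{Group: "", Version: "v1", Kind: "Pod"}, meta.RESTScopeNamespace)
	// Consult the hardcoded mappings first, then fall back to the delegate.
	return meta.FirstHitRESTMapper{MultiRESTMapper: meta.MultiRESTMapper{hardcoded, delegate}}
}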

Comment 3 jmekkatt 2022-03-24 15:54:08 UTC
Verification steps for the fix are as below.

$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-03-20-160505   True        False         9h      Cluster version is 4.11.0-0.nightly-2022-03-20-160505


1. Map a user to a role binding to create restricted access (an optional permission-check sketch follows after the rolebinding YAML).

Created a role for pods with all access except delete (verbs: get, watch, list, update, patch).
$ oc get role pod-reader -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: "2022-03-24T14:22:55Z"
  name: pod-reader
  namespace: default
  resourceVersion: "223277"
  uid: bd0ccf49-b2e2-44fc-931c-e9677eef3a15
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - watch
  - list
  - update
  - patch

Created a rolebinding to map testuser-0 to the above role.
$ oc get rolebinding  read-pods -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: "2022-03-24T14:23:04Z"
  name: read-pods
  namespace: default
  resourceVersion: "216553"
  uid: 28d77cf1-86bf-48dc-81b0-fa5872cb360e
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: testuser-0
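
As an optional cross-check (not part of the original verification steps), a SubjectAccessReview issued from a cluster-admin context can confirm that testuser-0 is indeed denied "delete" on pods in the default namespace, which is the precondition the admission-controller test below relies on. The helper is a hypothetical client-go sketch.

package rbaccheckexample

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// userCanDeletePods asks the apiserver whether the given user may delete
// pods in the given namespace; for testuser-0 and "default" this is
// expected to return false with the pod-reader role above.
func userCanDeletePods(ctx context.Context, client kubernetes.Interface, username, namespace string) (bool, error) {
	sar := &authorizationv1.SubjectAccessReview{
		Spec: authorizationv1.SubjectAccessReviewSpec{
			User: username,
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Namespace: namespace,
				Verb:      "delete",
				Resource:  "pods",
			},
		},
	}
	result, err := client.AuthorizationV1().SubjectAccessReviews().Create(ctx, sar, metav1.CreateOptions{})
	if err != nil {
		return false, fmt.Errorf("subjectaccessreview failed: %w", err)
	}
	return result.Status.Allowed, nil
}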


2. Created two replicasets and corresponding pods.

$ cat replicaset.yaml 
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: "2"
    deployment.kubernetes.io/max-replicas: "5"
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2022-03-24T13:38:21Z"
  generation: 1
  labels:
    app: hello
    pod-template-hash: 84d58449c5
  name: hello
  namespace: default
  resourceVersion: "199669"
  uid: 0882ce27-61d9-4ed7-8b4f-ad9a3829e556
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
      pod-template-hash: 84d58449c5
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hello
        pod-template-hash: 84d58449c5
    spec:
      containers:
      - image: quay.io/openshifttest/hello-openshift@sha256:1e70b596c05f46425c39add70bf749177d78c1e98b2893df4e5ae3883c2ffb5e
        imagePullPolicy: IfNotPresent
        name: hello
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  fullyLabeledReplicas: 1
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1

$ cat replicaset_second.yaml 
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: "2"
    deployment.kubernetes.io/max-replicas: "5"
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2022-03-24T13:38:21Z"
  generation: 1
  labels:
    app: hello-second
    pod-template-hash: 84d58449c5
  name: hello-second
  namespace: default
  resourceVersion: "199669"
  uid: 0882ce27-61d9-4ed7-8b4f-ad9a3829e556
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-second
      pod-template-hash: 84d58449c5
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hello-second
        pod-template-hash: 84d58449c5
    spec:
      containers:
      - image: quay.io/openshifttest/hello-openshift@sha256:1e70b596c05f46425c39add70bf749177d78c1e98b2893df4e5ae3883c2ffb5e
        imagePullPolicy: IfNotPresent
        name: hello-second
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  fullyLabeledReplicas: 1
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1

$ oc create -f replicaset.yaml 
replicaset.apps/hello created

$ oc create -f replicaset_second.yaml
replicaset.apps/hello-second created

$ oc get replicaset -n default
NAME           DESIRED   CURRENT   READY   AGE
hello          1         1         1       88m
hello-second   1         1         1       85m


3. Evaluate the hardcoded_restmapper change with the help of the default admission controller "OwnerReferencesPermissionEnforcement", using the steps below.

Login as testuser-0.
$ oc login -u testuser-0 -p aeg-GMuhh-6Z
Login successful.

List all the available pods:
$ oc get pods -n default
NAME                 READY   STATUS    RESTARTS   AGE
hello-mt57r          1/1     Running   0          16m
hello-second-v8fb5   1/1     Running   0          48m


Edit the pod definition, i.e. under ownerReferences, change the name of the replicaset to the other one, as below.
Change the pod definition from
apiVersion: v1
kind: Pod
<TRIMMED>
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: hello
    uid: 336ef9b6-32d8-4d62-8978-4a86dd381a6b
<TRIMMED>

to the pod definition
apiVersion: v1
kind: Pod
<TRIMMED>
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: hello-second
    uid: 336ef9b6-32d8-4d62-8978-4a86dd381a6b
<TRIMMED>

$ oc edit pod hello-mt57r
error: pods "hello-mt57r" could not be patched: pods "hello-mt57r" is forbidden: cannot set an ownerRef on a resource you can't delete: , <nil>

$ oc get pod hello-mt57r -o yaml
apiVersion: v1
kind: Pod
metadata:
  <TRIMMED>
  name: hello-mt57r
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: hello
    uid: 336ef9b6-32d8-4d62-8978-4a86dd381a6b
  resourceVersion: "222736"
  uid: d89f2b35-d082-4b7e-949f-fa7a520571ba
spec:
  containers:
  - image: quay.io/openshifttest/hello-openshift@sha256:1e70b596c05f46425c39add70bf749177d78c1e98b2893df4e5ae3883c2ffb5e
    imagePullPolicy: IfNotPresent
    name: hello
<TRIMMED>

The error while editing & saving the pod definition clearly indicates that the "OwnerReferencesPermissionEnforcement" admission controller was invoked successfully, as testuser-0 can't change the ownerReferences definition due to the missing delete permission in the role definition. Also, testuser-0 could list and get the pod definitions without any issue.
Hence the default OwnerReferencesPermissionEnforcement works as expected, which also confirms that the hardcoded_restmapper change works.
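
For reference, the check this admission controller performs can be sketched roughly as below. This is a simplified conceptual sketch, not the actual plugin code: the ownerReference kind is resolved to a resource through the REST mapper, and the requesting user must be authorized to delete that resource; a mapper that cannot resolve the kind, or a user without delete permission, produces the "cannot set an ownerRef on a resource you can't delete" error seen above.

package ownerrefcheckexample

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apiserver/pkg/authentication/user"
	"k8s.io/apiserver/pkg/authorization/authorizer"
)

// canSetBlockingOwnerRef is a simplified illustration of the admission
// check: resolve the owner kind to a resource via the REST mapper, then
// ask the authorizer whether the user may delete that resource.
func canSetBlockingOwnerRef(ctx context.Context, auth authorizer.Authorizer, mapper meta.RESTMapper,
	u user.Info, namespace string, ref metav1.OwnerReference) error {
	gv, err := schema.ParseGroupVersion(ref.APIVersion)
	if err != nil {
		return err
	}
	// e.g. kind ReplicaSet (apps/v1) -> resource replicasets.
	mapping, err := mapper.RESTMapping(schema.GroupKind{Group: gv.Group, Kind: ref.Kind}, gv.Version)
	if err != nil {
		return fmt.Errorf("cannot set an ownerRef on a resource you can't delete: %v", err)
	}
	attrs := authorizer.AttributesRecord{
		User:            u,
		Verb:            "delete",
		Namespace:       namespace,
		APIGroup:        mapping.Resource.Group,
		Resource:        mapping.Resource.Resource,
		Name:            ref.Name,
		ResourceRequest: true,
	}
	if decision, _, err := auth.Authorize(ctx, attrs); err != nil || decision != authorizer.DecisionAllow {
		return fmt.Errorf("cannot set blockOwnerDeletion on %s %q: delete not permitted", ref.Kind, ref.Name)
	}
	return nil
}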

Comment 4 jmekkatt 2022-03-25 13:57:58 UTC
The inline steps confirm that the PR has been merged into the tested 4.11 build.

$ oc adm release info --commits registry.ci.openshift.org/ocp/release:4.11.0-0.nightly-2022-03-20-160505 --registry-config=/home/jmekkatt/docker/config.json | grep hyperkube
  hyperkube                                      https://github.com/openshift/kubernetes                                     02aefbfd4f05e308ba3e645f76190a42db159805

$ cd kubernetes
$ git log --date local --pretty="%h %an %cd - %s" 02aefbfd | grep "#1215"
21cef0d2648 OpenShift Merge Robot Thu Mar 17 11:47:44 2022 - Merge pull request #1215 from tkashem/rest-mappings

Hence, based on the tested steps, the ticket state is moved to "verified".

Comment 7 errata-xmlrpc 2022-08-10 10:54:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069

