Bug 1452010 - Need to have the admission controller PodPreset enabled
Summary: Need to have the admission controller PodPreset enabled
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 3.6.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.6.z
Assignee: Derek Carr
QA Contact: DeShuai Ma
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-05-18 07:15 UTC by Weihua Meng
Modified: 2017-10-25 13:02 UTC (History)
4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-10-25 13:02:19 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:3049 0 normal SHIPPED_LIVE OpenShift Container Platform 3.6, 3.5, and 3.4 bug fix and enhancement update 2017-10-25 15:57:15 UTC

Description Weihua Meng 2017-05-18 07:15:11 UTC
Description of problem:
We need to have the admission controller PodPreset enabled to make Pod Preset work. 

Version-Release number of selected component (if applicable):
openshift v3.6.76
kubernetes v1.6.1+5115d708d7
etcd 3.1.0

How reproducible:
Always

Steps to Reproduce:
1. Create a PodPreset.
2. Create a pod with labels matching the existing PodPreset.
3. Examine the pod: oc get pod -o yaml

Actual results:
No change is applied to the pod.

Expected results:
The pod spec is modified according to the PodPreset.

Additional info:

Comment 2 Derek Carr 2017-06-07 15:05:52 UTC
PodPreset will default to off; here is the PR that adds support in Origin:
https://github.com/openshift/origin/pull/14461

Comment 3 Derek Carr 2017-06-12 15:07:37 UTC
PR merged.

Comment 4 DeShuai Ma 2017-06-14 08:37:56 UTC
The fix is in the latest OCP 3.6 build; moving to ON_QA.

Comment 5 DeShuai Ma 2017-06-14 08:43:52 UTC
# openshift version
openshift v3.6.106
kubernetes v1.6.1+5115d708d7
etcd 3.2.0

Steps to verify:
1. Enable the PodPreset admission controller in master-config.yaml as below, then restart the master service so the change takes effect:
-------
admissionConfig:
  pluginConfig:
    PodPreset:
      configuration:
        kind: DefaultAdmissionConfig
        apiVersion: v1
        disable: false

2. Create a PodPreset and a pod (the pod's labels must match the PodPreset's selector):
# cat podpreset.yaml 
kind: PodPreset
apiVersion: settings.k8s.io/v1alpha1
metadata:
  name: allow-database
spec:
  selector:
    matchLabels:
      role: frontend
  env:
    - name: DB_PORT
      value: "6379"
  volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
    - name: cache-volume
      emptyDir: {}
# cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: hello-pod
    role: frontend
  name: hello-pod
spec:
  containers:
    - image: "docker.io/deshuai/hello-pod:latest"
      imagePullPolicy: IfNotPresent
      name: hello-pod
      ports:
        - containerPort: 8080
          protocol: TCP
      resources: {}
      securityContext:
        capabilities: {}
        privileged: false
      terminationMessagePath: /dev/termination-log
      volumeMounts:
        - mountPath: /tmp
          name: tmp
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  serviceAccount: ""
  volumes:
    - emptyDir: {}
      name: tmp

3. When the pod is running, verify that the PodPreset's env, volumeMounts, and volumes were injected into the pod:
[root@qe-dma36-master-1 tmp]# oc get po hello-pod
NAME        READY     STATUS    RESTARTS   AGE
hello-pod   1/1       Running   0          6m
[root@qe-dma36-master-1 tmp]# oc get po hello-pod -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: anyuid
    podpreset.admission.kubernetes.io/allow-database: "8352"
  creationTimestamp: 2017-06-14T08:36:39Z
  labels:
    name: hello-pod
    role: frontend
  name: hello-pod
  namespace: default
  resourceVersion: "8386"
  selfLink: /api/v1/namespaces/default/pods/hello-pod
  uid: 95e85316-50dc-11e7-948a-42010af00013
spec:
  containers:
  - env:
    - name: DB_PORT
      value: "6379"
    image: docker.io/deshuai/hello-pod:latest
    imagePullPolicy: IfNotPresent
    name: hello-pod
    ports:
    - containerPort: 8080
      protocol: TCP
    resources: {}
    securityContext:
      capabilities:
        drop:
        - MKNOD
        - SYS_CHROOT
      privileged: false
      seLinuxOptions:
        level: s0:c6,c5
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /tmp
      name: tmp
    - mountPath: /cache
      name: cache-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-bq787
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: default-dockercfg-25t3s
  nodeName: qe-dma36-node-registry-router-1
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    seLinuxOptions:
      level: s0:c6,c5
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - emptyDir: {}
    name: tmp
  - emptyDir: {}
    name: cache-volume
  - name: default-token-bq787
    secret:
      defaultMode: 420
      secretName: default-token-bq787
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2017-06-14T08:36:39Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2017-06-14T08:36:43Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2017-06-14T08:36:39Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://c0a6090283eed44040786491ecec652485beea5f277c7278774a5fdee1463408
    image: docker.io/deshuai/hello-pod:latest
    imageID: docker-pullable://docker.io/deshuai/hello-pod@sha256:90b815d55c95fffafd7b68a997787d0b939cdae1bca785c6f52b5d3ffa70714f
    lastState: {}
    name: hello-pod
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2017-06-14T08:36:42Z
  hostIP: 10.240.0.20
  phase: Running
  podIP: 10.128.0.17
  qosClass: BestEffort
  startTime: 2017-06-14T08:36:39Z
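The injection verified above can be sketched in plain Python: when a pod's labels satisfy the preset's selector, the admission plugin merges the preset's env, volumeMounts, and volumes into the pod spec and records the injection in an annotation. This is a simplified, hypothetical model for illustration (the function name and dict shapes are assumptions), not Origin's actual plugin code:

```python
def apply_pod_preset(pod, preset):
    """Simplified model of PodPreset injection: merge the preset's env,
    volumeMounts, and volumes into a pod whose labels match the selector."""
    selector = preset["spec"]["selector"]["matchLabels"]
    labels = pod["metadata"].get("labels", {})
    if any(labels.get(k) != v for k, v in selector.items()):
        return pod  # selector does not match; leave the pod unchanged
    spec = preset["spec"]
    for container in pod["spec"]["containers"]:
        container.setdefault("env", []).extend(spec.get("env", []))
        container.setdefault("volumeMounts", []).extend(spec.get("volumeMounts", []))
    pod["spec"].setdefault("volumes", []).extend(spec.get("volumes", []))
    # Record the injection, as seen in the output above
    # (annotation value is the preset's resourceVersion)
    key = "podpreset.admission.kubernetes.io/" + preset["metadata"]["name"]
    pod["metadata"].setdefault("annotations", {})[key] = \
        preset["metadata"].get("resourceVersion", "")
    return pod

# Inputs mirroring podpreset.yaml and pod.yaml from the steps above
preset = {
    "metadata": {"name": "allow-database", "resourceVersion": "8352"},
    "spec": {
        "selector": {"matchLabels": {"role": "frontend"}},
        "env": [{"name": "DB_PORT", "value": "6379"}],
        "volumeMounts": [{"mountPath": "/cache", "name": "cache-volume"}],
        "volumes": [{"name": "cache-volume", "emptyDir": {}}],
    },
}
pod = {
    "metadata": {"name": "hello-pod", "labels": {"role": "frontend"}},
    "spec": {"containers": [{"name": "hello-pod"}]},
}
pod = apply_pod_preset(pod, preset)
print(pod["spec"]["containers"][0]["env"])  # [{'name': 'DB_PORT', 'value': '6379'}]
```

If the pod's labels do not include `role: frontend`, the function returns the pod untouched, which matches the "Actual results" seen before the admission controller was enabled.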

Comment 7 errata-xmlrpc 2017-10-25 13:02:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3049

