Description of problem:
A pod spec with an invalid capability in the 'securityContext' field, e.g. `"securityContext": {"capabilities": {"add": ["KILLtest"]}}`, can be created successfully in a cri-o environment.

Version-Release number of selected component (if applicable):
# openshift version
openshift v3.11.0-0.9.0
# crio -v
crio version 1.11.1
commit: "96828874a5891219d5ae239f82bc5f6669454c4f-dirty"

How reproducible:
Always

Steps to Reproduce:
1. Create an SCC:
```
allowedCapabilities:
- FSETID
- KILLtest
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
apiVersion: v1
groups:
- system:serviceaccounts:{YourProjectName}
kind: SecurityContextConstraints
metadata:
  labels:
    name: scc-cap
  name: scc-cap
```
2. Create a pod from pod.json:
```
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "pod-add-chown",
    "labels": {
      "name": "pod-add-chown"
    }
  },
  "spec": {
    "containers": [{
      "name": "pod-add-chown",
      "image": "bmeng/hello-openshift",
      "securityContext": {
        "capabilities": {"add": ["KILLtest"]}
      }
    }]
  }
}
```

Actual results:
No warning; the pod and container are created successfully.

Expected results:
`oc describe pod` should show something like:
Warning  Failed  Error: failed to start container "pod-add-chown": Error response from daemon: linux spec capabilities: Unknown capability to add: "CAP_KILLtest"

Additional info:
Fix here https://github.com/kubernetes-incubator/cri-o/pull/1707
Hi, I see that `crio version 1.11.1` includes the https://github.com/kubernetes-incubator/cri-o/pull/1707 commit, but this bug still occurs with `crio version 1.11.1`. Thanks.
The fix will be in cri-o 1.11.2. We missed that commit in 1.11.1.
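For reference, the kind of validation the linked PR adds can be sketched as below. This is an illustrative Go snippet, not the actual cri-o code; `knownCapabilities` and `validateCapabilities` are hypothetical names, and the list is abbreviated (cri-o derives the real one from its capability library). The idea is to normalize each requested capability to its `CAP_`-prefixed form and reject unknown names before the container spec reaches the runtime, instead of silently accepting them.

```go
package main

import (
	"fmt"
	"strings"
)

// knownCapabilities is a hypothetical, abbreviated stand-in for the full
// capability list the runtime knows about.
var knownCapabilities = map[string]bool{
	"CAP_CHOWN":    true,
	"CAP_FSETID":   true,
	"CAP_KILL":     true,
	"CAP_SYS_TIME": true,
}

// validateCapabilities normalizes each requested capability name to its
// CAP_-prefixed, upper-case form and errors out on anything unknown.
func validateCapabilities(caps []string) error {
	for _, c := range caps {
		name := "CAP_" + strings.ToUpper(strings.TrimPrefix(c, "CAP_"))
		if !knownCapabilities[name] {
			return fmt.Errorf("unknown capability %q to add", name)
		}
	}
	return nil
}

func main() {
	// Valid names pass; "KILLtest" normalizes to "CAP_KILLTEST" and is rejected,
	// matching the error seen in the events above.
	fmt.Println(validateCapabilities([]string{"KILL", "SYS_TIME"}))
	fmt.Println(validateCapabilities([]string{"KILLtest"}))
}
```

With this check in place the kubelet surfaces the error as a `CreateContainerError` instead of starting the container, which is the behavior verified in the comments below.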
Moving back to MODIFIED since the current build is still cri-o-1.11.1-2.rhaos3.11.git1759204.el7.x86_64.
Frantisek, please build a cri-o 1.11.2
So I got this built: https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=18076851 PTAL
It works with 1.11.2 on my system. I get:
```
NAME      READY     STATUS                 RESTARTS   AGE
capbug1   0/1       CreateContainerError   0          6m
capbug2   0/1       CreateContainerError   0          3m
capbug3   0/1       CreateContainerError   0          3m
capbug4   0/1       CreateContainerError   0          11s
```
```
Events:
  Type     Reason          Age               From                 Message
  ----     ------          ----              ----                 -------
  Normal   Scheduled       8s                default-scheduler    Successfully assigned default/capbug4 to 127.0.0.1
  Normal   SandboxChanged  7s                kubelet, 127.0.0.1   Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          5s (x4 over 8s)   kubelet, 127.0.0.1   Container image "gcr.io/google-samples/node-hello:1.0" already present on machine
  Warning  Failed          5s (x4 over 8s)   kubelet, 127.0.0.1   Error: unknown capability "CAP_KILLTEST" to add
```
```
k get pod capbug1 -o yaml | grep -i -n3 capabilities
14-    name: capbug
15-    resources: {}
16-    securityContext:
17:      capabilities:
18-        add:
19-        - KILLtest
20-        - SYS_TIME
```
openshift + crio also fails correctly with cri-o-1.11.2-1.rhaos3.11.git3eac3b2.el7.x86_64:
```
Events:
  Type     Reason          Age              From                            Message
  ----     ------          ----             ----                            -------
  Normal   Scheduled       38s              default-scheduler               Successfully assigned default/capbug4 to runcomtest-ig-n-7w5w
  Normal   Pulling         36s              kubelet, runcomtest-ig-n-7w5w   pulling image "gcr.io/google-samples/node-hello:1.0"
  Normal   Pulled          7s               kubelet, runcomtest-ig-n-7w5w   Successfully pulled image "gcr.io/google-samples/node-hello:1.0"
  Normal   SandboxChanged  3s (x2 over 6s)  kubelet, runcomtest-ig-n-7w5w   Pod sandbox changed, it will be killed and re-created.
  Warning  Failed          1s (x3 over 7s)  kubelet, runcomtest-ig-n-7w5w   Error: unknown capability "CAP_KILLTEST" to add
  Normal   Pulled          1s (x2 over 4s)  kubelet, runcomtest-ig-n-7w5w   Container image "gcr.io/google-samples/node-hello:1.0" already present on machine

20:42:24 [release/cluster/test-deploy] ‹master*› oc get pods
NAME                       READY     STATUS                 RESTARTS   AGE
capbug4                    0/1       CreateContainerError   0          53s
docker-registry-1-dhggb    1/1       Running                0          6m
registry-console-1-9t9hs   1/1       Running                0          6m
```
Please re-test, as I cannot reproduce with the very same package you're using.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:2652