Bug 1508027 - Cannot annotate bare pod using service account
Summary: Cannot annotate bare pod using service account
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Security
Version: 3.6.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Eric Paris
QA Contact: Xiaoli Tian
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-10-31 17:31 UTC by Mike Cohen
Modified: 2018-04-02 13:31 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-10-31 18:48:43 UTC
Target Upstream Version:
Embargoed:


Attachments
Simple test program (1.34 KB, text/plain), 2017-10-31 17:31 UTC, Mike Cohen

Description Mike Cohen 2017-10-31 17:31:17 UTC
Created attachment 1346016: Simple test program

Description of problem:
If you create a pod directly (rather than through a deployment or other controller), the pod cannot be annotated using a service account.  You _can_ annotate the pod using the regular cluster admin account, but even after assigning cluster-admin permissions to the service account you still cannot annotate the pod.

I'm attaching a simple test program that gets a pod and adds an annotation to the pod's metadata, similar to what you might get if you ran kubectl annotate.
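The attachment is not reproduced inline here; as a point of reference, a minimal sketch of such a program might look like the following. This assumes the 2017-era client-go API (no context arguments) and a fallback kubeconfig path of ~/.kube/config, which may differ from the actual attachment.

// Minimal sketch of a program that fetches a pod and writes an annotation
// to its metadata, then updates the pod. Not the actual attachment.
package main

import (
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Prefer the in-cluster service account token; fall back to a local kubeconfig.
	config, err := rest.InClusterConfig()
	if err != nil {
		fmt.Println("Could not create in-cluster config")
		fmt.Println("Using kubeconfig")
		// Assumed fallback path; the real program may resolve this differently.
		config, err = clientcmd.BuildConfigFromFlags("", filepath.Join(os.Getenv("HOME"), ".kube", "config"))
		if err != nil {
			panic(err)
		}
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	name := os.Getenv("POD_NAME")
	ns := os.Getenv("POD_NS")
	value := os.Getenv("POD_ANNOTATION")

	// Get the pod, add an annotation to its metadata only, and write it back.
	pod, err := clientset.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pod.Annotations == nil {
		pod.Annotations = make(map[string]string)
	}
	pod.Annotations["test-annotation"] = value
	if _, err := clientset.CoreV1().Pods(ns).Update(pod); err != nil {
		panic(err)
	}
	fmt.Println("Wrote annotation test-annotation ", value)
}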

Then create a namespace, a service account, and a role granting it access to get and update pods, for example:
apiVersion: v1
kind: Namespace
metadata:
  name: testns
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: testserviceaccount
  namespace: testns
---
apiVersion: v1
kind: ClusterRole
metadata:
  name: testpodupdate
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - update
---
apiVersion: v1
kind: ClusterRoleBinding
metadata:
  name: testserviceaccountbinding
  labels:
    aci-containers-config-version: "24df5786-7b90-419f-8795-7bc468436c97"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: testpodupdate
subjects:
- kind: ServiceAccount
  name: testserviceaccount
  namespace: testns


If you run this program on a pod created with a deployment, then this program will succeed using the service account.  
$ POD_NAME=simpleservice-1882099781-3t1hp POD_NS=testdep POD_ANNOTATION=value2 /tmp/testkube
Wrote annotation test-annotation  value2

If you run this program on a pod created as a bare pod using the same service account, then this program fails:
$ POD_NAME=simpleservice-pod POD_NS=test POD_ANNOTATION=value2 /tmp/testkube 
panic: Pod "simpleservice-pod" is invalid: spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)

goroutine 1 [running]:
main.main()
	/home/readams/work/go/src/github.com/noironetworks/testkube/main.go:59 +0x5d4

Note that this program is NOT changing anything in the spec (only changing the metadata), so this message is simply wrong.

If you run it using the admin kubeconfig then it will succeed:
# POD_NAME=simpleservice-pod POD_NS=test POD_ANNOTATION=admin-account ./testkube
Could not create in-cluster config
Using kubeconfig
Wrote annotation test-annotation  admin-account

I see the same behavior if I use the patch method rather than the update method, but update keeps the example simpler.  I've also tried assigning cluster-admin privileges to this service account, but it does not seem to have any effect.
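For reference, the patch-based variant would replace the Get/Update pair in the sketch above with something along these lines (same era client-go; it additionally needs the k8s.io/apimachinery/pkg/types import). This is an illustrative fragment, not the reporter's actual code:

// Hypothetical patch-based variant: a strategic-merge patch that touches only metadata.
patch := []byte(fmt.Sprintf(`{"metadata":{"annotations":{"test-annotation":%q}}}`, value))
if _, err := clientset.CoreV1().Pods(ns).Patch(name, types.StrategicMergePatchType, patch); err != nil {
	panic(err)
}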

In upstream Kubernetes this works fine, so the behavior is somehow specific to OpenShift.

This is actually blocking our ability to deploy onto OpenShift since our networking plugin depends on the ability to apply annotations to pods from a controller.

Comment 1 Jordan Liggitt 2017-10-31 18:46:56 UTC
This is a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1383707. It is caused by SecurityContextConstraints modifying the pod spec during the update: the service account does not have access to the same policies, and the stricter policies it does have access to require tighter defaults to be set on the pod spec.

Comment 2 Seth Jennings 2017-10-31 18:48:43 UTC

*** This bug has been marked as a duplicate of bug 1383707 ***

Comment 3 Mike Cohen 2017-10-31 19:17:20 UTC
If that is the explanation, then why does this also occur if the service account has anyuid permissions?  Is there a permission or something I can add to the service account that allows the annotation?  How would I do that?

Comment 4 Seth Jennings 2017-10-31 19:24:46 UTC
Sending to Security to answer the question in comment 3.

Comment 5 Mike Cohen 2017-10-31 21:53:37 UTC
In my last comment I misspoke: I meant to ask why this also occurs if the service account has "privileged" permissions.  I CAN make it work if I specifically add "anyuid" to the service account.  But why would updating the pod with an account with _more_ permissions than anyuid trigger this issue?  I've also tried creating an SCC with every possible permission, and this ALSO does not allow the annotation to go through.  Why would the security context constraint mutate the pod write in this way when the constraint should allow any possible configuration?  Does anyuid have special hidden semantics?

Basically I'm looking for a way to set up a service account so it will always be able to apply an annotation no matter how the pod was created.  Is it sufficient to add anyuid or would this fail if the user created the pod using a different constraint?

# oc get scc test-scc -o yaml
allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegedContainer: true
allowedCapabilities:
- '*'
apiVersion: v1
defaultAddCapabilities: []
fsGroup:
  type: RunAsAny
kind: SecurityContextConstraints
metadata:
  creationTimestamp: 2017-10-31T21:50:44Z
  name: test-scc
  resourceVersion: "317963"
  selfLink: /api/v1/securitycontextconstraints/test-scc
  uid: 8bf617bd-be85-11e7-a719-84b261c2790e
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities: []
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
seccompProfiles:
- '*'
supplementalGroups:
  type: RunAsAny
users:
- system:serviceaccount:testns:testserviceaccount
volumes:
- '*'

Comment 6 Mike Cohen 2017-10-31 23:12:39 UTC
I've discovered that I can get it to work if I set a high priority on my SCC.  So presumably what happens is that, unless anyuid is granted to the user, all of the available SCC policies have no priority set.

According to:
https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#admission

When the complete set of available SCCs are determined they are ordered by:

    Highest priority first, nil is considered a 0 priority

    If priorities are equal, the SCCs will be sorted from most restrictive to least restrictive

    If both priorities and restrictions are equal the SCCs will be sorted by name


Here, step (2) is selecting the worst possible SCC (note that sorting most restrictive to least restrictive is impossible since the set is only partially ordered), even though a constraint is available that would allow the pod update to occur.
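For illustration only, a rough Go sketch of the documented ordering follows. The numeric "restrictiveness" score is a stand-in for whatever point values the real admission code derives from the SCC fields; only the sort rules themselves come from the documentation quoted above.

// Sketch of the documented SCC ordering: highest priority first (nil counts as 0),
// then most restrictive first, then by name.
package main

import (
	"fmt"
	"sort"
)

type scc struct {
	name            string
	priority        *int32 // nil means unset, treated as 0
	restrictiveness int    // higher = more restrictive (simplified stand-in)
}

func prio(s scc) int32 {
	if s.priority == nil {
		return 0
	}
	return *s.priority
}

func main() {
	ten := int32(10)
	sccs := []scc{
		{name: "restricted", restrictiveness: 100},
		{name: "test-scc", restrictiveness: 0},
		{name: "test-scc-high-prio", priority: &ten, restrictiveness: 0},
	}
	sort.SliceStable(sccs, func(i, j int) bool {
		if prio(sccs[i]) != prio(sccs[j]) {
			return prio(sccs[i]) > prio(sccs[j]) // highest priority first
		}
		if sccs[i].restrictiveness != sccs[j].restrictiveness {
			return sccs[i].restrictiveness > sccs[j].restrictiveness // most restrictive first
		}
		return sccs[i].name < sccs[j].name // then by name
	})
	// With no priority set, "restricted" sorts ahead of "test-scc";
	// giving an SCC a priority moves it to the front of the list.
	for _, s := range sccs {
		fmt.Println(s.name)
	}
}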

So I guess the solution here is that I need to run the controller pod with an SCC that has every possible permission and has the priority set higher than any default policy.

This seems like a pretty sad result but I guess that's what it is.

