Description of problem:
Adding the view role to a service account and then removing it has no effect; the container is still able to view OpenShift resources.

Version:
openshift v3.3.1.3
kubernetes v1.3.0+52492b4

How reproducible:
Every time

Steps to Reproduce:
1. Deploy any pod using the default service account in namespace testnamespace.
2. Run:
   oc policy add-role-to-user view system:serviceaccount:testnamespace:default -n testnamespace
3. Run in the container terminal:
   bearerHeader="Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
   url="https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/api/v1/namespaces/testnamespace/pods"
   curl -G -k -H "${bearerHeader}" ${url}
4. Observe that the pods in the namespace are returned.
5. Run:
   oc policy add-role-to-user view system:serviceaccount:testnamespace:default -n testnamespace
6. Run again in the same container terminal:
   curl -G -k -H "${bearerHeader}" ${url}

Actual results:
The pods in the namespace are still returned.

Expected results:
A 403 Forbidden response.

Additional info:
Works properly in version:
oc v3.3.0.35
kubernetes v1.3.0+52492b4
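Not part of the original reproduction, but one way to cross-check the grant from a client with cluster access (a sketch, assuming the oc client is logged in with sufficient rights) is to ask the server which subjects hold the permission:

   # list the subjects allowed to list pods in the namespace;
   # system:serviceaccount:testnamespace:default should drop out of this
   # list once the view role is actually revoked
   oc policy who-can list pods -n testnamespace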
Was step 5 supposed to be "oc policy remove-role-from-user view system:serviceaccount:testnamespace:default -n testnamespace"? When I remove the role, I get the forbidden error as expected.
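Roughly, the corrected step 5 and the re-test in the same container terminal would be:

   # revoke the view role that was granted in step 2
   oc policy remove-role-from-user view system:serviceaccount:testnamespace:default -n testnamespace

   # repeat the API call from step 3; a 403 Forbidden response is expected now
   curl -G -k -H "${bearerHeader}" ${url}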
Yes, step 5 was meant to remove the role. I retried and noticed that the default service account is able to get resources even without being granted the view role (it does not have the role at all). Could it be configured somewhere that the default service account is authorized to see resources regardless of whether it has the appropriate role?
By default, that service account has no API permissions. Do you have a custom project template set up? Have you granted any cluster-wide permissions? What does the following show:

   oc get rolebindings -n testnamespace -o yaml
   oc get clusterrolebindings -o yaml
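If those dumps are long, a rough way to narrow the search (the grep context size is arbitrary) is to look for cluster-wide bindings that mention the all-service-accounts group:

   # show any cluster role binding granted to every service account in the cluster
   oc get clusterrolebindings -o yaml | grep -B 5 -A 5 'system:serviceaccounts'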
It turned out that our cluster role bindings included the cluster-reader role for the group system:serviceaccounts. After removing this binding, everything works as expected.
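For reference, the exact cleanup command is not recorded here; assuming the grant was made at cluster scope to that group, the matching revoke would be roughly:

   # revoke cluster-reader from the group that contains every service account
   oc adm policy remove-cluster-role-from-group cluster-reader system:serviceaccounts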