Description of problem: Creating resources that exceed the cluster quota hard limit produces no CLI warning; the resources are still created and counted against the quota.

Version-Release number of selected component (if applicable):
openshift v1.3.0-alpha.2+89b7193
kubernetes v1.3.0+507d3a7
etcd 2.3.0+git

How reproducible: Always

Steps to Reproduce:
1. Create 2 projects
```
# oc new-project project-a
# oc new-project project-b
```
2. Label the projects
```
# oc label namespace project-a user=dev --config=./admin.kubeconfig
# oc label namespace project-b user=qe --config=./admin.kubeconfig
```
3. Create a clusterresourcequota with label selector "user=dev"
```
# oc create clusterresourcequota crq --project-label-selector=user=dev --hard=pods=10 --hard=services=15 --hard=secrets=10 --config=./admin.kubeconfig
```
4. Check the clusterquota via CLI and web console
```
# oc describe clusterresourcequota crq --config=./admin.kubeconfig
```
5. Create a secret in project-a and check secrets via CLI and web console
```
# oc secrets new mysecret-1 /root/.ssh/xxx
```
6. Create a second secret in project-a and check secrets via CLI and web console again
```
# oc secrets new mysecret-2 /root/.ssh/xxx
```

Actual results:
4.
```
[root@dhcp-141-95 qwang]# oc describe clusterresourcequota crq --config=./admin.kubeconfig
Name:               crq
Namespace:          <none>
Created:            About an hour ago
Labels:             <none>
Annotations:        <none>
Label Selector:     user=dev
AnnotationSelector: map[]
Resource  Used  Hard
--------  ----  ----
pods      0     10
secrets   9     10
services  0     15
```
5.
```
[root@dhcp-141-95 qwang]# oc describe clusterresourcequota crq --config=./admin.kubeconfig
Name:               crq
Namespace:          <none>
Created:            About an hour ago
Labels:             <none>
Annotations:        <none>
Label Selector:     user=dev
AnnotationSelector: map[]
Resource  Used  Hard
--------  ----  ----
pods      0     10
secrets   10    10
services  0     15
[root@dhcp-141-95 qwang]# oc get secrets
NAME                       TYPE                                  DATA  AGE
builder-dockercfg-dolu0    kubernetes.io/dockercfg               1     57m
builder-token-rfssn        kubernetes.io/service-account-token   3     57m
builder-token-tjej4        kubernetes.io/service-account-token   3     57m
default-dockercfg-uotsj    kubernetes.io/dockercfg               1     57m
default-token-ks826        kubernetes.io/service-account-token   3     57m
default-token-y6qou        kubernetes.io/service-account-token   3     57m
deployer-dockercfg-sibgt   kubernetes.io/dockercfg               1     57m
deployer-token-1i3rt       kubernetes.io/service-account-token   3     57m
deployer-token-bjljm       kubernetes.io/service-account-token   3     57m
mysecret-1                 Opaque                                1     31m
```
There are 9 secrets by default. When the secret count reaches Hard=10, a "Quota limit reached" warning shows in the web console.
6. The 11th secret is created without any CLI warning:
```
[root@dhcp-141-95 qwang]# oc secrets new mysecret-2 /root/.ssh/xxx
secret/mysecret-2
[root@dhcp-141-95 qwang]# oc describe clusterresourcequota crq --config=./admin.kubeconfig
Name:               crq
Namespace:          <none>
Created:            About an hour ago
Labels:             <none>
Annotations:        <none>
Label Selector:     user=dev
AnnotationSelector: map[]
Resource  Used  Hard
--------  ----  ----
pods      0     10
secrets   11    10
services  0     15
```

Expected results:
The CLI should warn that the quota limit has been reached and prevent further creation.

Additional info:
Created attachment 1187825 [details] Exceeded quota
Update: The problem occurs on OSE (openshift v3.3.0.14, kubernetes v1.3.0+57fb9ac, etcd 2.3.0+git). On Origin (openshift v1.3.0-alpha.2+89b7193, kubernetes v1.3.0+507d3a7, etcd 2.3.0+git), the problem can't be reproduced; Origin shows the correct warning:
```
Error from server: secrets "mysecret-2" is forbidden: Exceeded quota: crq, requested: secrets=1, used: secrets=10, limited: secrets=10
```
Are you running OSE from a config file? If so, can you provide the config? It's possible to specify a different set of admission plugins, and that can prevent new ones from taking effect.
This problem can't be reproduced in a non-HA environment but exists in HA (2 masters + 2 infra nodes + 2 nodes + 3 etcd). Attached master-config.yaml.
Created attachment 1188618 [details] master config
Ok, I suspect that you're using a different master-config.yaml in your HA and non-HA configurations. In the one you linked, you're specifying:

```yaml
admissionConfig:
  pluginOrderOverride:
  - NamespaceLifecycle
  - OriginPodNodeEnvironment
  - LimitRanger
  - ServiceAccount
  - SecurityContextConstraint
  - BuildDefaults
  - BuildOverrides
  - ResourceQuota
  - SCCExecRestrictions
  - AlwaysPullImages
```

That takes control of the admission chain. You should be getting a warning like "specified admission ordering is being phased out" in your log. Because the order is specified explicitly, you don't get new admission plugins, including "ClusterResourceQuota". You can add "ClusterResourceQuota" to the list, but you really shouldn't be specifying the chain at all. Did you have to do it for some reason? Was it set up that way automatically?
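If the override has to stay for now, one possible fix is to append "ClusterResourceQuota" to the list. This is only a sketch; the placement at the end of the chain is an assumption based on the advice in this thread, not a verified ordering requirement:

```yaml
admissionConfig:
  pluginOrderOverride:
  - NamespaceLifecycle
  - OriginPodNodeEnvironment
  - LimitRanger
  - ServiceAccount
  - SecurityContextConstraint
  - BuildDefaults
  - BuildOverrides
  - ResourceQuota
  - SCCExecRestrictions
  - AlwaysPullImages
  # Appended so the cluster-scoped quota admission plugin runs at all;
  # without it, ClusterResourceQuota objects are tracked but never enforced.
  - ClusterResourceQuota
```

The preferred fix, per the rest of this thread, is to drop pluginOrderOverride entirely so the default chain is used.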
Yes, QE's testing environment is set up by Jenkins. There are "openshift_master_kube_admission_plugin_order" and "openshift_master_kube_admission_plugin_config" entries in the "openshift_ansible_vars" options of the HA environment config template, but not in the non-HA config template.

Part of the Jenkins log from the HA setup job:

```
# The following parameters are used by openshift-ansible
openshift_master_kube_admission_plugin_order=["NamespaceLifecycle","OriginPodNodeEnvironment","LimitRanger","ServiceAccount","SecurityContextConstraint","BuildDefaults","BuildOverrides","ResourceQuota","SCCExecRestrictions","AlwaysPullImages"]
openshift_master_kube_admission_plugin_config={"RunOnceDuration":{"configuration":{"apiVersion":"v1","kind":"RunOnceDurationConfig","activeDeadlineSecondsOverride":"3600"}},"ClusterResourceOverride":{"configuration":{"apiVersion":"v1","kind":"ClusterResourceOverrideConfig","limitCPUToMemoryPercent":"200","cpuRequestToLimitPercent":"6","memoryRequestToLimitPercent":"60"}},"PodNodeConstraints":{"configuration":{"apiVersion":"v1","kind":"PodNodeConstraintsConfig"}},"BuildOverrides":{"configuration":{"apiVersion":"v1","kind":"BuildOverridesConfig","forcePull":True}}}
```

Captured master log:

```
Aug 09 02:56:16 ip-172-18-14-59.ec2.internal atomic-openshift-master-controllers[13217]: W0809 02:56:16.506089 13217 start_master.go:272] kubernetesMasterConfig.admissionConfig.pluginOrderOverride: Invalid value: ["NamespaceLifecycle","OriginPodNodeEnvironment","LimitRanger","ServiceAccount","SecurityContextConstraint","BuildDefaults","BuildOverrides","ResourceQuota","SCCExecRestrictions","AlwaysPullImages"]: specified admission ordering is being phased out. Convert to DefaultAdmissionConfig in admissionConfig.pluginConfig.
```

I think perhaps QE wrote incomplete ansible variables. The log says "Convert to DefaultAdmissionConfig in admissionConfig.pluginConfig".
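If the installer variable is kept at all, a hedged sketch of a corrected inventory line would append "ClusterResourceQuota" to the order list. The variable name is taken from the log above; whether appending at the end is the right placement is an assumption:

```
# Hypothetical corrected openshift-ansible inventory line (placement of
# ClusterResourceQuota at the end of the list is an assumption):
openshift_master_kube_admission_plugin_order=["NamespaceLifecycle","OriginPodNodeEnvironment","LimitRanger","ServiceAccount","SecurityContextConstraint","BuildDefaults","BuildOverrides","ResourceQuota","SCCExecRestrictions","AlwaysPullImages","ClusterResourceQuota"]
```

Simply removing the variable from the HA template, so the default chain is used, avoids the question entirely.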
The "DefaultAdmissionConfig" should be the same as the non-HA master-config, but the admission plugins above are still added to master-config. Doesn't the conversion happen?

HA:
```yaml
kubernetesMasterConfig:
  admissionConfig:
    pluginOrderOverride:
    - NamespaceLifecycle
    - OriginPodNodeEnvironment
    - LimitRanger
    - ServiceAccount
    - SecurityContextConstraint
    - BuildDefaults
    - BuildOverrides
    - ResourceQuota
    - SCCExecRestrictions
    - AlwaysPullImages
    pluginConfig:
      BuildOverrides:
        configuration:
          apiVersion: v1
          forcePull: true
          kind: BuildOverridesConfig
      ClusterResourceOverride:
        configuration:
          apiVersion: v1
          cpuRequestToLimitPercent: '6'
          kind: ClusterResourceOverrideConfig
          limitCPUToMemoryPercent: '200'
          memoryRequestToLimitPercent: '60'
      PodNodeConstraints:
        configuration:
          apiVersion: v1
          kind: PodNodeConstraintsConfig
      RunOnceDuration:
        configuration:
          activeDeadlineSecondsOverride: '3600'
          apiVersion: v1
          kind: RunOnceDurationConfig
```

Non-HA:
```yaml
kubernetesMasterConfig:
  admissionConfig:
    pluginConfig: {}
```

Attached files, hope these help.
Created attachment 1189189 [details] ha-master-config.yaml
Created attachment 1189190 [details] non-ha-master-config.yaml
Created attachment 1189192 [details] ha-atomic-openshift-master-controllers.log
Created attachment 1189193 [details] ha-atomic-openshift-master-api.log
> "DefaultAdmissionConfig" should be the same with Non-HA master-config, but
> these above admission plugins are still added into master-config.

Sorry, please ignore "these above admission plugins are still added into master-config". I meant: since it's an invalid configuration, the behavior should be the same as with "DefaultAdmissionConfig", but it doesn't seem to be converted to DefaultAdmissionConfig.
@Scott: are we encouraging people to set these admission values? @Qixuan Wang: You need to either add `ClusterResourceQuota` to the bottom of your list or you need to stop specifying the values. The current configuration is saying to *NOT* run the admission plugin that enforces quota.
(In reply to Qixuan Wang from comment #7)
> Yes QE's testing environment is setup by jenkins. There are
> "openshift_master_kube_admission_plugin_order" and
> "openshift_master_kube_admission_plugin_config" in "openshift_ansible_vars"
> options of HA environment config template but not in Non-HA config template.

Ok, that's an installer bug we should fix.

(In reply to David Eads from comment #13)
> @Scott: are we encouraging people to set these admission values?

Encourage, no, but we enable them to set admission plugin config. If they're shooting themselves in the foot, there's not much we can do about that.
@scott: I want to remove that knob from the master-config in two releases. What does it take to get there from here in ansible? We're combining the admission chains and we're providing a different on/off mechanism.
(In reply to David Eads from comment #15) > @scott: I want to remove that knob from the master-config in two releases. > What does it take to get there from here in ansible? > > We're combining the admission chains and we're providing a different on/off > mechanism. When the time comes, file an issue in openshift-ansible and link it to the origin PR that drops it from the config.
The ClusterResourceQuota admission plugin needs to be enabled. This can be done by adding to the list or by not specifying the list. Not specifying is preferred.
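Following the preferred option, a minimal sketch of the HA master-config after the fix: drop pluginOrderOverride entirely so the default admission chain (which includes ClusterResourceQuota) is used, and keep only the plugin configuration that is still needed. Plugin names are copied from the attached HA config; trimming the config down this far is illustrative, not required:

```yaml
kubernetesMasterConfig:
  admissionConfig:
    # No pluginOrderOverride: the default admission chain, including the
    # ClusterResourceQuota plugin, is used automatically.
    pluginConfig:
      BuildOverrides:
        configuration:
          apiVersion: v1
          kind: BuildOverridesConfig
          forcePull: true
```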
Adding "ClusterResourceQuota" in place of "ResourceQuota" gives the expected result. Thanks.