Description of problem:

In a Deployment/DeploymentConfig, if the resources sections use mixed units, the `oc describe quota` output shows the value without any unit, which is misleading.

Suppose a DeploymentConfig has multiple containers with the following resources sections:

~~~
spec:
  containers:
  [..]
    name: container-1
    resources:
      limits:
        cpu: 330m
        memory: 666Mi
      requests:
        cpu: 50m
        memory: 400M
  [..]
    name: container-2
    resources:
      limits:
        cpu: 330m
        memory: 666Mi
      requests:
        cpu: 50m
        memory: 400Mi
  [..]
    name: container-3
    resources:
      limits:
        cpu: 330m
        memory: 666Mi
      requests:
        cpu: 50m
        memory: 400Mi
~~~

The project where this DeploymentConfig is deployed has a quota assigned. The describe quota output is shown as below:

~~~
# oc describe quota my-quota
Name:       my-quota
Namespace:  tests
Scopes:     NotTerminating
 * Matches all pods that do not have an active deadline. These pods usually include long running pods whose container command is not expected to terminate.
Resource         Used        Hard
--------         ----        ----
limits.cpu       990m        4
limits.memory    1998Mi      16Gi
requests.cpu     150m        500m
requests.memory  1238860800  4Gi    <========= Highlighting the problematic line
~~~

After editing the one requests.memory from M to Mi in the DeploymentConfig so that the units are uniform, the describe output changes to:

~~~
# oc describe quota my-quota
Name:       my-quota
Namespace:  tests
Scopes:     NotTerminating
 * Matches all pods that do not have an active deadline. These pods usually include long running pods whose container command is not expected to terminate.
Resource         Used    Hard
--------         ----    ----
limits.cpu       990m    4
limits.memory    1998Mi  16Gi
requests.cpu     150m    500m
requests.memory  1200Mi  4Gi
~~~

So developers can use any units in the resources section, and mixing units in a Deployment or DeploymentConfig causes the describe quota output to show the value without any unit, as highlighted above.

Version-Release number of selected component (if applicable):
OCP 4.7.6

How reproducible:
Always

Steps to Reproduce:
1.
2.
3.

Actual results:
No unit is displayed in the describe quota output when mixed units are used for resources in a Deployment or DeploymentConfig.

The same behaviour can be observed in the OCP console (Administrator view): Home page -> Administration -> ResourceQuotas -> Project-name -> check ResourceQuota Details.

It is also observed in the Dev console (Developer view): select the project where the quota is applied from the left-hand nav bar -> click on ResourceQuota -> check ResourceQuota Details.

Expected results:
If the describe quota output shows the value in bytes when mixed units are used for resources in a Deployment or DeploymentConfig, then it should be displayed with a 'bytes' unit, which is currently not shown. The unit should be displayed in all three places (1. command output, 2. Administrator console, 3. Developer console).

Additional info:
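For background on why the mixed-unit sum loses its suffix, here is a minimal sketch using the resource.Quantity API from k8s.io/apimachinery. This only illustrates Quantity arithmetic; the summation order shown here is an assumption, not the actual quota controller code path.

~~~
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Sum of the three containers' memory requests: 400M + 400Mi + 400Mi.
	sum := resource.MustParse("400M") // decimal suffix -> DecimalSI format
	sum.Add(resource.MustParse("400Mi"))
	sum.Add(resource.MustParse("400Mi"))

	// The receiver keeps its decimal format, and 1238860800 is not an even
	// multiple of 1000, so the canonical string is printed as plain bytes.
	fmt.Println(sum.String()) // 1238860800
}
~~~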
This cannot be fixed entirely, since there are cases where both decimal and binary formats could be used. We decided it would be best to always use the unit format that is used by the ResourceQuota itself. In your example that is the binary format, so the value would actually be shown as 1209825Ki.
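For illustration, a minimal sketch of that idea (an assumption of the approach, not the actual PR code): re-render the used quantity in the format of the quota's own hard limit.

~~~
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Hypothetical values mirroring the example above.
	used := resource.MustParse("1238860800") // used requests.memory, no suffix
	hard := resource.MustParse("4Gi")        // hard limit defined on the ResourceQuota

	// Re-express the used value in the hard limit's (binary) format.
	converted := resource.NewQuantity(used.Value(), hard.Format)
	fmt.Println(converted.String()) // 1209825Ki
}
~~~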
I posted 2 PRs for the kube CLI and API. But there will need to be another PR for ClusterResourceQuota in openshift once the kube API PR gets merged.
Correction: another two PRs in openshift are needed: one in openshift/oc and one in openshift/kubernetes.
The upstream PR merged and I created a PR for ClusterResourceQuota. For the purpose of this bug we don't have to wait for the API fix, since it is independent; the CLI fix should suffice.
It needs to have mixed units as the OP mentioned: 400M in the first container and 400Mi in a second one.
We could still reproduce this issue:

~~~
[root@localhost ~]# oc get quota
NAME                AGE   REQUEST                                                           LIMIT
compute-resources   26m   pods: 1/4, requests.cpu: 100m/4, requests.memory: 819430400/8Gi   limits.cpu: 660m/4, limits.memory: 1332Mi/8Gi

[root@localhost ~]# oc describe quota compute-resources
Name:            compute-resources
Namespace:       zhouyt
Resource         Used       Hard
--------         ----       ----
limits.cpu       660m       4
limits.memory    1332Mi     8Gi
pods             1          4
requests.cpu     100m       4
requests.memory  819430400  8Gi

[root@localhost ~]# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.nightly-2021-07-14-204159   True        False         6h15m   Cluster version is 4.9.0-0.nightly-2021-07-14-204159

[root@localhost ~]# oc version
Client Version: 4.9.0-0.nightly-2021-07-14-204159
Server Version: 4.9.0-0.nightly-2021-07-14-204159

[root@localhost origin]# oc version --client -o yaml
clientVersion:
  buildDate: "2021-07-13T19:02:09Z"
  compiler: gc
  gitCommit: 25c20609d62e42c8e031b31ade5c2e6376e8334d
  gitTreeState: clean
  gitVersion: 4.9.0-202107131821.p0.git.25c2060.assembly.stream-25c2060
  goVersion: go1.16.4
  major: ""
  minor: ""
  platform: linux/amd64
releaseClientVersion: 4.9.0-0.nightly-2021-07-14-204159
~~~
That PR is extra, so that the fix works even with old clients. But I forgot that we have to wait for the kubernetes 1.22 release to bump the version in the oc client. I will update once that is in. One thing to note here is that the fix also applies to ClusterResourceQuota.
1.22 changes are in
Can't reproduce with the latest oc client:

~~~
[root@localhost home]# oc describe quota compute-resources
Name:            compute-resources
Namespace:       test1
Resource         Used      Hard
--------         ----      ----
limits.cpu       660m      4
limits.memory    1332Mi    8Gi
pods             1         4
requests.cpu     100m      4
requests.memory  800225Ki  8Gi

[root@localhost home]# oc describe po/hello-openshift-4-qd56w
Name:         hello-openshift-4-qd56w
Namespace:    test1
Priority:     0
Node:         ip-10-0-128-199.us-east-2.compute.internal/10.0.128.199
Start Time:   Mon, 09 Aug 2021 15:48:19 +0800
Labels:       deployment=hello-openshift-4
              deploymentconfig=hello-openshift
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.129.2.119"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.129.2.119"
                    ],
                    "default": true,
                    "dns": {}
                }]
              openshift.io/deployment-config.latest-version: 4
              openshift.io/deployment-config.name: hello-openshift
              openshift.io/deployment.name: hello-openshift-4
              openshift.io/scc: restricted
Status:       Running
IP:           10.129.2.119
IPs:
  IP:           10.129.2.119
Controlled By:  ReplicationController/hello-openshift-4
Containers:
  hello-openshift:
    Container ID:   cri-o://01e480829a1b72d78a0fb5c5bc646d1c66263cc86228c2e13aa0929b8285fdcd
    Image:          quay.io/openshifttest/hello-openshift-centos@sha256:b9e19f1d8f25059bd4ee8bfd2ec1a24ab4ffe9767622132d1b991edc4d2e0d8a
    Image ID:       quay.io/openshifttest/hello-openshift-centos@sha256:b9e19f1d8f25059bd4ee8bfd2ec1a24ab4ffe9767622132d1b991edc4d2e0d8a
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 09 Aug 2021 15:48:21 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     330m
      memory:  666Mi
    Requests:
      cpu:        50m
      memory:     400M
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jn6dm (ro)
  hello-openshift2:
    Container ID:   cri-o://51fb1297eb0b80dc29fa48596838a1721551091e626ff93ec7f57e019148d4f2
    Image:          quay.io/openshifttest/hello-openshift-fedora@sha256:5895ec9bbe97f8ca124a723c51116c9c76c51e4ae421ff1c5634a93b0dd1d357
    Image ID:       quay.io/openshifttest/hello-openshift-fedora@sha256:5895ec9bbe97f8ca124a723c51116c9c76c51e4ae421ff1c5634a93b0dd1d357
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 09 Aug 2021 15:48:22 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     330m
      memory:  666Mi
    Requests:
      cpu:        50m
      memory:     400Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jn6dm (ro)
~~~
Since this is a usability issue and the impact of this bug is very low, I think this will not get backported.
Reproduced the issue with the latest oc:

~~~
[root@localhost tmp]# oc describe quota compute-resources
Name:            compute-resources
Namespace:       zhouy
Resource         Used       Hard
--------         ----       ----
limits.cpu       180m       4
limits.memory    396Mi      8Gi
pods             3          4
requests.cpu     120m       4
requests.memory  245829120  8Gi

[root@localhost tmp]# oc version
Client Version: 4.9.0-202109020218.p0.git.96e95ce.assembly.stream-96e95ce
Server Version: 4.9.0-0.nightly-2021-08-31-123131
Kubernetes Version: v1.22.0-rc.0+1199c36
~~~
Can you please describe what the expected output is? Hints:
- Does the sum of e.g. requests.memory across all the pods correspond to the used requests.memory in the quota? Or are some other values off?
- 245829120 is not divisible by 1024, so bytes (no unit) is the highest unit that can be used (see the quick check below).
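A quick standalone check of that second hint, using the two values reported in this bug:

~~~
package main

import "fmt"

func main() {
	// Values taken from the two quota outputs above.
	for _, b := range []int64{819430400, 245829120} {
		if b%1024 == 0 {
			fmt.Printf("%d bytes = %dKi\n", b, b/1024) // 819430400 bytes = 800225Ki
		} else {
			// 245829120 % 1024 == 512, so it cannot be expressed as a whole
			// number of Ki and is printed as plain bytes.
			fmt.Printf("%d bytes is not a whole multiple of 1024 (remainder %d)\n", b, b%1024)
		}
	}
}
~~~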
Can we round off the value when it is not fully divisible by 1024? If not, can we mention this behaviour of the units displayed in the output in our quota documentation?
I do not think it is a good idea to round it, since the result would not be the correct value and it would be an incompatible change. In my opinion it is not necessary to document this, since it is not abnormal behaviour and complies with Kubernetes handling of units/values.
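To illustrate the precision loss with the value from the reproduction above: 245829120 bytes is 245829120 / 2^20 ≈ 234.43 Mi, so rounding to 234Mi would report 234 * 2^20 = 245366784 bytes, i.e. 462336 bytes less than what the pods actually request.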
Do you want to set it to ON_QA again, as the current status is Assigned?
I suppose we can, please assign this bug again if you feel we have forgotten something.
I agree with rounding off the value and showing a unit, e.g. requests.memory 800M; this would be more readable.
As I mentioned, I do not think this is a good idea. @maszulik could you please give your opinion on this?
(In reply to Filip Krepinsky from comment #26)
> as I mentioned I do not think this is a good idea
>
> @maszulik could you please give your opinion on this?

No rounding, leave it as is now.
updated the doc text
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:3759
When the value is divisible by 1024, it will be shown with a unit, e.g. 800225Ki.