Bug 1955292 - Describe quota output should show units [NEEDINFO]
Summary: Describe quota output should show units
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oc
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.9.0
Assignee: Filip Krepinsky
QA Contact: zhou ying
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-04-29 19:46 UTC by Aditya Deshpande
Modified: 2021-10-18 17:30 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: oc describe quota/clusterresourcequota showed inconsistent units in used memory.
Consequence: the output was harder to read and not predictable.
Fix: the used memory value now always uses the same prefix type (binary or decimal) as the hard memory value.
Result: the describe command shows predictable values.
Clone Of:
Environment:
Last Closed: 2021-10-18 17:30:14 UTC
Target Upstream Version:
adeshpan: needinfo? (yinzhou)




Links
System ID Private Priority Status Summary Last Updated
Github kubernetes kubernetes pull 102177 0 None closed kubectl: show consistent unit format in quota describe 2021-07-12 20:21:06 UTC
Github kubernetes kubernetes pull 102178 0 None open quota controller and admission: set consistent unit format in quota used 2021-05-20 14:27:55 UTC
Github openshift oc pull 882 0 None closed Bug 1955292: show consistent unit format in cluster resource quota describe 2021-07-13 17:20:49 UTC
Github openshift oc pull 890 0 None None None 2021-08-04 13:57:15 UTC
Red Hat Product Errata RHSA-2021:3759 0 None None None 2021-10-18 17:30:30 UTC

Description Aditya Deshpande 2021-04-29 19:46:44 UTC
Description of problem:
If the resources section in a deployment/deploymentConfig is written with mixed units, the describe quota output shows the used value without any unit, which is misleading.

Suppose, there are multiple containers and respective resources section in deploymentConfig. 
~~~
    spec:
      containers:
[..]
        name: container-1

        resources:
          limits:
            cpu: 330m
            memory: 666Mi
          requests:
            cpu: 50m
            memory: 400M

[..]
        name: container-2

        resources:
          limits:
            cpu: 330m
            memory: 666Mi
          requests:
            cpu: 50m
            memory: 400Mi
[..]
        name: container-3

        resources:
          limits:
            cpu: 330m
            memory: 666Mi
          requests:
            cpu: 50m
            memory: 400Mi
~~~ 


The project has quota assigned where above kind of deploymentConfig is deployed. Describe quota output will be shown as below:

~~~
# oc describe quota my-quota
Name:       my-quota
Namespace:  tests
Scopes:     NotTerminating
 * Matches all pods that do not have an active deadline. These pods usually include long running pods whose container command is not expected to terminate.
Resource         Used        Hard
--------         ----        ----
limits.cpu       990m        4
limits.memory    1998Mi      16Gi
requests.cpu     150m        500m
requests.memory  1238860800  4Gi     <========= Highlighting the problematic section
~~~

After changing the single requests.memory value from M to Mi so that all units in the deploymentConfig are uniform, the describe output changes as shown below.
~~~
# oc describe quota my-quota
Name:       my-quota
Namespace:  tests
Scopes:     NotTerminating
 * Matches all pods that do not have an active deadline. These pods usually include long running pods whose container command is not expected to terminate.
Resource         Used    Hard
--------         ----    ----
limits.cpu       990m    4
limits.memory    1998Mi  16Gi
requests.cpu     150m    500m
requests.memory  1200Mi  4Gi
~~~

So, because developers can use arbitrary units in the resources section, mixed units in a dc or deployment cause the describe quota output to show a raw value without any unit, as highlighted above.
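The raw number in the highlighted row can be reproduced with some quick arithmetic (illustrative only; M is the decimal megabyte suffix and Mi the binary mebibyte suffix):

```python
# Byte values of the decimal (SI) and binary (IEC) memory suffixes involved.
M = 1000 ** 2   # 1M  = 1,000,000 bytes
Mi = 1024 ** 2  # 1Mi = 1,048,576 bytes

# The three containers request 400M, 400Mi and 400Mi respectively.
used = 400 * M + 400 * Mi + 400 * Mi
print(used)       # 1238860800 -> the unitless value shown by `oc describe quota`

# The sum is not a whole number of Mi, so it cannot be printed as "...Mi".
print(used % Mi)  # 492544

# After changing 400M to 400Mi, every term is a whole number of Mi again:
print((3 * 400 * Mi) // Mi)  # 1200 -> shown as "1200Mi"
```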

Version-Release number of selected component (if applicable):
OCP 4.7.6

How reproducible:
Always

Steps to Reproduce:
1.
2.
3.

Actual results:
The proper unit is not displayed in the describe quota output when mixed resource units are used in a deployment or deploymentConfig.

Same thing can be observed in OCP console:(Administrator view)
Home page -> Administration -> ResourceQuotas -> Project-name -- check ResourceQuota Details.

Also, it is observed in Dev-console: (Developer View)
Dev console selecting project where quota is applied -> select project from left hand Nav bar -> Click on ResourceQuota -> check ResourceQuota Details

Expected results:
If mixed resource units in a deployment or deploymentConfig force the describe quota output to show a value in bytes, the value should be labeled with a 'bytes' unit, which is currently missing from the output.

In all three places (1. command output, 2. Administrator console, 3. Developer console) the unit should be displayed.

Additional info:

Comment 2 Filip Krepinsky 2021-05-20 14:30:42 UTC
This cannot be fixed entirely, since there are cases where both decimal and binary formats could be used. We decided it is best to always use the unit format of the ResourceQuota itself. In your example that would be the binary format, so the output would actually show 1209825Ki.
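The idea behind the fix can be sketched as follows (a hypothetical Python helper for illustration only — the actual formatting lives in k8s.io/apimachinery's resource.Quantity, in Go): when the quota's hard limit uses binary units, render the used byte count with the largest binary suffix that divides it exactly.

```python
def format_binary(n_bytes: int) -> str:
    """Render a byte count with the largest IEC (binary) suffix that
    divides it exactly; plain bytes if nothing larger fits exactly."""
    suffixes = ["", "Ki", "Mi", "Gi", "Ti"]
    i = 0
    while i + 1 < len(suffixes) and n_bytes != 0 and n_bytes % 1024 == 0:
        n_bytes //= 1024
        i += 1
    return f"{n_bytes}{suffixes[i]}"

# 400M + 400Mi + 400Mi from the report: divisible by 1024 but not by 1024^2,
# so Ki is the largest exact binary representation, as this comment says.
print(format_binary(1238860800))  # 1209825Ki
print(format_binary(8 * 1024**3))  # 8Gi
```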

Comment 3 Filip Krepinsky 2021-05-20 14:32:39 UTC
I posted two PRs, for the kube CLI and API. But another PR for ClusterResourceQuota in openshift will be needed once the kube API PR merges.

Comment 4 Filip Krepinsky 2021-05-20 14:59:48 UTC
Correction: another two PRs in openshift are needed, one in openshift/oc and one in openshift/kubernetes.

Comment 5 Filip Krepinsky 2021-07-12 20:23:23 UTC
The upstream PR merged, and I created a PR for ClusterResourceQuota.

For the purpose of this bug we don't have to wait for the API fix, since it is independent; the CLI fix should suffice.

Comment 8 Filip Krepinsky 2021-07-14 09:03:49 UTC
It needs mixed units as the OP mentioned: 400M in the first container and 400Mi in a second one.

Comment 9 zhou ying 2021-07-15 06:51:00 UTC
We can still reproduce this issue:
[root@localhost ~]# oc get quota 
NAME                AGE   REQUEST                                                           LIMIT
compute-resources   26m   pods: 1/4, requests.cpu: 100m/4, requests.memory: 819430400/8Gi   limits.cpu: 660m/4, limits.memory: 1332Mi/8Gi
[root@localhost ~]# oc describe  quota  compute-resources
Name:            compute-resources
Namespace:       zhouyt
Resource         Used       Hard
--------         ----       ----
limits.cpu       660m       4
limits.memory    1332Mi     8Gi
pods             1          4
requests.cpu     100m       4
requests.memory  819430400  8Gi
[root@localhost ~]# oc get clusterversion 
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.nightly-2021-07-14-204159   True        False         6h15m   Cluster version is 4.9.0-0.nightly-2021-07-14-204159
[root@localhost ~]# oc version 
Client Version: 4.9.0-0.nightly-2021-07-14-204159
Server Version: 4.9.0-0.nightly-2021-07-14-204159


[root@localhost origin]# oc version --client  -o yaml 
clientVersion:
  buildDate: "2021-07-13T19:02:09Z"
  compiler: gc
  gitCommit: 25c20609d62e42c8e031b31ade5c2e6376e8334d
  gitTreeState: clean
  gitVersion: 4.9.0-202107131821.p0.git.25c2060.assembly.stream-25c2060
  goVersion: go1.16.4
  major: ""
  minor: ""
  platform: linux/amd64
releaseClientVersion: 4.9.0-0.nightly-2021-07-14-204159

Comment 11 Filip Krepinsky 2021-07-15 08:37:03 UTC
That PR is extra, so that the fix works even on old clients.

But I forgot that we have to wait for the Kubernetes 1.22 release to bump the version in the oc client. I will update once that is in.

One thing to note here is that the fix also applies to ClusterResourceQuota.

Comment 12 Filip Krepinsky 2021-08-04 13:59:47 UTC
The 1.22 changes are in.

Comment 14 zhou ying 2021-08-09 07:50:51 UTC
Can't reproduce with the latest oc client:

[root@localhost home]# oc describe quota compute-resources
Name:            compute-resources
Namespace:       test1
Resource         Used      Hard
--------         ----      ----
limits.cpu       660m      4
limits.memory    1332Mi    8Gi
pods             1         4
requests.cpu     100m      4
requests.memory  800225Ki  8Gi
[root@localhost home]# oc describe po/hello-openshift-4-qd56w
Name:         hello-openshift-4-qd56w
Namespace:    test1
Priority:     0
Node:         ip-10-0-128-199.us-east-2.compute.internal/10.0.128.199
Start Time:   Mon, 09 Aug 2021 15:48:19 +0800
Labels:       deployment=hello-openshift-4
              deploymentconfig=hello-openshift
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.129.2.119"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.129.2.119"
                    ],
                    "default": true,
                    "dns": {}
                }]
              openshift.io/deployment-config.latest-version: 4
              openshift.io/deployment-config.name: hello-openshift
              openshift.io/deployment.name: hello-openshift-4
              openshift.io/scc: restricted
Status:       Running
IP:           10.129.2.119
IPs:
  IP:           10.129.2.119
Controlled By:  ReplicationController/hello-openshift-4
Containers:
  hello-openshift:
    Container ID:   cri-o://01e480829a1b72d78a0fb5c5bc646d1c66263cc86228c2e13aa0929b8285fdcd
    Image:          quay.io/openshifttest/hello-openshift-centos@sha256:b9e19f1d8f25059bd4ee8bfd2ec1a24ab4ffe9767622132d1b991edc4d2e0d8a
    Image ID:       quay.io/openshifttest/hello-openshift-centos@sha256:b9e19f1d8f25059bd4ee8bfd2ec1a24ab4ffe9767622132d1b991edc4d2e0d8a
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 09 Aug 2021 15:48:21 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     330m
      memory:  666Mi
    Requests:
      cpu:        50m
      memory:     400M
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jn6dm (ro)
  hello-openshift2:
    Container ID:   cri-o://51fb1297eb0b80dc29fa48596838a1721551091e626ff93ec7f57e019148d4f2
    Image:          quay.io/openshifttest/hello-openshift-fedora@sha256:5895ec9bbe97f8ca124a723c51116c9c76c51e4ae421ff1c5634a93b0dd1d357
    Image ID:       quay.io/openshifttest/hello-openshift-fedora@sha256:5895ec9bbe97f8ca124a723c51116c9c76c51e4ae421ff1c5634a93b0dd1d357
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 09 Aug 2021 15:48:22 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     330m
      memory:  666Mi
    Requests:
      cpu:        50m
      memory:     400Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jn6dm (ro)
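The fixed value in this paste checks out with some quick arithmetic (illustrative only): the pod's two containers request 400M and 400Mi, and the sum happens to be an exact number of KiB.

```python
M, Mi, Ki = 1000 ** 2, 1024 ** 2, 1024

# One pod, two containers: 400M (decimal) + 400Mi (binary).
used = 400 * M + 400 * Mi
print(used)        # 819430400 -> the pre-fix raw value from Comment 9
print(used % Ki)   # 0 -> exactly divisible by 1024
print(used // Ki)  # 800225 -> hence the fixed client prints "800225Ki"
```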

Comment 17 Filip Krepinsky 2021-08-20 07:53:46 UTC
Since this is a usability issue and the impact of this bug is very low, I think this will not get backported.

Comment 18 zhou ying 2021-09-02 12:24:35 UTC
Reproduced the issue with the latest oc:

[root@localhost tmp]# oc describe quota compute-resources
Name:            compute-resources
Namespace:       zhouy
Resource         Used       Hard
--------         ----       ----
limits.cpu       180m       4
limits.memory    396Mi      8Gi
pods             3          4
requests.cpu     120m       4
requests.memory  245829120  8Gi
[root@localhost tmp]# oc version 
Client Version: 4.9.0-202109020218.p0.git.96e95ce.assembly.stream-96e95ce
Server Version: 4.9.0-0.nightly-2021-08-31-123131
Kubernetes Version: v1.22.0-rc.0+1199c36

Comment 19 Filip Krepinsky 2021-09-02 12:52:03 UTC
Can you please describe what the expected output is?

Hints:
- Does the sum of e.g. requests.memory across all the pods correspond to the used requests.memory in the quota? Or are some other values off?
- 245829120 is not divisible by 1024, so bytes (no suffix) is the largest binary unit in which the value can be shown exactly.
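The divisibility claim in the second hint can be checked directly (illustrative arithmetic only):

```python
value = 245829120

# Not a whole number of KiB, so no exact "Ki" (or larger binary) form exists.
print(value % 1024)            # 512

# A decimal "M" form would need rounding too: 245 M plus 829120 leftover bytes.
print(divmod(value, 1000**2))  # (245, 829120)
```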

Comment 20 Aditya Deshpande 2021-09-13 01:16:08 UTC
Can we round off the value when it is not evenly divisible by 1024?
If not, can we mention this unit-display behaviour in our quota documentation?

Comment 21 Filip Krepinsky 2021-09-13 11:00:52 UTC
I do not think it is a good idea to round it, since the result would not be the correct value and it would be an incompatible change.

In my opinion it is not necessary to document this, since it is not abnormal behaviour and complies with Kubernetes' handling of units/values.

Comment 22 Aditya Deshpande 2021-09-14 02:48:03 UTC
Do you want to set it to ON_QA again, as the current status is Assigned?

Comment 23 Filip Krepinsky 2021-09-14 09:17:17 UTC
I suppose we can; please reassign this bug if you feel we have forgotten something.

Comment 25 zhou ying 2021-09-15 09:50:21 UTC
I agree with rounding off the value and showing a unit, e.g. requests.memory 800M; this would be more readable.

Comment 26 Filip Krepinsky 2021-09-15 10:06:54 UTC
As I mentioned, I do not think this is a good idea.

@maszulik@redhat.com could you please give your opinion on this?

Comment 27 Maciej Szulik 2021-09-20 13:56:33 UTC
(In reply to Filip Krepinsky from comment #26)
> as I mentioned I do not think this is a good idea
> 
> @maszulik@redhat.com could you please give your opinion on this?

No rounding, leave it as is now.

Comment 33 Filip Krepinsky 2021-09-27 11:52:26 UTC
Updated the doc text.

Comment 35 errata-xmlrpc 2021-10-18 17:30:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3759

