Bug 2066145 - The results showed significant difference between with and without --no-stream option for podman stats
Summary: The results showed significant difference between with and without --no-stream option for podman stats
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: podman
Version: 8.6
Hardware: x86_64
OS: Linux
Severity: high
Priority: high
Target Milestone: rc
Target Release: ---
Assignee: Jindrich Novy
QA Contact: Alex Jia
URL:
Whiteboard:
Depends On:
Blocks: 2210138 2210139 2210140
 
Reported: 2022-03-21 04:23 UTC by Alex Jia
Modified: 2023-05-25 20:46 UTC
CC: 11 users

Fixed In Version: podman-4.1.0-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2210138 2210139 2210140 (view as bug list)
Environment:
Last Closed: 2022-11-08 09:15:10 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-116192 0 None None None 2022-03-21 04:26:02 UTC
Red Hat Product Errata RHSA-2022:7457 0 None None None 2022-11-08 09:16:16 UTC

Description Alex Jia 2022-03-21 04:23:34 UTC
Description of problem:
The results show a significant difference (more than 10x for CPU %) between running podman stats with and without the --no-stream option.

Version-Release number of selected component (if applicable):
[root@sweetpig-20 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.6 Beta (Ootpa)

[root@sweetpig-20 ~]# rpm -q podman runc systemd kernel
podman-4.0.2-2.module+el8.6.0+14488+6524fb7f.x86_64
runc-1.0.3-2.module+el8.6.0+14488+6524fb7f.x86_64
systemd-239-58.el8.x86_64
kernel-4.18.0-372.2.1.el8.x86_64

[root@sweetpig-20 ~]# grep cgroup /proc/mounts
tmpfs /sys/fs/cgroup tmpfs ro,seclabel,nosuid,nodev,noexec,size=3210160k,nr_inodes=802540,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,seclabel,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
cgroup /sys/fs/cgroup/rdma cgroup rw,seclabel,nosuid,nodev,noexec,relatime,rdma 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,seclabel,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,seclabel,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,seclabel,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,seclabel,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,seclabel,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,seclabel,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,seclabel,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,seclabel,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,seclabel,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,seclabel,nosuid,nodev,noexec,relatime,freezer 0 0

How reproducible:
always

Steps to Reproduce:
1. # podman run -it --rm --cpus 1.0 quay.io/libpod/alpine sh -c "dd if=/dev/zero of=/dev/null"
2. open two new terminal windows, then run the following commands at the same time

terminal window 1
# podman stats --no-reset

terminal window 2
# podman stats --no-reset --no-stream (run multiple times)
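
The repeated one-shot sampling in terminal window 2 can be wrapped in a small script. This is a sketch, not part of the original report: the `parse_cpu` helper name is made up, and the script assumes the container from step 1 is already running.

```shell
#!/bin/sh
# Extract the CPU % column (3rd field) from the first data row of
# `podman stats` output; the helper name is hypothetical.
parse_cpu() {
    # skip the header line, print field 3 of the first data row
    awk 'NR == 2 { print $3 }'
}

# Take several one-shot samples one second apart. With the bug
# present these readings stay near 1-2% while the streaming window
# shows the container pinned near 100%.
if command -v podman >/dev/null 2>&1; then
    for i in 1 2 3 4 5; do
        podman stats --no-reset --no-stream | parse_cpu
        sleep 1
    done
fi
```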

Actual results:

[root@sweetpig-20 ~]# podman run -it --rm --cpus 1.0 quay.io/libpod/alpine sh -c "dd if=/dev/zero of=/dev/null"
Trying to pull quay.io/libpod/alpine:latest...
Getting image source signatures
Copying blob 9d16cba9fb96 done
Copying config 9617696764 done
Writing manifest to image destination
Storing signatures

terminal window 1

[root@sweetpig-20 ~]# podman stats --no-reset
ID            NAME                 CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME       AVG CPU %
68e759c90c62  eloquent_cartwright  1.14%       282.6kB / 6.574GB  0.00%       698B / 1.274kB  -- / --     1           18.759893958s  1.14%
ID            NAME                 CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME       AVG CPU %
68e759c90c62  eloquent_cartwright  94.31%      282.6kB / 6.574GB  0.00%       698B / 1.274kB  -- / --     1           23.492783608s  47.16%
ID            NAME                 CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME       AVG CPU %
68e759c90c62  eloquent_cartwright  97.55%      282.6kB / 6.574GB  0.00%       768B / 1.344kB  -- / --     1           28.380240529s  63.95%
ID            NAME                 CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME       AVG CPU %
68e759c90c62  eloquent_cartwright  97.01%      282.6kB / 6.574GB  0.00%       768B / 1.344kB  -- / --     1           33.235709607s  72.22%
ID            NAME                 CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME       AVG CPU %
68e759c90c62  eloquent_cartwright  94.63%      282.6kB / 6.574GB  0.00%       768B / 1.344kB  -- / --     1           37.977983835s  76.70%

terminal window 2

[root@sweetpig-20 ~]# podman stats --no-reset --no-stream
ID            NAME                 CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME       AVG CPU %
68e759c90c62  eloquent_cartwright  1.29%       282.6kB / 6.574GB  0.00%       698B / 1.274kB  -- / --     1           21.325221865s  1.29%
[root@sweetpig-20 ~]# podman stats --no-reset --no-stream
ID            NAME                 CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME       AVG CPU %
68e759c90c62  eloquent_cartwright  2.22%       282.6kB / 6.574GB  0.00%       768B / 1.344kB  -- / --     1           36.613540323s  2.22%
[root@sweetpig-20 ~]# podman stats --no-reset --no-stream
ID            NAME                 CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME       AVG CPU %
68e759c90c62  eloquent_cartwright  2.33%       282.6kB / 6.574GB  0.00%       768B / 1.344kB  -- / --     1           38.338701519s  2.33%
[root@sweetpig-20 ~]# podman stats --no-reset --no-stream
ID            NAME                 CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME       AVG CPU %
68e759c90c62  eloquent_cartwright  2.41%       282.6kB / 6.574GB  0.00%       768B / 1.344kB  -- / --     1           39.716167803s  2.41%
[root@sweetpig-20 ~]# podman stats --no-reset --no-stream
ID            NAME                 CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME       AVG CPU %
68e759c90c62  eloquent_cartwright  2.50%       282.6kB / 6.574GB  0.00%       768B / 1.344kB  -- / --     1           41.241137177s  2.50%

Expected results:
The CPU % reported by podman stats --no-stream should match the streaming output (close to 100% for this workload).

Additional info:

Other questions:

1. Is the jump from 1.14% to 94.31% expected?

[root@sweetpig-20 ~]# podman stats --no-reset
ID            NAME                 CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME       AVG CPU %
68e759c90c62  eloquent_cartwright  1.14%       282.6kB / 6.574GB  0.00%       698B / 1.274kB  -- / --     1           18.759893958s  1.14%
ID            NAME                 CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME       AVG CPU %
68e759c90c62  eloquent_cartwright  94.31%      282.6kB / 6.574GB  0.00%       698B / 1.274kB  -- / --     1           23.492783608s  47.16%
ID            NAME                 CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME       AVG CPU %
68e759c90c62  eloquent_cartwright  97.55%      282.6kB / 6.574GB  0.00%       768B / 1.344kB  -- / --     1           28.380240529s  63.95%

2. Sometimes CPU % shows a '--' value with the --no-stream option. Is that an acceptable result?

[root@sweetpig-20 ~]# podman stats --no-reset --no-stream --format "table {{.CPUPerc}}"
CPU %
--

3. CPU % does not show the real value with the --no-stream option when the container runs with --cpu-quota and --cpu-period.

# podman run --cpu-quota=10000 --cpu-period=100000 --rm quay.io/libpod/alpine sh -c "dd if=/dev/zero of=/dev/null"  
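
With --cpu-quota=10000 and --cpu-period=100000 the container is allowed quota/period = 10% of one CPU, so both the streaming and the one-shot readings should hover around 10%. As a quick arithmetic check (values taken from the command above):

```shell
quota=10000    # microseconds of CPU time allowed per period
period=100000  # length of one scheduling period, in microseconds
# Expected ceiling for CPU %: quota / period * 100
expected=$(awk -v q="$quota" -v p="$period" 'BEGIN { printf "%.0f", q / p * 100 }')
echo "${expected}%"   # 10%
```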

3.1 It works without the --no-stream option (CPU % is about 10%):

[root@sweetpig-20 ~]# podman stats --no-reset
ID            NAME            CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME      AVG CPU %
bc52032a1f0a  clever_solomon  8.94%       270.3kB / 6.574GB  0.00%       698B / 1.454kB  -- / --     1           1.473022283s  8.94%
ID            NAME            CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME      AVG CPU %
bc52032a1f0a  clever_solomon  10.18%      270.3kB / 6.574GB  0.00%       698B / 1.454kB  -- / --     1           1.982165126s  5.09%
ID            NAME            CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME      AVG CPU %
bc52032a1f0a  clever_solomon  9.83%       270.3kB / 6.574GB  0.00%       698B / 1.454kB  -- / --     1           2.473707418s  6.67%
ID            NAME            CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME      AVG CPU %

3.2 It does not work with the --no-stream option (CPU % is about 1%):

[root@sweetpig-20 ~]# podman stats --no-reset --no-stream
ID            NAME            CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME      AVG CPU %
bc52032a1f0a  clever_solomon  1.27%       270.3kB / 6.574GB  0.00%       698B / 1.454kB  -- / --     1           2.093466614s  1.27%
[root@sweetpig-20 ~]# podman stats --no-reset --no-stream
ID            NAME            CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME      AVG CPU %
bc52032a1f0a  clever_solomon  1.33%       270.3kB / 6.574GB  0.00%       698B / 1.454kB  -- / --     1           2.193105013s  1.33%
[root@sweetpig-20 ~]# podman stats --no-reset --no-stream
ID            NAME            CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME      AVG CPU %
bc52032a1f0a  clever_solomon  1.39%       270.3kB / 6.574GB  0.00%       698B / 1.454kB  -- / --     1           2.293010695s  1

Comment 2 Paul Holzinger 2022-03-22 14:08:56 UTC
I can reproduce this with the main branch on Fedora with cgroups v2.

I think there is something wrong with how we calculate this; even with streaming, the first entry is always wrong, and it only corrects itself from the second update onward.
I'll take a deeper look at this.
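
[Editorial note] The symptom described here (the first reading is always wrong, later streaming readings are correct) is consistent with a calculation that divides cumulative CPU time by the wrong time base instead of using the delta between two samples. A delta-based calculation, sketched in shell with made-up sample values rather than live cgroup reads:

```shell
# Two samples of cumulative container CPU time (nanoseconds), taken
# one second apart. These numbers are invented for illustration.
usage_t0=18759893958      # cumulative ns at the first sample
usage_t1=19754893958      # cumulative ns one second later
interval_ns=1000000000    # wall-clock time between the two samples

# CPU % must come from the delta between the two samples divided by
# the sampling interval, not from a single cumulative reading.
cpu_pct=$(awk -v a="$usage_t0" -v b="$usage_t1" -v i="$interval_ns" \
    'BEGIN { printf "%.1f", (b - a) / i * 100 }')
echo "${cpu_pct}%"   # 99.5%
```

This matches the streaming behavior, where the second and later updates (which do have a previous sample to diff against) report sensible values.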

Comment 3 Paul Holzinger 2022-03-22 16:46:38 UTC
I found the problem.
Upstream PR: https://github.com/containers/podman/pull/13597

Comment 7 Alex Jia 2022-05-13 16:20:36 UTC
This bug has been verified on podman-4.1.0-2.module+el8.7.0+15223+3987d347.

1. terminal window 0
[root@ibm-x3650m4-01-vm-06 ~]# podman run -it --rm --cpus 1.0 quay.io/libpod/alpine sh -c "dd if=/dev/zero of=/dev/null"

1.1 terminal window 1
[root@ibm-x3650m4-01-vm-06 ~]# podman stats --no-reset
ID            NAME              CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME      AVG CPU %
751e176bf73d  jovial_blackwell  100.69%     679.9kB / 3.913GB  0.02%       542B / 1.166kB  811kB / 0B  1           4.142926274s  100.69%
ID            NAME              CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME      AVG CPU %
751e176bf73d  jovial_blackwell  99.21%      679.9kB / 3.913GB  0.02%       542B / 1.166kB  811kB / 0B  1           9.107257266s  99.88%
ID            NAME              CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME       AVG CPU %
751e176bf73d  jovial_blackwell  99.34%      679.9kB / 3.913GB  0.02%       612B / 1.236kB  811kB / 0B  1           14.077127056s  99.69%
ID            NAME              CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME       AVG CPU %
751e176bf73d  jovial_blackwell  99.19%      679.9kB / 3.913GB  0.02%       612B / 1.236kB  811kB / 0B  1           19.048322905s  99.56%
ID            NAME              CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME       AVG CPU %
751e176bf73d  jovial_blackwell  99.45%      679.9kB / 3.913GB  0.02%       612B / 1.236kB  811kB / 0B  1           24.023252085s  99.54%
...

1.2 terminal window 2
[root@ibm-x3650m4-01-vm-06 ~]# podman stats --no-reset --no-stream
ID            NAME              CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME        AVG CPU %
751e176bf73d  jovial_blackwell  99.37%      679.9kB / 3.913GB  0.02%       752B / 1.376kB  811kB / 0B  1           1m9.633900175s  99.37%
[root@ibm-x3650m4-01-vm-06 ~]# podman stats --no-reset --no-stream
ID            NAME              CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME         AVG CPU %
751e176bf73d  jovial_blackwell  99.38%      679.9kB / 3.913GB  0.02%       752B / 1.376kB  811kB / 0B  1           1m11.037917522s  99.38%
[root@ibm-x3650m4-01-vm-06 ~]# podman stats --no-reset --no-stream
ID            NAME              CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME        AVG CPU %
751e176bf73d  jovial_blackwell  99.37%      679.9kB / 3.913GB  0.02%       752B / 1.376kB  811kB / 0B  1           1m12.21020346s  99.37%
[root@ibm-x3650m4-01-vm-06 ~]# podman stats --no-reset --no-stream
ID            NAME              CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME         AVG CPU %
751e176bf73d  jovial_blackwell  99.33%      679.9kB / 3.913GB  0.02%       752B / 1.376kB  811kB / 0B  1           1m13.357360578s  99.33%
...

2. terminal window 0
[root@ibm-x3650m4-01-vm-06 ~]# podman run --cpu-quota=10000 --cpu-period=100000 --rm quay.io/libpod/alpine sh -c "dd if=/dev/zero of=/dev/null"

2.1 terminal window 1
[root@ibm-x3650m4-01-vm-06 ~]# podman stats --no-reset
ID            NAME         CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME      AVG CPU %
fd208775d6ae  eager_moser  11.56%      262.1kB / 3.913GB  0.01%       542B / 1.166kB  -- / --     1           444.519873ms  11.56%
ID            NAME         CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME      AVG CPU %
fd208775d6ae  eager_moser  10.00%      262.1kB / 3.913GB  0.01%       542B / 1.166kB  -- / --     1           945.007185ms  10.68%
ID            NAME         CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME      AVG CPU %
fd208775d6ae  eager_moser  9.99%       262.1kB / 3.913GB  0.01%       612B / 1.236kB  -- / --     1           1.444980016s  10.43%
ID            NAME         CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME      AVG CPU %
fd208775d6ae  eager_moser  9.99%       262.1kB / 3.913GB  0.01%       612B / 1.236kB  -- / --     1           1.944570535s  10.31%
ID            NAME         CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME      AVG CPU %
fd208775d6ae  eager_moser  9.99%       262.1kB / 3.913GB  0.01%       612B / 1.236kB  -- / --     1           2.444557334s  10.25%
...

2.2 terminal window 2
[root@ibm-x3650m4-01-vm-06 ~]# podman stats --no-reset --no-stream
ID            NAME         CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME      AVG CPU %
fd208775d6ae  eager_moser  10.35%      262.1kB / 3.913GB  0.01%       612B / 1.236kB  -- / --     1           1.744470989s  10.35%
[root@ibm-x3650m4-01-vm-06 ~]# podman stats --no-reset --no-stream
ID            NAME         CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME     AVG CPU %
fd208775d6ae  eager_moser  10.31%      262.1kB / 3.913GB  0.01%       612B / 1.236kB  -- / --     1           1.87441375s  10.31%
[root@ibm-x3650m4-01-vm-06 ~]# podman stats --no-reset --no-stream
ID            NAME         CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME      AVG CPU %
fd208775d6ae  eager_moser  10.33%      262.1kB / 3.913GB  0.01%       612B / 1.236kB  -- / --     1           2.005152873s  10.33%
[root@ibm-x3650m4-01-vm-06 ~]# podman stats --no-reset --no-stream
ID            NAME         CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME      AVG CPU %
fd208775d6ae  eager_moser  10.31%      262.1kB / 3.913GB  0.01%       612B / 1.236kB  -- / --     1           2.124787692s  10.31%
[root@ibm-x3650m4-01-vm-06 ~]# podman stats --no-reset --no-stream
ID            NAME         CPU %       MEM USAGE / LIMIT  MEM %       NET IO          BLOCK IO    PIDS        CPU TIME     AVG CPU %
fd208775d6ae  eager_moser  10.28%      262.1kB / 3.913GB  0.01%       612B / 1.236kB  -- / --     1           2.23966784s  10.28%
...

[root@ibm-x3650m4-01-vm-06 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.7 Beta (Ootpa)

[root@ibm-x3650m4-01-vm-06 ~]# rpm -q podman runc systemd kernel
podman-4.1.0-2.module+el8.7.0+15223+3987d347.x86_64
runc-1.0.3-6.module+el8.7.0+15223+3987d347.x86_64
systemd-239-58.el8.x86_64
kernel-4.18.0-387.el8.x86_64

Comment 9 errata-xmlrpc 2022-11-08 09:15:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: container-tools:rhel8 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7457

