Bug 1988252 - API: libpod/containers/<id>/stats for unprivileged mode unexpectedly succeeds and has wrong CPU and network numbers
Summary: API: libpod/containers/<id>/stats for unprivileged mode unexpectedly succeeds and has wrong CPU and network numbers
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: podman
Version: 8.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: beta
Target Release: ---
Assignee: Aditya R
QA Contact: atomic-bugs@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-07-30 07:18 UTC by Martin Pitt
Modified: 2023-09-18 00:29 UTC
CC List: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-01-30 07:27:44 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+



Description Martin Pitt 2021-07-30 07:18:20 UTC
Description of problem: The podman stack got updated recently in RHEL 8.5. This enabled the /stats API for unprivileged containers, possibly inadvertently.


Version-Release number of selected component (if applicable):

  conmon (2:2.0.27-1.module+el8.5.0+10387+8d85dbaf -> 2:2.0.29-1.module+el8.5.0+12014+438a5746)
  kernel (4.18.0-323.el8 -> 4.18.0-324.el8)
  podman (3.1.0-0.8.module+el8.5.0+10387+8d85dbaf -> 3.3.0-0.17.module+el8.5.0+12014+438a5746)
  runc (1.0.0-70.rc92.module+el8.5.0+10387+8d85dbaf -> 1.0.1-3.module+el8.5.0+12014+438a5746)

How reproducible: Always


Steps to Reproduce (as non-root):
1. Enable the API and start a container:

systemctl --user start podman.socket
podman run -it quay.io/libpod/busybox

2. Check usage stats on the CLI:

$ podman stats
Error: stats is not supported in rootless mode without cgroups v2
(this did not change)

3. Check usage stats via the API:

curl --unix-socket /run/user/1000/podman/podman.sock http://d/v1.24/libpod/containers/$(podman ps -q)/stats
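
For readability, the same request can be piped through jq if it is installed (purely optional; stream=false should return a single sample instead of a stream):

curl -s --unix-socket /run/user/1000/podman/podman.sock "http://d/v1.24/libpod/containers/$(podman ps -q)/stats?stream=false" | jq .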

Actual results:
With the previous versions this failed similarly to the CLI:

{"cause":"no support for CGroups V1 in rootless environments","message":"failed to obtain Container 283b58f272eb stats: unable to load cgroup at /user.slice/user-1000.slice/user/user.slice/podman-1565.scope/283b58f272ebd8ad68c647d6c50ac11ed871c30f17a0e61151c78c1d547d823a: no support for CGroups V1 in rootless environments","response":500}

With the current versions this now starts to work, but it reports a lot of wrong numbers:

{"read":"2021-07-30T02:59:55.76241298-04:00","preread":"2021-07-30T02:59:50.758203107-04:00","pids_stats":{"current":15},"blkio_stats":{"io_service_bytes_recursive":[],"io_serviced_recursive":null,"io_queue_recursive":null,"io_service_time_recursive":null,"io_wait_time_recursive":null,"io_merged_recursive":null,"io_time_recursive":null,"sectors_recursive":null},"num_procs":0,"storage_stats":{},"cpu_stats":{"cpu_usage":{"total_usage":0,"usage_in_kernelmode":0,"usage_in_usermode":0},"system_cpu_usage":8325461890,"cpu":0,"throttling_data":{"periods":0,"throttled_periods":0,"throttled_time":0}},"precpu_stats":{"cpu_usage":{"total_usage":0,"usage_in_kernelmode":0,"usage_in_usermode":0},"system_cpu_usage":8292281354,"cpu":0,"throttling_data":{"periods":0,"throttled_periods":0,"throttled_time":0}},"memory_stats":{"usage":48406528,"max_usage":9223372036854771712,"limit":9223372036854771712},"name":"festive_goldberg","Id":"7151de8987656745a6e937ce6b35d6cacd8200ae5e7c015b65f03e931138e9a0","networks":{"network":{"rx_bytes":0,"rx_packets":0,"rx_errors":0,"rx_dropped":0,"tx_bytes":0,"tx_packets":0,"tx_errors":0,"tx_dropped":0}}}


At least blkio_stats, num_procs, networks, and cpu_stats are always zero, and I suspect that with cgroups v1 there is not much chance of actually getting them. You can trigger some CPU usage with `dd if=/dev/urandom of=/dev/null`, but the numbers stay at 0. Same with network traffic via `wget`.
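
A minimal way to exercise this, assuming the busybox container from step 1 is still running and jq is available on the host:

CID=$(podman ps -q)
podman exec -d "$CID" dd if=/dev/urandom of=/dev/null   # keep one CPU busy inside the container
sleep 5
curl -s --unix-socket /run/user/1000/podman/podman.sock \
     "http://d/v1.24/libpod/containers/$CID/stats?stream=false" | jq '.cpu_stats.cpu_usage'
# total_usage remains 0 on the affected versions despite the dd load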

Only memory_stats.usage seems correct: when I run `MEMBLOB=$(yes | dd bs=1M count=200 iflag=fullblock)`, the usage does go up by ~300 MB.
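
Sketch of that check, reusing $CID from the previous snippet and allocating the memory inside the container (assuming busybox's dd understands iflag=fullblock):

curl -s --unix-socket /run/user/1000/podman/podman.sock \
     "http://d/v1.24/libpod/containers/$CID/stats?stream=false" | jq '.memory_stats.usage'
podman exec "$CID" sh -c 'MEMBLOB=$(yes | dd bs=1M count=200 iflag=fullblock); sleep 60' &
# re-running the first curl while the exec is still sleeping shows usage rising by roughly the MEMBLOB size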


Expected results: If stats reports numbers, they should be correct. This currently causes invalid numbers to be displayed in cockpit-podman.


Additional info:

Comment 1 Martin Pitt 2021-07-30 07:29:53 UTC
> {"usage":48406528

I also doubt that, TBH. In a fresh boot of my test VM, this starts out as "usage":153837568, i.e. 153 MB. The sole process in that busybox container is "sh", and on the host it shows 640 kB resident and 1.3 MB virtual memory:

USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
admin       1461  0.0  0.0   1328   644 pts/0    Ss+  03:26   0:00 sh

so this is much too high. The total used memory in that VM (according to free -h) is 226 MB.
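
One way to cross-check from the host (a sketch; podman top should accept the rss and vsz descriptors):

podman top "$(podman ps -q)" user pid rss vsz args   # sizes of the processes inside the container
free -h                                               # overall memory use in the VM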

Comment 2 Matthew Heon 2021-07-30 14:03:44 UTC
I think this is expected to work, but only on cgroups v2 systems. We likely messed up and enabled it for v1 at the same time. The code we use to determine which cgroup to read stats from is robust enough that it always finds a cgroup, but since you're on a v1 system the container does not have a dedicated cgroup, so I imagine we're getting the stats for the user's overall cgroup and reporting them as the stats for the individual container. The easiest fix is to disable the endpoint if we detect we're on v1; throwing a 500 seems appropriate, since we can't give sane responses.
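
For anyone reproducing this, a quick host-side check of which hierarchy is in use (not podman code, just a sanity check):

stat -fc %T /sys/fs/cgroup   # cgroup2fs on a cgroups v2 host, tmpfs on a v1 host such as default RHEL 8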

Comment 3 Martin Pitt 2021-07-30 14:12:14 UTC
> throwing a 500 seems appropriate

Matthew: Agreed - as I mentioned in the description, that's what the previous version did, and the CLI still does, so that would fix things. I don't think RHEL 8.Y will ever switch to cgroups v2 by default (or even support it)?

Comment 4 Tom Sweeney 2021-07-30 15:44:55 UTC
Jhon, can you take a peek at this please?

Comment 5 Matej Marušák 2022-01-19 10:06:53 UTC
In cockpit-podman we switched to the `libpod/containers/stats` endpoint, so we don't see this happening for us anymore. But I just checked with the current rhel-8-5 image and it is still affected.
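
For reference, that endpoint can be queried roughly like this (a sketch assuming the same rootless socket; the container IDs go into the containers query parameter):

curl -s --unix-socket /run/user/1000/podman/podman.sock \
     "http://d/v1.24/libpod/containers/stats?containers=$(podman ps -q)&stream=false"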

Comment 6 Jindrich Novy 2022-05-10 08:18:38 UTC
Can you please check again now, Matej?

Comment 7 Matej Marušák 2022-05-11 06:27:02 UTC
It is still happening. Has there been any work done on this?

kernel-4.18.0-348.el8.x86_64
podman-3.3.1-9.module+el8.5.0+12697+018f24d7.x86_64
runc-1.0.2-1.module+el8.5.0+12582+56d94c81.x86_64

Comment 8 Tom Sweeney 2022-05-11 12:32:55 UTC
@jhonce any thoughts?

Comment 12 RHEL Program Management 2023-01-30 07:27:44 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 13 Red Hat Bugzilla 2023-09-18 00:29:28 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

