Bug 2089186 - restoring container fails once with cgroupv1, then succeeds
Summary: restoring container fails once with cgroupv1, then succeeds
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: criu
Version: 9.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Adrian Reber
QA Contact: Chao Ye
URL:
Whiteboard: CockpitTest
Depends On:
Blocks:
 
Reported: 2022-05-23 08:02 UTC by Marius Vollmer
Modified: 2023-09-25 14:48 UTC
CC List: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-25 14:48:22 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments (Terms of Use)
restore.log of first, failed attempt (5.82 KB, text/plain), attached 2022-05-23 08:02 UTC by Marius Vollmer
restore.log of second, successful attempt (65.45 KB, text/plain), attached 2022-05-23 08:03 UTC by Marius Vollmer


Links
Red Hat Issue Tracker RHEL-8498 (status: Migrated), last updated 2023-09-25 14:46:23 UTC
Red Hat Issue Tracker RHELPLAN-122739, last updated 2022-05-23 08:20:48 UTC

Description Marius Vollmer 2022-05-23 08:02:27 UTC
Created attachment 1882192 [details]
restore.log of first, failed attempt

Description of problem:

Restoring a checkpointed container fails under cgroup v1, then succeeds on the second try.

Version-Release number of selected component (if applicable):
podman-4.0.3-1.el9.x86_64
criu-3.15-13.el9.x86_64
crun-1.4.5-2.el9.x86_64
runc-1.0.3-4.el9.x86_64

How reproducible:
Always

Steps to Reproduce:
1. grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
2. reboot (a quick check that the host actually came back in cgroup v1 mode is sketched after these steps)
3. podman run -dit --name swamped-crate busybox:latest sh
114ca3e855150cdb2595341011baaf906d83dfbb2c9d3bea5fb0098a799ef40c
4. podman container checkpoint swamped-crate
114ca3e855150cdb2595341011baaf906d83dfbb2c9d3bea5fb0098a799ef40c
5. podman container restore swamped-crate
Error: OCI runtime error: crun: CRIU restoring failed -52.  Please check CRIU logfile /var/lib/containers/storage/overlay-containers/114ca3e855150cdb2595341011baaf906d83dfbb2c9d3bea5fb0098a799ef40c/userdata/restore.log
6. podman container restore swamped-crate
114ca3e855150cdb2595341011baaf906d83dfbb2c9d3bea5fb0098a799ef40c
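
To confirm the host really came back in cgroup v1 mode after step 2, the filesystem type of /sys/fs/cgroup can be checked (a generic sketch, not part of the original report; "tmpfs" indicates cgroup v1 or hybrid mode, while a pure cgroup v2 host prints "cgroup2fs"):

# stat -fc %T /sys/fs/cgroup
tmpfs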

Actual results:

"podman container restore" fails on the first try, then succeeds.

Expected results:

It should succeed on the first try.

Additional info:

The ultimate error is this:

(00.004812) cg: Restoring cgroup property value [9223372036854771712] to [memory/machine.slice/libpod-10150e8cd8438787ec0462945a871045d06c24574c81e1e4b7f1db70d69ac4a2.scope/memory.kmem.limit_in_bytes]
(00.004827) Error (criu/cgroup.c:1360): cg: Failed writing 9223372036854771712 to memory/machine.slice/libpod-10150e8cd8438787ec0462945a871045d06c24574c81e1e4b7f1db70d69ac4a2.scope/memory.kmem.limit_in_bytes: Operation not supported
(00.004833) Error (criu/cgroup.c:1629): cg: Restoring memory.kmem.limit_in_bytes special property failed
(00.004835) Error (criu/cgroup.c:1690): cg: Restoring special cpuset props failed!
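
For context, the value being restored is not a real limit: 9223372036854771712 is LLONG_MAX rounded down to the 4096-byte page size, which is how the kernel reports "no limit" in this file. A quick arithmetic check (my own illustration, not from the original report):

# echo $((9223372036854775807 / 4096 * 4096))
9223372036854771712

So CRIU is failing while writing back an "unlimited" kernel-memory limit, which suggests the kernel is rejecting any write to memory.kmem.limit_in_bytes (kernel-memory accounting limits were deprecated upstream), rather than objecting to this particular value.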

Comment 1 Marius Vollmer 2022-05-23 08:03:22 UTC
Created attachment 1882193 [details]
restore.log of second, successful attempt

Comment 2 Marius Vollmer 2022-05-23 08:06:23 UTC
Also, after the failed restore:

# cat /sys/fs/cgroup/memory/machine.slice/libpod-114ca3e855150cdb2595341011baaf906d83dfbb2c9d3bea5fb0098a799ef40c.scope/memory.kmem.limit_in_bytes 
9223372036854771712

So the file already contains the value that podman tries to write into it during the restore.
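
If the kernel is simply rejecting the write, writing the same value back by hand should fail too (a test sketch of mine, not from the original report; the expectation is inferred from the "Operation not supported" error in restore.log above):

# echo 9223372036854771712 > /sys/fs/cgroup/memory/machine.slice/libpod-114ca3e855150cdb2595341011baaf906d83dfbb2c9d3bea5fb0098a799ef40c.scope/memory.kmem.limit_in_bytes

If the kernel has dropped support for kernel-memory limits, this write is expected to fail with EOPNOTSUPP even though the value is unchanged.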

Comment 4 Daniel Walsh 2022-05-24 03:54:23 UTC
Giuseppe any ideas?

Comment 5 Charlie Doern 2022-05-24 14:15:13 UTC
It seems that on the first try, the container is trying to set memory limits, which isn't allowed, but somehow knows not to set them on the second run. The first, failed restore probably modifies something in the config/spec that should have been set from the get-go.
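
One way to check that theory (a sketch of mine; the jq query assumes the standard OCI runtime spec layout, where the kernel-memory limit lives under .linux.resources.memory.kernel) would be to compare the generated spec before and after the failed restore:

# jq '.linux.resources.memory' /var/lib/containers/storage/overlay-containers/114ca3e855150cdb2595341011baaf906d83dfbb2c9d3bea5fb0098a799ef40c/userdata/config.json

If the kernel-memory limit shows up only in the spec used for the first restore attempt, that would line up with the second attempt succeeding.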

Comment 6 Giuseppe Scrivano 2022-08-24 10:37:57 UTC
After investigation, the error is coming from libcriu. Re-assigning for further triage.

Comment 7 RHEL Program Management 2023-09-25 14:43:19 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 8 RHEL Program Management 2023-09-25 14:48:22 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of two footprints next to it and begin with "RHEL-" followed by an integer. You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues@redhat.com. You can also visit https://access.redhat.com/articles/7032570 for general account information.

