Before debugging this further, I'd like to know the following. Is the system where you observe the unexpected (thawing) state running with cgroup v2 enabled? What does your kernel command line look like?
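For reference, a minimal way to answer both questions on a RHEL 8 host (these commands are a suggestion for checking, not part of the original exchange):

# "cgroup2fs" means the unified (cgroup v2) hierarchy is mounted; "tmpfs" indicates the legacy v1 layout
stat -fc %T /sys/fs/cgroup/
# kernel command line; look for systemd.unified_cgroup_hierarchy=1
cat /proc/cmdline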
I started to investigate what is going on and found that we have a bug in the recently added freezer support when running on cgroup v1. I've already sent an upstream PR to fix the issue.
The issue is mostly cosmetic; however, the "FreezerState" property is visible in systemctl output whenever it is not set to its default value. Due to this bug, restart or reload actions will set it to "thawing". That may cause unnecessary confusion for users and hence potentially increase case volume. We should fix the issue in the 0day update.
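For context, a quick way to observe the cosmetic symptom (the unit name below is only an example, not taken from this comment):

# FreezerState is normally "running" when the unit has not been frozen
systemctl show -p FreezerState sanlock.service
# on an affected cgroup v1 system, a restart or reload leaves the property stuck at "thawing"
systemctl restart sanlock.service
systemctl show -p FreezerState sanlock.service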
(In reply to Gustavo Luiz Duarte from comment #19)
> Michal, could you please provide a build with the fix so that we can verify
> it in our environment?
Here are the test rpms (btw, this is the systemd build that will be released as the 8.3 0day update).
https://msekleta.fedorapeople.org/freezer-thawing-fix/
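A rough verification sketch using those packages (the dnf invocation and unit name are illustrative assumptions, not instructions from the comment above):

# upgrade to the test builds downloaded from the URL above
dnf upgrade ./systemd-*.rpm
systemctl daemon-reexec
# with the fix, FreezerState should remain "running" after a restart instead of showing "thawing"
systemctl restart sanlock.service
systemctl show -p FreezerState sanlock.service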
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory (Moderate: systemd security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2021:1611
Description of problem:

I'm not sure how I got these services into a "thaw" state, but when trying to freeze and thaw again, it appears that it's not even supported anyways.

[root@host-083 ~]# systemctl status sanlock
● sanlock.service - Shared Storage Lease Manager
   Loaded: loaded (/usr/lib/systemd/system/sanlock.service; disabled; vendor preset: disabled)
   Active: active (running) (thawing) since Thu 2020-08-13 16:58:24 CDT; 44min ago
  Process: 26236 ExecStart=/usr/sbin/sanlock daemon (code=exited, status=0/SUCCESS)
 Main PID: 26237 (sanlock)
    Tasks: 6 (limit: 93971)
   Memory: 14.2M
   CGroup: /system.slice/sanlock.service
           ├─26237 /usr/sbin/sanlock daemon
           └─26238 /usr/sbin/sanlock daemon

Aug 13 16:58:24 host-083.virt.lab.msp.redhat.com systemd[1]: Starting Shared Storage Lease Manager...
Aug 13 16:58:24 host-083.virt.lab.msp.redhat.com systemd[1]: Started Shared Storage Lease Manager.
Aug 13 16:58:24 host-083.virt.lab.msp.redhat.com sanlock[26237]: 2020-08-13 16:58:24 12371 [26237]: set scheduler RR|RESET_ON_FORK priority 99 failed: Operation not permitted

[root@host-083 ~]# systemctl status lvmlockd
● lvmlockd.service - LVM lock daemon
   Loaded: loaded (/usr/lib/systemd/system/lvmlockd.service; disabled; vendor preset: disabled)
   Active: active (running) (thawing) since Thu 2020-08-13 17:14:36 CDT; 28min ago
     Docs: man:lvmlockd(8)
 Main PID: 26874 (lvmlockd)
    Tasks: 3 (limit: 93971)
   Memory: 2.7M
   CGroup: /system.slice/lvmlockd.service
           └─26874 /usr/sbin/lvmlockd --foreground

Aug 13 17:14:36 host-083.virt.lab.msp.redhat.com systemd[1]: Starting LVM lock daemon...
Aug 13 17:14:36 host-083.virt.lab.msp.redhat.com lvmlockd[26874]: [D] creating /run/lvm/lvmlockd.socket
Aug 13 17:14:36 host-083.virt.lab.msp.redhat.com lvmlockd[26874]: 1597356876 lvmlockd started
Aug 13 17:14:36 host-083.virt.lab.msp.redhat.com systemd[1]: Started LVM lock daemon.

[root@host-083 ~]# systemctl freeze lvmlockd
Failed to freeze unit lvmlockd.service: Unit 'lvmlockd.service' does not support freezing.
[root@host-083 ~]# systemctl thaw lvmlockd
Failed to thaw unit lvmlockd.service: Unit 'lvmlockd.service' does not support freezing.

Version-Release number of selected component (if applicable):

kernel-4.18.0-232.el8                      BUILT: Mon Aug 10 02:17:54 CDT 2020
lvm2-2.03.09-5.el8                         BUILT: Wed Aug 12 15:51:50 CDT 2020
lvm2-libs-2.03.09-5.el8                    BUILT: Wed Aug 12 15:51:50 CDT 2020
lvm2-dbusd-2.03.09-5.el8                   BUILT: Wed Aug 12 15:49:44 CDT 2020
lvm2-lockd-2.03.09-5.el8                   BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-1.02.171-5.el8               BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-libs-1.02.171-5.el8          BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-event-1.02.171-5.el8         BUILT: Wed Aug 12 15:51:50 CDT 2020
device-mapper-event-libs-1.02.171-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
sanlock-3.8.1-1.el8                        BUILT: Thu Jul  9 14:02:05 CDT 2020
sanlock-lib-3.8.1-1.el8                    BUILT: Thu Jul  9 14:02:05 CDT 2020