Bug 1830738
| Summary: | Sanlock fails to set up real time priority when running as service | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Nir Soffer <nsoffer> |
| Component: | sanlock | Assignee: | David Teigland <teigland> |
| Status: | CLOSED ERRATA | QA Contact: | cluster-qe <cluster-qe> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 8.2 | CC: | agk, cluster-maint, cmarthal |
| Target Milestone: | rc | Flags: | pm-rhel: mirror+ |
| Target Release: | 8.0 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | sanlock-3.8.2-1.el8 | Doc Type: | If docs needed, set a value |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2020-11-04 02:14:39 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
This is probably not related to timeouts in oVirt system tests mentioned in bug 1247135. We found that the issue is bug 1832967.

Setting the scheduler has simply been removed, so this is no longer a problem. It looks like the research we did on this issue several months ago never made it into bz, or there was some other bz it went into that I can't find. The conclusion was that setting the scheduler has not worked for many years, and since we didn't notice any problems without it, the setting was unnecessary. Based on Nir's research, and on talking with scheduler developers, we found that it's not really feasible to enable this even if we wanted to.

Fix verified in the rpms.
# old rpms
sanlock-3.8.1-1.el8 BUILT: Thu Jul 9 14:02:05 CDT 2020
sanlock-lib-3.8.1-1.el8 BUILT: Thu Jul 9 14:02:05 CDT 2020
Aug 13 13:38:52 host-073.virt.lab.msp.redhat.com systemd[1]: Starting Shared Storage Lease Manager...
Aug 13 13:38:53 host-073.virt.lab.msp.redhat.com systemd[1]: Started Shared Storage Lease Manager.
Aug 13 13:38:53 host-073.virt.lab.msp.redhat.com sanlock[4106]: 2020-08-13 13:38:53 405 [4106]: set scheduler RR|RESET_ON_FORK priority 99 failed: Operation not permitted
# new rpms
sanlock-3.8.2-1.el8 BUILT: Mon Aug 10 12:12:49 CDT 2020
sanlock-lib-3.8.2-1.el8 BUILT: Mon Aug 10 12:12:49 CDT 2020
[root@host-083 ~]# systemctl status sanlock
● sanlock.service - Shared Storage Lease Manager
Loaded: loaded (/usr/lib/systemd/system/sanlock.service; disabled; vendor preset: disabled)
Active: active (running) since Thu 2020-08-13 18:20:44 CDT; 1min 48s ago
Process: 3337 ExecStart=/usr/sbin/sanlock daemon (code=exited, status=0/SUCCESS)
Main PID: 3341 (sanlock)
Tasks: 6 (limit: 93971)
Memory: 14.7M
CGroup: /system.slice/sanlock.service
├─3341 /usr/sbin/sanlock daemon
└─3342 /usr/sbin/sanlock daemon
Aug 13 18:20:44 host-083.virt.lab.msp.redhat.com systemd[1]: Starting Shared Storage Lease Manager...
Aug 13 18:20:44 host-083.virt.lab.msp.redhat.com systemd[1]: Started Shared Storage Lease Manager.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (sanlock bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4595
Description of problem:

During startup sanlock tries to use real time priority, but this always fails:

2020-04-30 14:40:49 7 [903]: sanlock daemon started 3.8.0 host 68375b9c-ef41-4214-ab83-a170c2e07ca3.host4
2020-04-30 14:40:49 7 [903]: set scheduler RR|RESET_ON_FORK priority 99 failed: Operation not permitted

So sanlock runs with normal priority. This may cause delays and failures to renew leases.

Version-Release number of selected component (if applicable):
Tested with 3.8.0, but looking at git history this seems to be a very old issue.

How reproducible:
Always

Steps to Reproduce:
1. Start the service

Actual results:
# ps -o cmd,cls,rtprio -p 903
CMD                      CLS RTPRIO
/usr/sbin/sanlock daemon  TS      -

Expected results:
$ ps -o cmd,cls,rtprio -p 2275
CMD                      CLS RTPRIO
/usr/sbin/sanlock daemon  RR     99

This may be related to I/O timeouts we experience in oVirt system tests. See bug 1247135

Fix posted upstream:
https://lists.fedorahosted.org/archives/list/sanlock-devel@lists.fedorahosted.org/message/J64ENSHB6Q3AST4H3PDJARDNYLSTLWAF/
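For reference only, since the project ultimately removed the scheduler setting rather than grant the permission: under systemd, real-time scheduling for an unprivileged daemon is gated by `RLIMIT_RTPRIO` (and `CAP_SYS_NICE`), which a unit file can in principle raise. A hypothetical drop-in sketch, not something sanlock ships or recommends:

```ini
# /etc/systemd/system/sanlock.service.d/rtprio.conf (hypothetical drop-in)
[Service]
# Raise RLIMIT_RTPRIO so the daemon may request a real-time
# priority up to 99 without CAP_SYS_NICE.
LimitRTPRIO=99
```

Whether this would actually be safe or effective depends on kernel RT throttling and cgroup configuration; per the discussion above, the conclusion was that enabling real-time priority was not really feasible, so the call was dropped instead.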