Bug 2213267
| Summary: | filesystems mount and expire immediately | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Frank Sorenson <fsorenso> |
| Component: | autofs | Assignee: | Ian Kent <ikent> |
| Status: | CLOSED ERRATA | QA Contact: | Kun Wang <kunwan> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 8.7 | CC: | dostwal, dwysocha, pdwyer, xzhou |
| Target Milestone: | rc | Keywords: | Triaged |
| Target Release: | --- | Flags: | pm-rhel: mirror+ |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | autofs-5.1.4-109.el8 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | : | 2223252 2223506 (view as bug list) |
| Environment: | | | |
| Last Closed: | 2023-11-14 15:48:36 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2223252, 2223506 | | |
| Attachments: | Patch - fix expire retry looping | | |
Description Frank Sorenson 2023-06-07 16:56:49 UTC

It looks like all or most of the autofs managed mounts have a timeout of 3600:

/etc/autofs/auto_u on /u type autofs (rw,relatime,fd=180,pgrp=9853,timeout=3600,minproto=5,maxproto=5,indirect,pipe_ino=77207)

I can't find anywhere that sets that timeout; how is it set?

It looks like the timeout is set to an hour in /etc/sysconfig/autofs:
TIMEOUT=3600
We'll get debug-level logging from the customer.
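For anyone else chasing down where a timeout comes from, these are the usual places to check (paths as on this system; the per-map --timeout override is standard autofs behaviour):

# grep -i '^TIMEOUT' /etc/sysconfig/autofs    # daemon-wide default expire timeout, in seconds
# grep -- '--timeout' /etc/auto.master        # per-map override, if any
# grep ' autofs ' /proc/mounts                # effective value appears as timeout= in the mount options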
(In reply to Frank Sorenson from comment #6)
> It looks like the timeout is set to an hour in /etc/sysconfig/autofs:
>
> TIMEOUT=3600

Oh, yes, thought I looked there ...

> We'll get debug-level logging from the customer.

Thanks.

Okay, so the immediate expire appears to be because the customer is sending a USR1 to automount immediately after triggering the mount. What a difference having all the information makes... <sigh>
In this case, sending USR1 does appear to cause a problem where automount gets into a loop, constantly attempting to expire but failing to unmount (-EBUSY). This expire loop only occurs after getting USR1 when a mount is accessed (and still in use) from a mount namespace with propagation=slave.
/etc/auto.master:
/home1 /etc/auto.home1
/etc/auto.home1:
user1 -rw server:/homes/user1
# systemctl restart autofs.service
Trigger the mount and keep it busy inside a mount namespace with propagation=slave:
# unshare -m --propagation=slave /bin/bash -c "cd /home1/user1 ; sleep 999"
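As a sanity check (not part of the original report; paths per the maps above), findmnt can confirm that the copy of the mount seen inside the new namespace really is a slave:

# unshare -m --propagation=slave /bin/bash
# cd /home1/user1                               # trigger the mount inside the namespace
# findmnt -o TARGET,PROPAGATION /home1/user1    # should report slave propagation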
After each timeout period, autofs will attempt to unmount, failing with EBUSY:
Jun 17 14:15:02 vm23 automount[1770128]: st_expire: state 1 path /home1
Jun 17 14:15:02 vm23 automount[1770128]: expire_proc: exp_proc = 140602511918848 path /home1
Jun 17 14:15:02 vm23 automount[1770128]: expire_proc_indirect: expire /home1/user1
Jun 17 14:15:02 vm23 automount[1770128]: handle_packet: type = 4
Jun 17 14:15:02 vm23 automount[1770128]: handle_packet_expire_indirect: token 51582, name user1
Jun 17 14:15:02 vm23 automount[1770128]: expiring path /home1/user1
Jun 17 14:15:02 vm23 automount[1770128]: umount_multi: path /home1/user1 incl 1
Jun 17 14:15:02 vm23 automount[1770128]: umount_subtree_mounts: unmounting dir = /home1/user1
Jun 17 14:15:02 vm23 automount[1770128]: >> umount.nfs4: /home1/user1: device is busy
Jun 17 14:15:02 vm23 automount[1770128]: spawn_umount: umount failed with error code 16, retrying with the -f option
Jun 17 14:15:02 vm23 automount[1770128]: >> umount.nfs4: /home1/user1: device is busy
Jun 17 14:15:02 vm23 automount[1770128]: >> umount.nfs4: /home1/user1: device is busy
Jun 17 14:15:02 vm23 automount[1770128]: Unable to update the mtab file, /proc/mounts and /etc/mtab will differ
Jun 17 14:15:02 vm23 automount[1770128]: could not umount dir /home1/user1
Jun 17 14:15:02 vm23 automount[1770128]: couldn't complete expire of /home1/user1
Jun 17 14:15:02 vm23 automount[1770128]: dev_ioctl_send_fail: token = 51582
Jun 17 14:15:02 vm23 automount[1770128]: expire_proc_indirect: 1 remaining in /home1
Jun 17 14:15:02 vm23 automount[1770128]: expire_cleanup: got thid 140602511918848 path /home1 stat 1
Jun 17 14:15:02 vm23 automount[1770128]: expire_cleanup: sigchld: exp 140602511918848 finished, switching from 2 to 1
Jun 17 14:15:02 vm23 automount[1770128]: st_ready: st_ready(): state = 2 path /home1
I presume that the mount is seen as idle because the automount process is running in the default mount namespace, and the mount is kept busy by a process in a separate mount namespace with propagation=slave, rather than shared. As a result, the mount is seen as both idle and in-use.
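One way to see that disconnect directly (pids here match the automount log and the reproducer above): each mount namespace has its own mount entry, with its own mount ID, over the same NFS superblock, so an expire check based on automount's own namespace has no view of the user in the slave namespace:

# grep home1/user1 /proc/1770128/mountinfo                     # automount's mount namespace
# grep home1/user1 /proc/$(pgrep -f 'sleep 999')/mountinfo     # slave namespace: different mount ID, same NFS source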
The real kicker occurs when automount gets USR1:
# pkill -USR1 automount
autofs gets into a loop, trying to expire & unmount repeatedly:
Jun 17 14:39:59 vm23 automount[1770128]: do_notify_state: signal 10
Jun 17 14:39:59 vm23 automount[1770128]: master_notify_state_change: sig 10 switching /home1 from 1 to 3
Jun 17 14:39:59 vm23 automount[1770128]: st_prune: state 1 path /home1
Jun 17 14:39:59 vm23 automount[1770128]: expire_proc: exp_proc = 140602511918848 path /home1
Jun 17 14:39:59 vm23 automount[1770128]: expire_proc_indirect: expire /home1/user1
Jun 17 14:39:59 vm23 automount[1770128]: handle_packet: type = 4
Jun 17 14:39:59 vm23 automount[1770128]: handle_packet_expire_indirect: token 51593, name user1
Jun 17 14:39:59 vm23 automount[1770128]: expiring path /home1/user1
Jun 17 14:39:59 vm23 automount[1770128]: umount_multi: path /home1/user1 incl 1
Jun 17 14:39:59 vm23 automount[1770128]: umount_subtree_mounts: unmounting dir = /home1/user1
Jun 17 14:39:59 vm23 automount[1770128]: >> umount.nfs4: /home1/user1: device is busy
Jun 17 14:39:59 vm23 automount[1770128]: spawn_umount: umount failed with error code 16, retrying with the -f option
Jun 17 14:39:59 vm23 automount[1770128]: >> umount.nfs4: /home1/user1: device is busy
Jun 17 14:39:59 vm23 automount[1770128]: >> umount.nfs4: /home1/user1: device is busy
Jun 17 14:39:59 vm23 automount[1770128]: Unable to update the mtab file, /proc/mounts and /etc/mtab will differ
Jun 17 14:39:59 vm23 automount[1770128]: could not umount dir /home1/user1
Jun 17 14:39:59 vm23 automount[1770128]: couldn't complete expire of /home1/user1
Jun 17 14:39:59 vm23 automount[1770128]: dev_ioctl_send_fail: token = 51593
Jun 17 14:39:59 vm23 automount[1770128]: handle_packet: type = 4
Jun 17 14:39:59 vm23 automount[1770128]: handle_packet_expire_indirect: token 51594, name user1
Jun 17 14:39:59 vm23 automount[1770128]: expiring path /home1/user1
Jun 17 14:39:59 vm23 automount[1770128]: umount_multi: path /home1/user1 incl 1
Jun 17 14:39:59 vm23 automount[1770128]: umount_subtree_mounts: unmounting dir = /home1/user1
Jun 17 14:39:59 vm23 automount[1770128]: >> umount.nfs4: /home1/user1: device is busy
Jun 17 14:39:59 vm23 automount[1770128]: spawn_umount: umount failed with error code 16, retrying with the -f option
Jun 17 14:39:59 vm23 automount[1770128]: >> umount.nfs4: /home1/user1: device is busy
Jun 17 14:39:59 vm23 automount[1770128]: >> umount.nfs4: /home1/user1: device is busy
Jun 17 14:39:59 vm23 automount[1770128]: Unable to update the mtab file, /proc/mounts and /etc/mtab will differ
Jun 17 14:39:59 vm23 automount[1770128]: could not umount dir /home1/user1
Jun 17 14:39:59 vm23 automount[1770128]: couldn't complete expire of /home1/user1
Jun 17 14:39:59 vm23 automount[1770128]: dev_ioctl_send_fail: token = 51594
Jun 17 14:39:59 vm23 automount[1770128]: handle_packet: type = 4
Jun 17 14:39:59 vm23 automount[1770128]: handle_packet_expire_indirect: token 51595, name user1
Jun 17 14:39:59 vm23 automount[1770128]: expiring path /home1/user1
Jun 17 14:39:59 vm23 automount[1770128]: umount_multi: path /home1/user1 incl 1
Jun 17 14:39:59 vm23 automount[1770128]: umount_subtree_mounts: unmounting dir = /home1/user1
Jun 17 14:39:59 vm23 automount[1770128]: >> umount.nfs4: /home1/user1: device is busy
Jun 17 14:39:59 vm23 automount[1770128]: spawn_umount: umount failed with error code 16, retrying with the -f option
Jun 17 14:39:59 vm23 automount[1770128]: >> umount.nfs4: /home1/user1: device is busy
Jun 17 14:39:59 vm23 automount[1770128]: >> umount.nfs4: /home1/user1: device is busy
Jun 17 14:39:59 vm23 automount[1770128]: Unable to update the mtab file, /proc/mounts and /etc/mtab will differ
Jun 17 14:39:59 vm23 automount[1770128]: could not umount dir /home1/user1
Jun 17 14:39:59 vm23 automount[1770128]: couldn't complete expire of /home1/user1
Jun 17 14:39:59 vm23 automount[1770128]: dev_ioctl_send_fail: token = 51595
Jun 17 14:39:59 vm23 automount[1770128]: handle_packet: type = 4
Jun 17 14:39:59 vm23 automount[1770128]: handle_packet_expire_indirect: token 51596, name user1
Jun 17 14:39:59 vm23 automount[1770128]: expiring path /home1/user1
Jun 17 14:39:59 vm23 automount[1770128]: umount_multi: path /home1/user1 incl 1
Jun 17 14:39:59 vm23 automount[1770128]: umount_subtree_mounts: unmounting dir = /home1/user1
Jun 17 14:39:59 vm23 automount[1770128]: >> umount.nfs4: /home1/user1: device is busy
Jun 17 14:39:59 vm23 automount[1770128]: spawn_umount: umount failed with error code 16, retrying with the -f option
Jun 17 14:39:59 vm23 automount[1770128]: >> umount.nfs4: /home1/user1: device is busy
Jun 17 14:39:59 vm23 automount[1770128]: >> umount.nfs4: /home1/user1: device is busy
Jun 17 14:39:59 vm23 automount[1770128]: Unable to update the mtab file, /proc/mounts and /etc/mtab will differ
Jun 17 14:39:59 vm23 automount[1770128]: could not umount dir /home1/user1
Jun 17 14:39:59 vm23 automount[1770128]: couldn't complete expire of /home1/user1
Jun 17 14:39:59 vm23 automount[1770128]: dev_ioctl_send_fail: token = 51596
...
repeating until the filesystem can actually be unmounted.
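For what it's worth, the loop in the reproducer ends as soon as the mount stops being busy, e.g. by ending the process holding it:

# pkill -f 'sleep 999'    # release the mount; the next expire/umount attempt then succeeds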
So I suppose there may be two issues (though neither is the immediate expire for which this BZ was opened):
1) the kernel doesn't detect that the mount is still in-use in a mount namespace other than the one in which automount runs
2) after getting SIGUSR1, automount enters a loop where it repeatedly tries to expire & unmount a busy filesystem (should it try to unmount just once?)
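For context, my reading of the signal handling (consistent with the st_prune state transition in the log above): USR1 asks automount to attempt an immediate expire of unused mounts, while USR2 is a forced shutdown:

# pkill -USR1 automount    # prune: try to expire/umount unused mounts now
# pkill -USR2 automount    # forced shutdown: umount managed mounts and exit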
(In reply to Frank Sorenson from comment #8)
> [reproducer configs and first log excerpt snipped]
>
> I presume that the mount is seen as idle because the automount process is
> running in the default mount namespace, and the mount is kept busy by a
> process in a separate mount namespace with propagation=slave, rather than
> shared. As a result, the mount is seen as both idle and in-use.

Your presumption is accurate, except that we never want the propagation to be shared; specifically, we don't want mounts to propagate from the mount namespace back to the init (or root) mount namespace, that ends badly.

I'm pretty sure this is because the kernel function may_umount_tree(), used by the autofs expire, is not able to check usage of propagated mounts. So if the mount isn't in use in the root namespace it will be seen as not busy and will be selected for expire.

The problem with fixing this is that a brute force traversal of "all" of the mount namespace trees isn't acceptable for inclusion upstream, and that was rejected some time ago now. However, I have been working on this recently (stopped due to other demands) and I have code I believe is acceptable upstream and appears to function correctly, but I have a suspicion it doesn't quite work exactly as we need, so more testing needs to be done before I propose it upstream.

> The real kicker occurs when automount gets USR1:
>
> # pkill -USR1 automount
>
> autofs gets into a loop, trying to expire & unmount repeatedly:
> [log excerpt snipped]
> repeating until the filesystem can actually be unmounted.

This is unexpected, I'll need to reproduce it and work out what's going on.

> So I suppose there may be two issues (though neither is the immediate
> expire for which this BZ was opened):
>
> 1) the kernel doesn't detect that the mount is still in-use in a mount
> namespace other than the one in which automount runs

Correct, but more: the kernel doesn't know how to check them at all, so it can't check the last used time stamp either.

> 2) after getting SIGUSR1, automount enters a loop where it repeatedly tries
> to expire & unmount a busy filesystem (should it try to unmount just once?)

Yes, but it is expected that if the mount remains unused for a further timeout it will try and umount it again ... unfortunately ...

Ian

(In reply to Ian Kent from comment #9)
> > 2) after getting SIGUSR1, automount enters a loop where it repeatedly
> > tries to expire & unmount a busy filesystem (should it try to unmount
> > just once?)
>
> Yes, but it is expected that if the mount remains unused for a further
> timeout it will try and umount it again ... unfortunately ...

OK, I'm very much tempted to say let's just fix the kernel expire namespace check problem. It has needed to be fixed for ages, I have put quite a bit of effort into it over time which has got us something that's close, and we have a customer that needs it too, so it's worth spending a bit more time on it and trying to get it merged upstream.

Thing is, once the expire check is fixed, automount behaves as it should because the mount doesn't get selected for expire. That looping is due to an optimisation that was done a while back, with the assumption that the kernel expire check functions properly, so strictly speaking it's not a bug.

This behaviour might also occur during a forced shutdown (sig USR2), but in that case mounts should always be umounted, either as normal or lazy umounted, so that would actually be a different problem.

If we have trouble getting this change accepted upstream we could add a workaround (which I have also tested) while we wait for me to do whatever is needed for the change upstream.

I have to say, there is one patch which was sent to me by Al as a basis for what I needed to do. It makes things hugely simpler, but it is a fundamental change to the mount point reference counting, so it's cause to pause and consider the implications. OTOH I have used it a lot during testing without any side effects, so maybe I'm just being paranoid.

Ian

Created attachment 1973843 [details]
Patch - fix expire retry looping
I think I'll go with this, the expire improvement needs to go
upstream but it's likely to take a while.
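For anyone landing here later, a quick check for whether a system carries the fix (version per the "Fixed In Version" field above):

# rpm -q autofs    # fixed in autofs-5.1.4-109.el8 and later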
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (autofs bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7098