Bug 1753901
Summary: | ioprocess - Implement block size detection compatible with Gluster storage | |
---|---|---|---
Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Sahina Bose <sabose>
Component: | ioprocess | Assignee: | Nir Soffer <nsoffer>
Status: | CLOSED ERRATA | QA Contact: | bipin <bshetty>
Severity: | urgent | Docs Contact: |
Priority: | urgent | |
Version: | 4.3.6 | CC: | aefrat, kdhananj, nsoffer, rdlugyhe, sasundar, tnisan, vjuranek
Target Milestone: | ovirt-4.3.6 | Keywords: | ZStream
Target Release: | 4.3.6 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | ioprocess-1.3.0 | Doc Type: | Enhancement
Doc Text: | The current release provides an API to probe the block size of the underlying filesystem. The vdsm package needs this API to support 4k storage on gluster. | |
Story Points: | --- | |
Clone Of: | | Environment: |
Last Closed: | 2019-10-10 15:39:40 UTC | Type: | ---
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1751722, 1753898 | |
Attachments: | | |
Description
Sahina Bose
2019-09-20 08:06:01 UTC
According to https://bugzilla.redhat.com/show_bug.cgi?id=1751722#c13, the issue is caused by multiple hosts trying to detect the block size using the same file (__DIRECT_IO_TEST__). This is expected to work on a POSIX filesystem, but it breaks gluster metadata management.

Block size detection is now implemented in vdsm, so ioprocess does not need any change. However, implementing block size detection in a way that is compatible with existing Gluster is best done in ioprocess, so I'm using this bug for the fix in ioprocess.

Created attachment 1617335 [details]
stress test for verifying with gluster storage
To verify:
- Install python2-ioprocess on all hosts
- Mount the same gluster volume on all hosts
- Run the attached test script on all hosts for a couple of minutes

$ python probe_block_size_stress.py /mount/path
512
512
...

Expected results:
- The script outputs the same block size on all hosts
- No errors in the script's output
- No leftover hidden files in the gluster mountpoint
  (e.g. ".probe-xxx-yyy")
- The gluster fuse mount does not crash
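For context, the verification flow above boils down to a loop like the following minimal sketch. It assumes the ioprocess Python binding exposes the new probing call as IOProcess.probe_block_size(); the method name, constructor defaults, and sleep interval are illustrative, and the actual probe_block_size_stress.py attachment may differ:

import sys
import time

from ioprocess import IOProcess

def main():
    path = sys.argv[1]
    iop = IOProcess()  # constructor arguments omitted; library defaults assumed
    while True:
        # probe_block_size() is an assumed name for the new ioprocess 1.3.0 API.
        print(iop.probe_block_size(path))
        sys.stdout.flush()  # flush so results are visible promptly under nohup
        time.sleep(1)

if __name__ == "__main__":
    main()

The script is expected to run until killed, which matches the nohup-based usage described in the comments below.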
Created attachment 1617906 [details]
stress test for verifying with gluster storage - improved
This version flushes output so it is easy to run with nohup.
Created attachment 1617907 [details]
yum repo for testing ioprocess 1.3.0-1 build
I'm testing now on Avihai's setup - 3 hosts connected to gluster storage.

On each host:
1. Copy ioprocess.repo (attachment 1617907 [details]) to /etc/yum.repos.d/
2. yum upgrade ioprocess python2-ioprocess
3. Copy probe_block_size_stress.py (attachment 1617906 [details]) to /root/
4. Run:
   nohup python probe_block_size_stress.py \
       /rhev/data-center/mnt/glusterSD/gluster01.example.com:_storage__local__ge8__volume__0/ &

Started around 20:05, probing in a tight loop - about 2 seconds per probe, 1800 calls per hour (these systems are extremely slow). All hosts report the correct block size.

In strace we see:

$ strace -f -tt -T -p 26494
[pid 26500] 20:21:13.893810 open("/rhev/data-center/mnt/glusterSD/gluster01.example.com:_storage__local__ge8__volume__0//.prob-f260033a-694c-4bdf-bcef-4b46f82306b9", O_WRONLY|O_CREAT|O_EXCL|O_DSYNC|O_DIRECT, 0600 <unfinished ...>
[pid 26500] 20:21:13.907069 <... open resumed> ) = 0 <0.013147>
[pid 26500] 20:21:13.907195 unlink("/rhev/data-center/mnt/glusterSD/gluster01.example.com:_storage__local__ge8__volume__0//.prob-f260033a-694c-4bdf-bcef-4b46f82306b9") = 0 <0.005863>
[pid 26500] 20:21:13.913227 pwrite64(0, "\0", 1, 0) = -1 EINVAL (Invalid argument) <1.930909>
[pid 26500] 20:21:15.844328 pwrite64(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512, 0) = 512 <0.011338>

Krutika, does it make sense that writing 1 byte takes 1.93 seconds?

Will run for a couple of hours.

Created attachment 1617908 [details]
ioprocess trace showing extremely slow unaligned writes
Krutika, should we file a bug about the slow unaligned writes?
# grep pwrite64 ioprocess.trace | head -20
26500 20:27:40.037730 pwrite64(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512, 0) = 512 <0.009804>
26500 20:27:40.133382 pwrite64(0, "\0", 1, 0) = -1 EINVAL (Invalid argument) <1.911269>
26500 20:27:42.044969 pwrite64(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512, 0) = 512 <0.012527>
26500 20:27:42.105923 pwrite64(0, "\0", 1, 0) = -1 EINVAL (Invalid argument) <1.940483>
26500 20:27:44.046644 pwrite64(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512, 0) = 512 <0.028278>
26500 20:27:44.162981 pwrite64(0, "\0", 1, 0) = -1 EINVAL (Invalid argument) <1.883644>
26500 20:27:46.046896 pwrite64(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512, 0) = 512 <0.012689>
26500 20:27:46.135306 pwrite64(0, "\0", 1, 0) = -1 EINVAL (Invalid argument) <1.912911>
26500 20:27:48.048518 pwrite64(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512, 0) = 512 <0.011936>
26500 20:27:48.105805 pwrite64(0, "\0", 1, 0) = -1 EINVAL (Invalid argument) <1.952041>
26500 20:27:50.058156 pwrite64(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512, 0) = 512 <0.011621>
26500 20:27:50.133522 pwrite64(0, "\0", 1, 0) = -1 EINVAL (Invalid argument) <1.922091>
26500 20:27:52.055897 pwrite64(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512, 0) = 512 <0.009861>
26500 20:27:52.100258 pwrite64(0, "\0", 1, 0) = -1 EINVAL (Invalid argument) <1.951614>
26500 20:27:54.052096 pwrite64(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512, 0) = 512 <0.015949>
26500 20:27:54.125715 pwrite64(0, "\0", 1, 0) = -1 EINVAL (Invalid argument) <1.928027>
26500 20:27:56.054076 pwrite64(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512, 0) = 512 <0.016307>
26500 20:27:56.142113 pwrite64(0, "\0", 1, 0) = -1 EINVAL (Invalid argument) <1.914674>
26500 20:27:58.057043 pwrite64(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 512, 0) = 512 <0.011393>
26500 20:27:58.099114 pwrite64(0, "\0", 1, 0) = -1 EINVAL (Invalid argument) <1.955300>
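For readers following the trace: the pattern above (create a uniquely named hidden file with O_DIRECT, unlink it immediately, then attempt writes of increasing size until one succeeds) is, roughly, the new detection scheme. Below is a minimal Python sketch of that scheme under stated assumptions; the probe sizes and error handling are illustrative, and the real ioprocess implementation (written in C) may differ in details:

import errno
import mmap
import os
import uuid

def probe_block_size(mountpoint):
    # One uniquely named hidden file per probe, so concurrent hosts never
    # touch the same path (unlike the old shared __DIRECT_IO_TEST__ file).
    path = os.path.join(mountpoint, ".prob-%s" % uuid.uuid4())
    fd = os.open(path,
                 os.O_WRONLY | os.O_CREAT | os.O_EXCL | os.O_DSYNC | os.O_DIRECT,
                 0o600)
    try:
        # Unlink immediately; probing continues on the open (deleted) file,
        # so no leftover files remain even if the process is killed.
        os.unlink(path)
        # O_DIRECT requires an aligned buffer; an anonymous mmap is page aligned.
        buf = memoryview(mmap.mmap(-1, 4096))
        for size in (1, 512, 4096):  # illustrative probe sizes
            try:
                os.write(fd, buf[:size])
                # First accepted write size is reported (in the trace above,
                # 1 fails with EINVAL and 512 succeeds).
                return size
            except OSError as e:
                if e.errno != errno.EINVAL:
                    raise
        raise RuntimeError("could not detect block size")
    finally:
        os.close(fd)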
Ending the test after ~8 hours, see comment 8 for details.

vdsm1:

root 14617  0.3  0.0 225540 6596 ?  Sl  Sep22  2:36  python probe_block_size_stress.py /rhev/data-center/mnt/glusterSD/gluster01.scl.lab.tlv.redhat.com:_storage_
root 14618  0.0  0.0 384948 1044 ?  Sl  Sep22  0:44   \_ /usr/libexec/ioprocess --read-pipe-fd 7 --write-pipe-fd 6 --max-threads 0 --max-queued-requests -1

[root@storage-ge8-vdsm1 ~]# cat nohup.out | sort -u
512

[root@storage-ge8-vdsm1 ~]# wc -l nohup.out
22400 nohup.out

vdsm2:

root 19989  0.3  0.0 225540 6596 ?  Sl  Sep22  2:34  python probe_block_size_stress.py /rhev/data-center/mnt/glusterSD/gluster01.scl.lab.tlv.redhat.com:_storage_
root 19990  0.0  0.0 384948 2832 ?  Sl  Sep22  0:37   \_ /usr/libexec/ioprocess --read-pipe-fd 7 --write-pipe-fd 6 --max-threads 0 --max-queued-requests -1

[root@storage-ge8-vdsm2 ~]# cat nohup.out | sort -u
512

[root@storage-ge8-vdsm2 ~]# wc -l nohup.out
22396 nohup.out

vdsm3:

root 26493  0.3  0.0 225540 8636 ?  Sl  Sep22  2:35  python probe_block_size_stress.py /rhev/data-center/mnt/glusterSD/gluster01.scl.lab.tlv.redhat.com:_storage_
root 26494  0.0  0.0 384948 2832 ?  Sl  Sep22  0:38   \_ /usr/libexec/ioprocess --read-pipe-fd 7 --write-pipe-fd 6 --max-threads 0 --max-queued-requests -1

[root@storage-ge8-vdsm3 ~]# cat nohup.out | sort -u
512

[root@storage-ge8-vdsm3 ~]# wc -l nohup.out
22390 nohup.out

Checking storage for leftover .probe files:

[root@storage-ge8-vdsm3 ~]# ls -lh /rhev/data-center/mnt/glusterSD/gluster01.example.com\:_storage__local__ge8__volume__0/.probe*
ls: cannot access /rhev/data-center/mnt/glusterSD/gluster01.example.com:_storage__local__ge8__volume__0/.probe*: No such file or directory

All hosts issued ~22,400 probes during the test. Since the normal probing interval is 5 minutes, this test is equivalent to 1866 hours (77 days).

(In reply to Nir Soffer from comment #10)
> All hosts issued ~22,400 probes during the test. Since the normal probing
> interval is 5 minutes, this test is equivalent to 1866 hours (77 days).

Hi Nir,

So these tests are good enough to prove that there are no metadata issues with the gluster mount, right?
Do you need QE to run these tests again on our setup?

(In reply to SATHEESARAN from comment #11)
> So these tests are good enough to prove that there are no metadata issues
> with the gluster mount, right?

I think they are, but I'm not a gluster expert. I'm pretty sure the new code is not affected by the metadata management issue (https://bugzilla.redhat.com/show_bug.cgi?id=1751722#c13), but this code writes to deleted files, which is not a typical use pattern, so it may expose other issues in gluster.

> Do you need QE to run these tests again on our setup?

I think it would be nice to run this on real storage, which can be 100x faster than the setup I tested, for an hour or so.

Created attachment 1618048 [details]
reproduce script for vdsm <= 4.3.6
To reproduce the original issue, run this script on multiple hosts, probing
the same gluster volume mount:
# python reproduce.py /rhev/data-center/mnt/glusterSD/server:_path
512
512
...
Expected results:
- After some time the gluster fuse mount helper will crash.
- The size of /rhev/data-center/mnt/glusterSD/server:_path/__DIRECT_IO_TEST__
  will become negative, shown as 16 EiB by ls.
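For comparison with the per-probe hidden files used by the new code, here is a rough Python illustration of the kind of shared-file access pattern this reproducer exercises: every host hammers the same __DIRECT_IO_TEST__ path with O_DIRECT writes. This is only a sketch under stated assumptions; the actual reproduce.py attachment may do more (or differ in how it writes):

import mmap
import os
import sys
import time

def old_style_probe(mountpoint):
    # Old scheme: every host probes using the SAME shared file, which is
    # what breaks Gluster metadata management when run concurrently.
    path = os.path.join(mountpoint, "__DIRECT_IO_TEST__")
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC | os.O_DIRECT, 0o600)
    try:
        buf = memoryview(mmap.mmap(-1, 4096))  # aligned buffer for O_DIRECT
        for size in (1, 512, 4096):  # illustrative probe sizes
            try:
                os.write(fd, buf[:size])
                return size
            except OSError:
                continue
        raise RuntimeError("could not detect block size")
    finally:
        os.close(fd)

if __name__ == "__main__":
    while True:
        print(old_style_probe(sys.argv[1]))
        time.sleep(1)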
(In reply to SATHEESARAN from comment #11)
>> Status: POST → NEW
>> CC: nsoffer
>> Flags: needinfo?(sasundar) needinfo?(kdhananj) → needinfo?(nsoffer)

Accidentally changed the state of the bug from POST to NEW. Restoring the same.

(In reply to Nir Soffer from comment #9)
> Created attachment 1617908 [details]
> ioprocess trace showing extremely slow unaligned writes
>
> Krutika, should we file a bug about the slow unaligned writes?

Restoring needinfo on Krutika, which I removed accidentally while addressing a needinfo on me.

Nir / Avihai,

I see there has been a good amount of testing with the latest ioprocess. As the next step, once these fixes are available as part of the build - vdsm & ioprocess - we can do full-fledged sanity testing with these changes in place.

(In reply to SATHEESARAN from comment #17)

We will release a new ioprocess today with these changes. A new vdsm requiring this ioprocess will be available later this week (bug 1753898).

(In reply to Nir Soffer from comment #9)
> Created attachment 1617908 [details]
> ioprocess trace showing extremely slow unaligned writes
>
> Krutika, should we file a bug about the slow unaligned writes?

Sorry, was out sick. Can you capture and share the gluster volume profile output?

# gluster volume profile $VOLNAME start
# run the test
# gluster volume profile $VOLNAME info > brick-profile.out
# gluster volume profile $VOLNAME stop

and share the "brick-profile.out" file?

-Krutika

Moving the bug to verified based on the below results:

Version:
=======
glusterfs-3.12.2-47.5.el7rhgs.x86_64
ioprocess-1.3.0-1.el7ev.x86_64
vdsm-4.30.33-1.el7ev.x86_64

1. Complete the RHHI-V deployment
2. Create a RHEL 7.6 template
3. Create 30 VMs using a pool
4. After an hour, delete the VMs in the pool
5. Repeat steps 3 and 4 for 3 iterations
6. Create VMs using the template and run kernel untar and FIO

Couldn't see any crash during the test, so moving the bug to verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3030