Bug 2070777 - Allow certain VDO volume properties to be changed after creation
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: lvm2
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe
URL:
Whiteboard:
Depends On:
Blocks: 2100608
 
Reported: 2022-03-31 22:13 UTC by bjohnsto
Modified: 2023-11-07 11:27 UTC
CC List: 8 users

Fixed In Version: lvm2-2.03.21-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned To: 2100608
Environment:
Last Closed: 2023-11-07 08:53:27 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker CLUSTERQE-5848 0 None None None 2022-07-20 19:03:51 UTC
Red Hat Issue Tracker RHELPLAN-117582 0 None None None 2022-03-31 22:25:09 UTC
Red Hat Product Errata RHBA-2023:6633 0 None None None 2023-11-07 08:53:53 UTC

Description bjohnsto 2022-03-31 22:13:25 UTC
Description of problem:

After volume creation, VDO allows certain properties, such as thread counts, to be modified. Currently there is no way to do this through LVM.


Version-Release number of selected component (if applicable):


How reproducible:

Always

Steps to Reproduce:
1. Create a VDO volume with 3 physical threads.
2. Try to change that value to 2.

Actual results:

No way to do so.

Expected results:

An lvm command to allow this to happen.

Additional info:

Comment 1 Zdenek Kabelac 2022-04-11 09:09:37 UTC
With lvchange, we should support modifying these settings for an already existing VDO pool:

vdo_max_discard
vdo_block_map_period
vdo_block_map_cache_size_mb
vdo_ack_threads
vdo_bio_threads
vdo_bio_rotation
vdo_cpu_threads
vdo_hash_zone_threads
vdo_logical_threads
vdo_physical_threads

To handle this in a standard lvm2 way, we will introduce support for a --vdosettings option (just like the existing --cachesettings).

Comment 2 Andy Walsh 2022-04-13 18:59:32 UTC
I believe some (all?) of these settings require the volume to be completely stopped and started to take effect.  A simple suspend/resume is not sufficient to put these in place.

Bruce, can you confirm this?

Comment 3 Zdenek Kabelac 2022-04-14 08:42:23 UTC
Ok - it would be good to know which of these can be applied by just a 'suspend/resume' and which need a full deactivation and activation.

lvchange can print an info message when various options take effect - but I'd need to further enhance the code so the internal API can give the user proper info about when the change will actually happen.

So - will all changes always happen with the next activation?

Lvm2 can deactivate and activate the VDO LV if it is unused - do we want this?
(or is just a message like 'Change will apply with next activation...' what we want)

Comment 4 Zdenek Kabelac 2022-05-25 12:38:03 UTC
So here is the current list of implemented support:

=== VDOPOOL Lvchange Offline === 

ack_threads
bio_rotation
bio_threads
block_map_cache_size_mb
block_map_era_length
block_map_period // alias for block_map_era_length
cpu_threads
hash_zone_threads
logical_threads
max_discard
physical_threads


=== VDOPOOL Lvchange Online ===

use_compression
use_deduplication    

=== VDOPOOL NO Lvchange  (only lvcreate/lvconvert) ===

check_point_frequency
index_memory_size_mb
minimum_io_size
slab_size_mb
use_metadata_hints
use_sparse_index

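The three groups above can be sketched as a small lookup. This is an illustrative Python model built from the lists in this comment, not lvm2 code; the function name `change_mode` is made up for the example.

```python
# Illustrative classifier for when a VDO setting change takes effect.
# The grouping comes from the lists above; the code itself is a sketch.
OFFLINE = {"ack_threads", "bio_rotation", "bio_threads",
           "block_map_cache_size_mb", "block_map_era_length",
           "block_map_period", "cpu_threads", "hash_zone_threads",
           "logical_threads", "max_discard", "physical_threads"}
ONLINE = {"use_compression", "use_deduplication"}
CREATE_ONLY = {"check_point_frequency", "index_memory_size_mb",
               "minimum_io_size", "slab_size_mb",
               "use_metadata_hints", "use_sparse_index"}

def change_mode(setting: str) -> str:
    """Classify a VDO pool setting by how it can be changed."""
    if setting in ONLINE:
        return "online"       # lvchange on an active VDO pool
    if setting in OFFLINE:
        return "offline"      # lvchange on an inactive VDO pool
    if setting in CREATE_ONLY:
        return "create-only"  # only lvcreate/lvconvert
    raise KeyError(f"unknown VDO setting: {setting}")

print(change_mode("use_compression"))   # online
print(change_mode("physical_threads"))  # offline
print(change_mode("slab_size_mb"))      # create-only
```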

Supported syntax for --vdosettings option

lvcreate --vdosettings 'vdo_cpu_threads=1'....
lvcreate --vdosettings 'cputhreads=1'....

The prefixes 'vdo_' and 'vdo_use_' can be skipped, as can any '_' in names.

With upstream patch (man & tests included):

https://listman.redhat.com/archives/lvm-devel/2022-May/024180.html
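The prefix-skipping rule above can be modeled with a short matching function. This is a sketch of the documented behavior, not the lvm2 implementation; the helper names `_canon` and `matches` are invented for the example.

```python
# Sketch of --vdosettings name matching per the rule above:
# the 'vdo_' / 'vdo_use_' prefixes and all underscores are optional.
def _canon(name: str) -> str:
    """Lowercase the name and drop underscores."""
    return name.lower().replace("_", "")

def matches(user_input: str, full_name: str) -> bool:
    """True if user_input is an accepted spelling of full_name."""
    target = _canon(full_name)
    accepted = {target}
    for prefix in ("vdouse", "vdo"):
        if target.startswith(prefix):
            accepted.add(target[len(prefix):])
    return _canon(user_input) in accepted

print(matches("cputhreads", "vdo_cpu_threads"))       # True
print(matches("cpu_threads", "vdo_cpu_threads"))      # True
print(matches("compression", "vdo_use_compression"))  # True
```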

Comment 11 Zdenek Kabelac 2023-02-12 17:04:07 UTC
Current upstream functionality should match the documentation from comment 4.
(lvs needs to follow the documented full names of vdo settings.)

Comment 15 Corey Marthaler 2023-06-20 17:18:06 UTC
According to comment #4, "block_map_cache_size_mb" should now work.

Is it your design that *ONLY* "block_map_cache_size_mb" works for setting, and none of the other variants work (like they do with the other attributes)? And that "block_map_cache_size_mb" *DOESN'T* work for viewing the attribute - *ONLY* "vdo_block_map_cache_size" works for viewing?

So the name used to set it will NOT work for viewing, and the name used to view it will NOT work for setting?

[root@virt-499 ~]# lvs -a -o +devices
  LV               VG            Attr       LSize   Pool     Origin Data%   Devices          
  vdo_lv           vdo_sanity    vwi-a-v--- 100.00g vdo_pool        0.00    vdo_pool(0)                     
  vdo_pool         vdo_sanity    dwi-------  10.00g                 40.04   vdo_pool_vdata(0)               
  [vdo_pool_vdata] vdo_sanity    Dwi-ao----  10.00g                         /dev/sda1(0)                    

[root@virt-499 ~]# lvs -a -o +devices,block_map_cache_size_mb
[...]
  Unrecognised field: block_map_cache_size_mb

[root@virt-499 ~]# lvs -a -o +devices,block_map_cache_size
[...]
  Unrecognised field: block_map_cache_size

[root@virt-499 ~]# lvs -a -o +devices,vdo_block_map_cache_size_mb
[...]
  Unrecognised field: vdo_block_map_cache_size_mb

[root@virt-499 ~]# lvs -a -o +devices,vdo_block_map_cache_size
  LV               VG            Attr       LSize   Pool     Origin Data%   Devices           VDOBlockMapCacheSize
  vdo_lv           vdo_sanity    vwi-a-v--- 100.00g vdo_pool        0.00    vdo_pool(0)                    128.00m
  vdo_pool         vdo_sanity    dwi-------  10.00g                 40.04   vdo_pool_vdata(0)              128.00m
  [vdo_pool_vdata] vdo_sanity    Dwi-ao----  10.00g                         /dev/sda1(0)                          


[root@virt-499 ~]# vgchange -an vdo_sanity
  0 logical volume(s) in volume group "vdo_sanity" now active


[root@virt-499 ~]# lvchange --vdosettings 'vdo_block_map_cache_size_mb=256' vdo_sanity/vdo_lv
  Logical volume vdo_sanity/vdo_lv changed.
[root@virt-499 ~]# lvchange --vdosettings 'block_map_cache_size_mb=256' vdo_sanity/vdo_lv
  Logical volume vdo_sanity/vdo_lv changed.

[root@virt-499 ~]# lvchange --vdosettings 'block_map_cache_size=256' vdo_sanity/vdo_lv
  Unknown VDO setting "block_map_cache_size".
[root@virt-499 ~]# lvchange --vdosettings 'vdo_block_map_cache_size=256' vdo_sanity/vdo_lv
  Unknown VDO setting "vdo_block_map_cache_size".

Comment 16 Zdenek Kabelac 2023-06-20 17:27:02 UTC
Correct.

The input side uses the _mb suffix to emphasize that the input number uses a fixed MiB unit.

The lvs output side, on the other hand, is unit-free and can be printed in any user-chosen output format.

Some form of 'aliasing' argument has not yet been designed.

Comment #4 is about input parameters for the --vdosettings option.

Output parameters for 'lvs' are different, as they have different capabilities.

However, the minor potential for user confusion here is acknowledged.
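The set/view naming asymmetry from comments 15 and 16 can be captured in a small table. This mapping is assembled from the terminal session in comment 15 and is illustrative only; it is not an lvm2 API, and entries beyond the one actually exercised above are assumptions.

```python
# Illustrative mapping between the --vdosettings input name (fixed MiB
# unit where applicable) and the lvs -o report field (unit-free output).
# Only the block_map_cache entry is confirmed by the session above;
# the other rows follow the same observed pattern and are assumptions.
SETTING_TO_REPORT_FIELD = {
    "block_map_cache_size_mb": "vdo_block_map_cache_size",
    "ack_threads": "vdo_ack_threads",
    "bio_rotation": "vdo_bio_rotation",
}

def report_field(setting: str) -> str:
    """Return the lvs -o field name that displays a --vdosettings name."""
    return SETTING_TO_REPORT_FIELD[setting]

print(report_field("block_map_cache_size_mb"))  # vdo_block_map_cache_size
```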

Comment 19 Corey Marthaler 2023-07-06 21:34:38 UTC
Marking Verified:Tested with the caveats listed in the other bugs spawned from this one.

kernel-5.14.0-332.el9    BUILT: Mon Jun 26 06:16:51 PM CEST 2023
lvm2-2.03.21-2.el9    BUILT: Thu May 25 12:03:04 AM CEST 2023
lvm2-libs-2.03.21-2.el9    BUILT: Thu May 25 12:03:04 AM CEST 2023



SCENARIO - offline_vdo_property_alteration:  Create a vdo volume, deactivate it, then change the supported OFFLINE properties post creation (bug 2100608|2070777|2108239) 

OFFLINE vdo alteration properties/limits to attempt:
ack_threads 100
bio_rotation 1024
bio_threads 100
block_map_cache_size_mb 16777215
block_map_era_length 16380

deactivating LV vdo_lv on virt-482.cluster-qe.lab.eng.brq.redhat.com
lvchange --yes -an  vdo_sanity/vdo_lv

----------------------------------------
[ack_threads]
LIMIT attempt ack_threads setting for /dev/vdo_sanity/vdo_lv: 100
lvchange --vdosettings 'ack_threads=101' /dev/vdo_sanity/vdo_lv
VDO ack threads 101 is out of range [0..100].
CURRENT vdo_ack_threads for /dev/vdo_sanity/vdo_lv: 1
lvchange --vdosettings 'ack_threads=2' /dev/vdo_sanity/vdo_lv
  Logical volume vdo_sanity/vdo_lv changed.
ALTERED vdo_ack_threads setting for /dev/vdo_sanity/vdo_lv: 2

----------------------------------------
[bio_rotation]
LIMIT attempt bio_rotation setting for /dev/vdo_sanity/vdo_lv: 1024
lvchange --vdosettings 'bio_rotation=1025' /dev/vdo_sanity/vdo_lv
VDO bio rotation 1025 is out of range [1..1024].
CURRENT vdo_bio_rotation for /dev/vdo_sanity/vdo_lv: 64
lvchange --vdosettings 'bio_rotation=65' /dev/vdo_sanity/vdo_lv
  Logical volume vdo_sanity/vdo_lv changed.
ALTERED vdo_bio_rotation setting for /dev/vdo_sanity/vdo_lv: 65

----------------------------------------
[bio_threads]
LIMIT attempt bio_threads setting for /dev/vdo_sanity/vdo_lv: 100
lvchange --vdosettings 'bio_threads=101' /dev/vdo_sanity/vdo_lv
VDO bio threads 101 is out of range [1..100].
CURRENT vdo_bio_threads for /dev/vdo_sanity/vdo_lv: 4
lvchange --vdosettings 'bio_threads=5' /dev/vdo_sanity/vdo_lv
  Logical volume vdo_sanity/vdo_lv changed.
ALTERED vdo_bio_threads setting for /dev/vdo_sanity/vdo_lv: 5

----------------------------------------
[block_map_cache_size_mb]
LIMIT attempt block_map_cache_size_mb setting for /dev/vdo_sanity/vdo_lv: 16777215
lvchange --vdosettings 'block_map_cache_size_mb=16777216' /dev/vdo_sanity/vdo_lv
VDO block map cache size 16777216 MiB is out of range [128..16777215].
CURRENT vdo_block_map_cache_size for /dev/vdo_sanity/vdo_lv: 128.00m
lvchange --vdosettings 'block_map_cache_size_mb=129' /dev/vdo_sanity/vdo_lv
  Logical volume vdo_sanity/vdo_lv changed.
ALTERED vdo_block_map_cache_size setting for /dev/vdo_sanity/vdo_lv: 129.00m

----------------------------------------
[block_map_era_length]
LIMIT attempt block_map_era_length setting for /dev/vdo_sanity/vdo_lv: 16380
lvchange --vdosettings 'block_map_era_length=16381' /dev/vdo_sanity/vdo_lv
VDO block map era length 16381 is out of range [1..16380].
CURRENT vdo_block_map_era_length for /dev/vdo_sanity/vdo_lv: 16380
lvchange --vdosettings 'block_map_era_length=16379' /dev/vdo_sanity/vdo_lv
  Logical volume vdo_sanity/vdo_lv changed.
ALTERED vdo_block_map_era_length setting for /dev/vdo_sanity/vdo_lv: 16379



SCENARIO - online_vdo_property_alteration:  Create a vdo volume, then change the supported ONLINE properties post creation (bug 2100608|2070777) 

lvconvert --yes --type vdo-pool -n vdo_lv  -V100G vdo_sanity/vdo_pool
The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdo_lv" created.
  Converted vdo_sanity/vdo_pool to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.
WARNING: Converting logical volume vdo_sanity/vdo_pool to VDO pool volume with formatting.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
vdo alteration properties to attempt:

[compression]
PRE compression setting for /dev/vdo_sanity/vdo_lv: True
Setting compression to False for vdo_sanity/vdo_lv
lvchange --compression n vdo_sanity/vdo_pool
  Logical volume vdo_sanity/vdo_pool changed.
POST1 compression setting for /dev/vdo_sanity/vdo_lv: False

lvchange --vdosettings 'use_compression=1' /dev/vdo_sanity/vdo_lv
  Logical volume vdo_sanity/vdo_lv changed.
POST2 compression setting for /dev/vdo_sanity/vdo_lv: enabled

[deduplication]
PRE deduplication setting for /dev/vdo_sanity/vdo_lv: True
Setting deduplication to False for vdo_sanity/vdo_lv
lvchange --deduplication n vdo_sanity/vdo_pool
  Logical volume vdo_sanity/vdo_pool changed.
POST1 deduplication setting for /dev/vdo_sanity/vdo_lv: False

lvchange --vdosettings 'use_deduplication=1' /dev/vdo_sanity/vdo_lv
  Logical volume vdo_sanity/vdo_lv changed.
POST2 deduplication setting for /dev/vdo_sanity/vdo_lv: enabled




SCENARIO - conversion_vdo_property_alteration:  Create a pool volume, then change the supported CONVERSION properties during vdo conversion (bug 2100608|2070777|2108227|2108254#c2) 

Setting [create]: --vdosettings 'index_memory_size_mb=512'
lvcreate --yes --type vdo -n index_memory_size_mb --vdosettings 'index_memory_size_mb=512' -L 10G vdo_sanity -V100G  
Wiping vdo signature on /dev/vdo_sanity/vpool0.
    The VDO volume can address 4 GB in 2 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "index_memory_size_mb" created.

post vdo creation value:512.00m

lvremove  -f vdo_sanity/index_memory_size_mb
Logical volume "index_memory_size_mb" successfully removed.

Setting [convert]: --vdosettings 'index_memory_size_mb=512'
lvcreate --yes --type linear -n pool_index_memory_size_mb  -L 10G vdo_sanity  
Wiping vdo signature on /dev/vdo_sanity/pool_index_memory_size_mb.
  Logical volume "pool_index_memory_size_mb" created.

lvconvert --yes --type vdo-pool -n vdo_lv --vdosettings 'index_memory_size_mb=512' -V100G vdo_sanity/pool_index_memory_size_mb
The VDO volume can address 4 GB in 2 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdo_lv" created.
  Converted vdo_sanity/pool_index_memory_size_mb to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.
WARNING: Converting logical volume vdo_sanity/pool_index_memory_size_mb to VDO pool volume with formatting.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
post vdo conversion value:512.00m

lvremove  -f vdo_sanity/vdo_lv
Logical volume "vdo_lv" successfully removed.



----------------------------------------
Setting [create]: --vdosettings 'slab_size_mb=4000'
lvcreate --yes --type vdo -n slab_size_mb --vdosettings 'slab_size_mb=4000' -L 10G vdo_sanity -V100G  
Wiping vdo signature on /dev/vdo_sanity/vpool0.
    The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "slab_size_mb" created.

post vdo creation value:<3.91g

lvremove  -f vdo_sanity/slab_size_mb
Logical volume "slab_size_mb" successfully removed.

Setting [convert]: --vdosettings 'slab_size_mb=4000'
lvcreate --yes --type linear -n pool_slab_size_mb  -L 10G vdo_sanity  
Wiping vdo signature on /dev/vdo_sanity/pool_slab_size_mb.
  Logical volume "pool_slab_size_mb" created.

lvconvert --yes --type vdo-pool -n vdo_lv --vdosettings 'slab_size_mb=4000' -V100G vdo_sanity/pool_slab_size_mb
The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdo_lv" created.
  Converted vdo_sanity/pool_slab_size_mb to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.
WARNING: Converting logical volume vdo_sanity/pool_slab_size_mb to VDO pool volume with formatting.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
post vdo conversion value:<3.91g

lvremove  -f vdo_sanity/vdo_lv
Logical volume "vdo_lv" successfully removed.



----------------------------------------
Setting [create]: --vdosettings 'use_sparse_index=0'
lvcreate --yes --type vdo -n use_sparse_index --vdosettings 'use_sparse_index=0' -L 10G vdo_sanity -V100G  
Wiping vdo signature on /dev/vdo_sanity/vpool0.
    The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "use_sparse_index" created.

post vdo creation value:

lvremove  -f vdo_sanity/use_sparse_index
Logical volume "use_sparse_index" successfully removed.

Setting [convert]: --vdosettings 'use_sparse_index=0'
lvcreate --yes --type linear -n pool_use_sparse_index  -L 10G vdo_sanity  
Wiping vdo signature on /dev/vdo_sanity/pool_use_sparse_index.
  Logical volume "pool_use_sparse_index" created.

lvconvert --yes --type vdo-pool -n vdo_lv --vdosettings 'use_sparse_index=0' -V100G vdo_sanity/pool_use_sparse_index
The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdo_lv" created.
  Converted vdo_sanity/pool_use_sparse_index to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.
WARNING: Converting logical volume vdo_sanity/pool_use_sparse_index to VDO pool volume with formatting.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
post vdo conversion value:

lvremove  -f vdo_sanity/vdo_lv
Logical volume "vdo_lv" successfully removed.



----------------------------------------
Setting [create]: --vdosettings 'minimum_io_size=8'
lvcreate --yes --type vdo -n minimum_io_size --vdosettings 'minimum_io_size=8' -L 10G vdo_sanity -V100G  
Wiping vdo signature on /dev/vdo_sanity/vpool0.
    The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "minimum_io_size" created.

post vdo creation value:4.00k

lvremove  -f vdo_sanity/minimum_io_size
Logical volume "minimum_io_size" successfully removed.

Setting [convert]: --vdosettings 'minimum_io_size=8'
lvcreate --yes --type linear -n pool_minimum_io_size  -L 10G vdo_sanity  
Wiping vdo signature on /dev/vdo_sanity/pool_minimum_io_size.
  Logical volume "pool_minimum_io_size" created.

lvconvert --yes --type vdo-pool -n vdo_lv --vdosettings 'minimum_io_size=8' -V100G vdo_sanity/pool_minimum_io_size
The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdo_lv" created.
  Converted vdo_sanity/pool_minimum_io_size to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.
WARNING: Converting logical volume vdo_sanity/pool_minimum_io_size to VDO pool volume with formatting.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
post vdo conversion value:4.00k

lvremove  -f vdo_sanity/vdo_lv
Logical volume "vdo_lv" successfully removed.



----------------------------------------
Setting [create]: --vdosettings 'metadata_hints=0'
lvcreate --yes --type vdo -n metadata_hints --vdosettings 'metadata_hints=0' -L 10G vdo_sanity -V100G  
Wiping vdo signature on /dev/vdo_sanity/vpool0.
    The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "metadata_hints" created.

post vdo creation value:

lvremove  -f vdo_sanity/metadata_hints
Logical volume "metadata_hints" successfully removed.

Setting [convert]: --vdosettings 'metadata_hints=0'
lvcreate --yes --type linear -n pool_metadata_hints  -L 10G vdo_sanity  
Wiping vdo signature on /dev/vdo_sanity/pool_metadata_hints.
  Logical volume "pool_metadata_hints" created.

lvconvert --yes --type vdo-pool -n vdo_lv --vdosettings 'metadata_hints=0' -V100G vdo_sanity/pool_metadata_hints
The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdo_lv" created.
  Converted vdo_sanity/pool_metadata_hints to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.
WARNING: Converting logical volume vdo_sanity/pool_metadata_hints to VDO pool volume with formatting.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
post vdo conversion value:

Comment 24 errata-xmlrpc 2023-11-07 08:53:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:6633

