Description of problem:
-----------------------
When creating bricks on top of LUKS devices, those devices get mount options meant for VDO devices added to /etc/fstab.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
gluster-ansible-infra-1.0.4-10.el8rhgs
RHVH-4.4.1

How reproducible:
------------------
Always

Steps to Reproduce:
---------------------
1. Provide LUKS devices for the engine, vmstore, and data bricks
2. Enable VDO on only one brick (see the illustrative variable sketch below)
3. Check the /etc/fstab mount options for all the bricks

Actual results:
---------------
All the bricks contain the mount options relevant for VDO volumes

Expected results:
------------------
Only the bricks created on top of VDO volumes should have the relevant VDO options
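For context, a minimal sketch of the kind of backend_setup variables such a deployment would use. The variable names (gluster_infra_vdo, gluster_infra_volume_groups, gluster_infra_mount_devices) follow the gluster-ansible-infra backend_setup role, but the device paths, mapper names, and VG/LV names below are made-up assumptions, not taken from the reported setup:

    # Illustrative only: three LUKS-backed bricks, with VDO enabled on a single one.
    # All device paths and VG/LV names here are hypothetical.
    gluster_infra_vdo:
      - { name: 'vdo_sdc', device: '/dev/mapper/luks_sdc' }   # VDO on one brick only

    gluster_infra_volume_groups:
      - { vgname: 'gluster_vg_sdb', pvname: '/dev/mapper/luks_sdb' }
      - { vgname: 'gluster_vg_sdc', pvname: '/dev/mapper/vdo_sdc' }
      - { vgname: 'gluster_vg_sdd', pvname: '/dev/mapper/luks_sdd' }

    gluster_infra_mount_devices:
      - { path: '/gluster_bricks/engine',  vgname: 'gluster_vg_sdb', lvname: 'gluster_lv_engine' }
      - { path: '/gluster_bricks/data',    vgname: 'gluster_vg_sdc', lvname: 'gluster_lv_data' }
      - { path: '/gluster_bricks/vmstore', vgname: 'gluster_vg_sdd', lvname: 'gluster_lv_vmstore' }

With this layout, only the data brick sits on VDO, so only its fstab entry should carry the VDO-specific mount options.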
Even when data and vmstore are the only bricks created on top of VDO volumes, the other bricks, engine and testvol, still get the VDO-specific mount options:

UUID=eaf61294-73f2-4c27-8622-48149a98b8eb /gluster_bricks/engine xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
UUID=b99d1bae-c289-4a5b-bff3-7dd066b29f45 /gluster_bricks/data xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
UUID=c6810c7f-7015-414d-aea9-ad991852aa2f /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
UUID=c274902f-65e7-4fde-a1c5-ab5cf381bd5f /gluster_bricks/testvol xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
Upstream patch [1] is already posted.

[1] https://github.com/gluster/gluster-ansible-infra/pull/102
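Not the actual patch, but a minimal sketch of the shape such a fix takes: the VDO-specific options must be chosen per mount, based on whether that individual brick's backing device is VDO, rather than applied globally once any VDO device exists. The brick_mounts list and its uuid/is_vdo fields are hypothetical names for illustration:

    # Hypothetical sketch (not the upstream patch): append the VDO mount options
    # only when the individual brick is backed by a VDO device.
    - name: Mount the gluster bricks with per-device options
      ansible.posix.mount:
        path: "{{ item.path }}"
        src: "UUID={{ item.uuid }}"
        fstype: xfs
        opts: >-
          inode64,noatime,nodiratime{{ ',_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service'
          if item.is_vdo | default(false) else '' }}
        state: mounted
      loop: "{{ brick_mounts }}"

With a conditional like this, only VDO-backed bricks pick up x-systemd.requires=vdo.service, while plain LUKS bricks keep the default inode64,noatime,nodiratime options.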
The fix is included in the build gluster-ansible-infra-1.0.4-11.el8rhgs.noarch.rpm.
Tested with gluster-ansible-infra-1.0.4-11.el8rhgs.

When LUKS devices are used as disks, the fstab mount options no longer include the VDO-specific options:

UUID=eaf61294-73f2-4c27-8622-48149a98b8eb /gluster_bricks/engine xfs inode64,noatime,nodiratime 0 0
UUID=b99d1bae-c289-4a5b-bff3-7dd066b29f45 /gluster_bricks/data xfs inode64,noatime,nodiratime 0 0
UUID=c6810c7f-7015-414d-aea9-ad991852aa2f /gluster_bricks/vmstore xfs inode64,noatime,nodiratime 0 0
UUID=c274902f-65e7-4fde-a1c5-ab5cf381bd5f /gluster_bricks/testvol xfs inode64,noatime,nodiratime 0 0
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3121