Description of problem:
-----------------------
XFS filesystems (gluster bricks) created on VDO volumes require special
mount options so that they are mounted only after the VDO service has
started. However, the fstab entries for gluster bricks on non-VDO volumes
should not be given these options.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
gluster-ansible-roles-1.0.4

How reproducible:
------------------
Always

Steps to Reproduce:
-------------------
1. Create bricks on a non-VDO and a VDO device using a gdeploy conf file
2. Check the fstab entries for the gluster bricks created on the non-VDO volume

Actual results:
---------------
The fstab entry for the gluster bricks residing on the non-VDO volume also
has the VDO-specific mount options added

Expected results:
------------------
Only the fstab entries for the gluster bricks residing on the VDO volume
should have the VDO-specific mount options

Additional info:
----------------
1. lsblk output:
-----------------
sdb                                       8:16   0   931G  0 disk
└─gluster_vg_sdb-gluster_lv_engine      253:10   0   100G  0 lvm
sdc                                       8:32   0  18.2T  0 disk
└─vdo_sdc                               253:20   0   160T  0 vdo
  ├─gluster_vg_sdc-gluster_lv_data      253:21   0    12T  0 lvm  /gluster_bricks/data
  └─gluster_vg_sdc-gluster_lv_vmstore   253:22   0     4T  0 lvm  /gluster_bricks/vmstore

gluster_lv_engine is created on 'sdb', whereas gluster_lv_data and
gluster_lv_vmstore are created on the VDO volume /dev/mapper/vdo_sdc

2. Look for the fstab entries
------------------------------
/dev/gluster_vg_sdb/gluster_lv_engine /gluster_bricks/engine xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdc/gluster_lv_data /gluster_bricks/data xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdc/gluster_lv_vmstore /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service 0 0

Note that in the above, /dev/gluster_vg_sdb/gluster_lv_engine also has
'x-systemd.requires=vdo.service', although this LV is not on a VDO volume
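Whether a brick LV needs the VDO-dependent mount option can be decided from
the device stack that `lsblk -s` reports for it. A minimal sketch, assuming
the TYPE column has already been captured; the function name `is_on_vdo` and
the inlined sample stacks (copied from the lsblk output above) are
illustrative, not part of the role:

```shell
# Sketch: decide whether an LV's device stack includes a VDO layer.
# In practice the stack would come from `lsblk -sno TYPE <device>`;
# here it is inlined from the lsblk output above for illustration.
is_on_vdo() {
    printf '%s\n' "$1" | grep -qw vdo
}

# Stack under gluster_lv_data: lvm -> vdo -> disk
data_stack='lvm
vdo
disk'
# Stack under gluster_lv_engine: lvm -> disk (no VDO)
engine_stack='lvm
disk'

is_on_vdo "$data_stack"   && echo "gluster_lv_data: add x-systemd.requires=vdo.service"
is_on_vdo "$engine_stack" || echo "gluster_lv_engine: plain options only"
```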
This was fixed while fixing a similar issue for gdeploy. Merged PR: https://github.com/gluster/gluster-ansible-infra/pull/42
Tested with gluster-ansible-infra-1.0.3: only the fstab entries for the
filesystems on the VDO volume have the relevant VDO options. Observed the
following in the /etc/fstab file with a mix of VDO and non-VDO bricks:

<snip>
/dev/gluster_vg_sdc/gluster_lv_data /gluster_bricks/data xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdc/gluster_lv_vmstore /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdb/gluster_lv_engine /gluster_bricks/engine xfs inode64,noatime,nodiratime 0 0
</snip>
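The corrected behavior amounts to appending the VDO-dependent options to the
fstab options field only for bricks on VDO volumes. A hedged sketch, not the
role's actual implementation; the `mount_opts` helper and its yes/no flag
are assumptions for illustration:

```shell
# Sketch: build the fstab mount-options field; append the VDO-dependent
# options only when the brick sits on a VDO volume ($1 = yes/no).
mount_opts() {
    opts='inode64,noatime,nodiratime'
    if [ "$1" = yes ]; then
        opts="$opts,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service"
    fi
    printf '%s\n' "$opts"
}

mount_opts yes   # options for gluster_lv_data / gluster_lv_vmstore
mount_opts no    # options for gluster_lv_engine
```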
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0661