Description of problem:
-----------------------
At boot, filesystem mounts are started ahead of the VDO service. To ensure the VDO service starts first, a special mount option, 'x-systemd.requires=vdo.service', needs to be added to the XFS fstab entry.

Example:
/dev/gluster_vg_sdc/gluster_lv_engine /gluster_bricks/engine xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service 0 0

But the gluster-ansible role does not add this special mount option to the fstab entry when VDO is enabled.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
gluster-ansible-role-1.0

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Create a playbook that enables VDO volumes and creates bricks on top of them.
2. Observe /etc/fstab

Actual results:
---------------
The fstab entry doesn't have the special mount option 'x-systemd.requires=vdo.service', so on reboot the hosts drop into the maintenance shell because the filesystems are not available.

Expected results:
-----------------
If XFS bricks are created on a VDO volume, the fstab entry should have the additional mount option 'x-systemd.requires=vdo.service'.
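For illustration only: a minimal sketch of the kind of mount task that would produce such an entry, assuming hypothetical 'vdo_enabled' and 'bricks' variables; the real gluster-ansible-infra variable and task names may differ.

# Hypothetical sketch, not the role's actual code: append the systemd
# ordering option only when the brick sits on a VDO volume.
- name: Mount gluster bricks and persist them in /etc/fstab
  mount:
    path: "{{ item.path }}"          # e.g. /gluster_bricks/engine
    src: "{{ item.device }}"         # e.g. /dev/gluster_vg_sdc/gluster_lv_engine
    fstype: xfs
    opts: "inode64,noatime,nodiratime{{ ',x-systemd.requires=vdo.service' if vdo_enabled else '' }}"
    state: mounted                   # mounts now and writes the fstab entry
  loop: "{{ bricks }}"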
Patch: https://github.com/gluster/gluster-ansible-infra/commit/db8fc7 should fix the issue.
The dependent gluster-ansible bug is already in MODIFIED state; changing the state of this bug accordingly.
The dependent bug is already in ON_QA state; moving this bug to ON_QA as well.
Tested with gluster-ansible-role-1.0.3.

When an XFS filesystem is created over a VDO volume, the fstab entry now carries the required mount option:

/dev/gluster_vg_sdc/gluster_lv_data /gluster_bricks/data xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdc/gluster_lv_vmstore /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdc/gluster_lv_engine /gluster_bricks/engine xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service 0 0
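For anyone re-verifying the fix, a small hypothetical check play; the 'gluster_hosts' group name and the grep pattern are assumptions, not part of the role:

# Hypothetical verification play: fail if any brick entry in /etc/fstab
# is missing the x-systemd.requires=vdo.service option.
- hosts: gluster_hosts
  tasks:
    - name: Collect gluster brick entries from fstab
      command: grep /gluster_bricks /etc/fstab
      register: fstab_bricks
      changed_when: false

    - name: Assert the VDO ordering option is present on every entry
      assert:
        that: "'x-systemd.requires=vdo.service' in item"
        fail_msg: "Missing x-systemd.requires=vdo.service in: {{ item }}"
      loop: "{{ fstab_bricks.stdout_lines }}"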