Description of problem:
-----------------------
The kernel filesystem mount is started ahead of the VDO service. To make the VDO service start first, a special mount option, 'x-systemd.requires=vdo.service', needs to be added to the XFS fstab entry. Example:

/dev/gluster_vg_sdc/gluster_lv_engine /gluster_bricks/engine xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service 0 0

But the gluster-ansible role does not add this special mount option to the fstab entry when VDO is enabled.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
gluster-ansible-role-1.0

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Create a playbook that enables VDO volumes and creates bricks on top of them.
2. Observe /etc/fstab.

Actual results:
---------------
The fstab entry lacks the special mount option 'x-systemd.requires=vdo.service'. As a result, hosts drop to the maintenance shell on reboot because the filesystems are not available.

Expected results:
-----------------
If XFS bricks are created on a VDO volume, the fstab entry should include the additional mount option 'x-systemd.requires=vdo.service'.
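The expected behavior can be sketched as a small helper that builds the mount-options string for a brick. This is a hypothetical illustration of the logic the role needs, not the role's actual code; the function name and signature are assumptions.

```python
def brick_mount_opts(vdo_enabled: bool) -> str:
    """Build the fstab mount-options field for an XFS gluster brick.

    Hypothetical helper mirroring the logic described above: when the
    brick sits on a VDO volume, systemd must be told to start
    vdo.service before mounting the filesystem.
    """
    opts = ["inode64", "noatime", "nodiratime"]
    if vdo_enabled:
        # Without this option, systemd mounts the XFS filesystem before
        # vdo.service is up, and boot drops to the maintenance shell.
        opts.append("x-systemd.requires=vdo.service")
    return ",".join(opts)
```

With VDO enabled this yields the options shown in the example fstab entry above.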
Patch: https://github.com/gluster/gluster-ansible-infra/commit/db8fc7 should fix the issue.
Tested with gluster-ansible-role-1.0.3. When an XFS filesystem is created over a VDO volume, the fstab entry now carries the required options:

/dev/gluster_vg_sdc/gluster_lv_data /gluster_bricks/data xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdc/gluster_lv_vmstore /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdc/gluster_lv_engine /gluster_bricks/engine xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service 0 0
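The verification above can be automated. Below is a minimal sketch of a checker that scans fstab contents for XFS brick entries missing the VDO dependency; the function name and the assumption that brick mount points live under /gluster_bricks are illustrative, not part of the role.

```python
def fstab_entries_missing_vdo_dep(fstab_text: str) -> list:
    """Return mount points of XFS brick entries that lack the
    x-systemd.requires=vdo.service option.

    Hypothetical checker for verifying the fix on a deployed host:
    read /etc/fstab and pass its contents in as a string.
    """
    missing = []
    for line in fstab_text.splitlines():
        fields = line.split()
        # Skip comments and malformed lines.
        if len(fields) < 4 or fields[0].startswith("#"):
            continue
        device, mountpoint, fstype, options = fields[:4]
        # Only gluster brick filesystems are of interest here.
        if fstype == "xfs" and mountpoint.startswith("/gluster_bricks"):
            if "x-systemd.requires=vdo.service" not in options.split(","):
                missing.append(mountpoint)
    return missing
```

An empty result on a host provisioned with the fixed role confirms all brick entries carry the option.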
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:3428