Description of problem:
-----------------------
XFS filesystems (gluster bricks) are mounted via entries in /etc/fstab that reference the device name. It would be better to mount them using the XFS UUID instead.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHGS 3.4.4
gluster-ansible-roles-1.0.3-3

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Use the HC role to create bricks

Actual results:
---------------
Bricks are mounted using device names

Expected results:
-----------------
Bricks should be mounted using XFS UUIDs

Additional info:
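For illustration, the change requested here replaces a device-path fstab entry with a UUID-based one (the device path, UUID, and mount options below are hypothetical, not taken from an affected system):

```
# Before: brick mounted by device path
/dev/mapper/gluster_vg-brick1  /gluster_bricks/data  xfs  inode64,noatime,nodiratime  0 0

# After: same brick mounted by filesystem UUID
UUID=00000000-1111-2222-3333-444444444444  /gluster_bricks/data  xfs  inode64,noatime,nodiratime  0 0
```

Mounting by UUID is robust against device names changing across reboots or hardware reconfiguration, since the UUID is stored in the XFS superblock itself.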
Can we use disk UUIDs to create bricks, or is it only the filesystem mounting that can use XFS UUIDs?
(In reply to SATHEESARAN from comment #1)
> Can we use disk UUIDs to create bricks or its only the filesystem mounting
> that can use XFS UUIDs ?

sas, UUIDs are created once we create the LVM/filesystem on the device. We will mount using the UUID.
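To make the workflow concrete: the filesystem UUID only exists after mkfs.xfs writes the superblock, and it can then be read back with blkid. A minimal sketch, assuming a hypothetical LV name (the UUID shown is an example value, hard-coded here so the format check can run without a real device):

```shell
# On a real system, the UUID would be created and read like this
# (requires root and an actual logical volume):
#   mkfs.xfs /dev/mapper/gluster_vg-brick1
#   blkid -s UUID -o value /dev/mapper/gluster_vg-brick1

# blkid returns a UUID in the canonical 8-4-4-4-12 hex form, e.g.:
uuid=5afe8908-7ce1-4ef1-91c6-1f2cc3b7fd28

# Validate the shape before writing a UUID=... entry into /etc/fstab
if echo "$uuid" | grep -Eq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$'; then
    echo "valid UUID"
fi
```

This is the value that ends up on the left-hand side of the `UUID=` fstab entries shown in the verification output below in this bug.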
https://github.com/gluster/gluster-ansible-infra/pull/55
Verified the bug using the below components:
============================================
gluster-ansible-roles-1.0.5-1.el7rhgs.noarch
gluster-ansible-infra-1.0.4-1.el7rhgs.noarch

Steps:
======
1. Start the gluster deployment
2. Once completed, check the fstab entries for UUID-based mounts

Output:
=======
#
# /etc/fstab
# Created by anaconda on Wed May 8 11:52:24 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/rhvh_rhsqa-grafton7-nic2/rhvh-4.3.0.6-0.20190418.0+1 / ext4 defaults,discard 1 1
UUID=7e246924-88d8-41f4-a97e-7f70ad3aed43 /boot ext4 defaults 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-home /home ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-tmp /tmp ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-var /var ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-var_log /var/log ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-var_log_audit /var/log/audit ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-swap swap swap defaults 0 0
UUID=5afe8908-7ce1-4ef1-91c6-1f2cc3b7fd28 /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
UUID=3a34b40c-5a89-4c26-b846-2982c7407f04 /gluster_bricks/engine xfs inode64,noatime,nodiratime 0 0
UUID=df372c7a-58a1-4e1c-bb45-c6c0d0316de3 /gluster_bricks/data xfs inode64,noatime,nodiratime 0
This is a change in the latest gluster-ansible: it now mounts bricks using the XFS UUID instead of the direct device path. I consider this a behavior change from the previous version of the RHHI deployment module. Consider this bug for the release notes.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2557