Bug 1693540
| Summary: | Mount the bricks with XFS UUID instead of device names | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | SATHEESARAN <sasundar> |
| Component: | gluster-ansible | Assignee: | Sachidananda Urs <surs> |
| Status: | CLOSED ERRATA | QA Contact: | bipin <bshetty> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.4 | CC: | amukherj, bshetty, godas, pasik, rhs-bugs, sabose, sasundar, surs |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 3.4.z Async Update | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | gluster-ansible-infra-1.0.4-1 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1670722 | | |
| : | 1734376 (view as bug list) | Environment: | |
| Last Closed: | 2019-10-03 07:58:12 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1734376 | | |
Description
SATHEESARAN
2019-03-28 06:43:16 UTC
Can we use disk UUIDs to create bricks, or is it only the filesystem mounting that can use XFS UUIDs?

(In reply to SATHEESARAN from comment #1)
> Can we use disk UUIDs to create bricks, or is it only the filesystem mounting
> that can use XFS UUIDs?

sas, UUIDs are created once we create the LVM/filesystem on the device. We will mount using the UUID (a minimal illustration of mounting by UUID is sketched at the end of this report).

Verified the bug using the below components:
===========================================
gluster-ansible-roles-1.0.5-1.el7rhgs.noarch
gluster-ansible-infra-1.0.4-1.el7rhgs.noarch

Steps:
=====
1. Start the gluster deployment.
2. Once it completes, check the fstab entries to confirm the UUID is present (see the verification sketch at the end of this report).

Output:
======
```
#
# /etc/fstab
# Created by anaconda on Wed May 8 11:52:24 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/rhvh_rhsqa-grafton7-nic2/rhvh-4.3.0.6-0.20190418.0+1 / ext4 defaults,discard 1 1
UUID=7e246924-88d8-41f4-a97e-7f70ad3aed43 /boot ext4 defaults 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-home /home ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-tmp /tmp ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-var /var ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-var_log /var/log ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-var_log_audit /var/log/audit ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-swap swap swap defaults 0 0
UUID=5afe8908-7ce1-4ef1-91c6-1f2cc3b7fd28 /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
UUID=3a34b40c-5a89-4c26-b846-2982c7407f04 /gluster_bricks/engine xfs inode64,noatime,nodiratime 0 0
UUID=df372c7a-58a1-4e1c-bb45-c6c0d0316de3 /gluster_bricks/data xfs inode64,noatime,nodiratime 0
```

The change with the latest gluster-ansible is that it mounts using the XFS UUID instead of the direct device path, so I consider this a change from the previous version of the RHHI deployment module. Consider this bug for the release notes.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2557
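As a side note to the reply above ("UUIDs are created once we create LVM/Filesystem on device. We will mount using UUID"), here is a minimal shell sketch of mounting a brick by its XFS filesystem UUID rather than by device path. The logical volume name and mountpoint are hypothetical examples, not taken from this deployment, and this is a generic illustration of the technique, not the gluster-ansible implementation.

```sh
# Look up the XFS filesystem UUID of the brick's logical volume
# (the device path below is a hypothetical example).
UUID=$(blkid -s UUID -o value /dev/mapper/gluster_vg-gluster_lv_data)

# Add an fstab entry that references the UUID instead of the device path,
# using the same mount options as the brick mounts in the output above.
echo "UUID=${UUID} /gluster_bricks/data xfs inode64,noatime,nodiratime 0 0" >> /etc/fstab

# Mount everything listed in fstab; the brick device is resolved via its UUID.
mount -a
```

Mounting by UUID keeps the brick mount stable even if the underlying device name changes across reboots.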
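For the verification step above, a quick way to confirm on a deployed node that the bricks were mounted by UUID could look like the following. The mountpoint is taken from the fstab output above; the grep/findmnt commands are a generic sketch, not part of gluster-ansible.

```sh
# All brick entries in fstab should reference UUID= rather than a device path.
grep '/gluster_bricks' /etc/fstab

# For a mounted brick, show the backing source, filesystem type, and UUID.
findmnt --target /gluster_bricks/engine --output SOURCE,FSTYPE,UUID
```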