Bug 1693540 - Mount the bricks with XFS UUID instead of device names
Summary: Mount the bricks with XFS UUID instead of device names
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-ansible
Version: rhgs-3.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.z Async Update
Assignee: Sachidananda Urs
QA Contact: bipin
URL:
Whiteboard:
Depends On:
Blocks: 1734376
 
Reported: 2019-03-28 06:43 UTC by SATHEESARAN
Modified: 2019-10-03 07:58 UTC
CC List: 8 users

Fixed In Version: gluster-ansible-infra-1.0.4-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1670722
Clones: 1734376
Environment:
Last Closed: 2019-10-03 07:58:12 UTC
Embargoed:




Links
Red Hat Product Errata RHBA-2019:2557 (Last Updated: 2019-10-03 07:58:34 UTC)

Description SATHEESARAN 2019-03-28 06:43:16 UTC
Description of problem:
-----------------------
XFS filesystems (gluster bricks) are mounted via entries in /etc/fstab that reference the device name. It would be better to mount them using the XFS filesystem UUID instead.
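
For illustration, the two /etc/fstab styles differ roughly as below; the device path, UUID, and mount point are hypothetical placeholders and not taken from this setup:

# Brick mounted by device path (current behavior):
/dev/mapper/gluster_vg-gluster_lv  /gluster_bricks/data  xfs  inode64,noatime  0 0

# The same brick mounted by its XFS filesystem UUID (requested behavior, placeholder UUID):
UUID=1111aaaa-2222-bbbb-3333-cccc4444dddd  /gluster_bricks/data  xfs  inode64,noatime  0 0

# The UUID for a formatted brick can be read with blkid:
blkid -s UUID -o value /dev/mapper/gluster_vg-gluster_lv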

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHGS 3.4.4
gluster-ansible-roles-1.0.3-3

How reproducible:
-----------------
Always

Steps to Reproduce:
--------------------
1. Use HC role, to create bricks

Actual results:
----------------
Bricks are mounted using device names in /etc/fstab

Expected results:
-----------------
Bricks should be mounted using XFS UUIDs


Additional info:

Comment 1 SATHEESARAN 2019-03-28 06:43:59 UTC
Can we use disk UUIDs to create bricks, or is it only the filesystem mounting that can use XFS UUIDs?

Comment 2 Sachidananda Urs 2019-03-28 09:04:23 UTC
(In reply to SATHEESARAN from comment #1)
> Can we use disk UUIDs to create bricks, or is it only the filesystem
> mounting that can use XFS UUIDs?

sas, UUIDs are created once we create the LVM/filesystem on the device.
We will mount using the UUID.
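
For context, a minimal shell sketch of the point above; the device path is hypothetical and not necessarily what gluster-ansible uses:

# A bare device has no filesystem UUID; one is assigned when the
# filesystem is created on the logical volume.
mkfs.xfs /dev/mapper/gluster_vg-gluster_lv

# Read the newly assigned filesystem UUID:
blkid -s UUID -o value /dev/mapper/gluster_vg-gluster_lv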

Comment 3 Sachidananda Urs 2019-04-04 14:24:33 UTC
https://github.com/gluster/gluster-ansible-infra/pull/55

Comment 5 bipin 2019-05-22 11:28:24 UTC
Verified the bug with the following components:
===========================================
gluster-ansible-roles-1.0.5-1.el7rhgs.noarch
gluster-ansible-infra-1.0.4-1.el7rhgs.noarch

Steps:
=====
1. Start the gluster deployment
2. Once the deployment completes, check the /etc/fstab entries for UUID-based brick mounts (a cross-check sketch follows the output below)

Output:
======
#
# /etc/fstab
# Created by anaconda on Wed May  8 11:52:24 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/rhvh_rhsqa-grafton7-nic2/rhvh-4.3.0.6-0.20190418.0+1 / ext4 defaults,discard 1 1
UUID=7e246924-88d8-41f4-a97e-7f70ad3aed43 /boot                   ext4    defaults        1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-home /home ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-tmp /tmp ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-var /var ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-var_log /var/log ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-var_log_audit /var/log/audit ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-swap swap                    swap    defaults        0 0
UUID=5afe8908-7ce1-4ef1-91c6-1f2cc3b7fd28 /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
UUID=3a34b40c-5a89-4c26-b846-2982c7407f04 /gluster_bricks/engine xfs inode64,noatime,nodiratime 0 0
UUID=df372c7a-58a1-4e1c-bb45-c6c0d0316de3 /gluster_bricks/data xfs inode64,noatime,nodiratime 0
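
As an additional sanity check (not part of the original verification steps), the UUID entries in /etc/fstab can be cross-checked against the brick devices; the device path below is a hypothetical placeholder:

# Show the UUID-based brick entries in fstab:
grep '^UUID=' /etc/fstab | grep gluster_bricks

# Compare with the UUID reported for a brick device:
blkid -s UUID -o value /dev/mapper/gluster_vg-gluster_lv

# Or confirm the source and UUID of a mounted brick:
findmnt -no SOURCE,UUID /gluster_bricks/engine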

Comment 6 SATHEESARAN 2019-06-01 02:52:27 UTC
With the latest gluster-ansible, bricks are mounted using the XFS UUID instead of the direct device path.
I consider this a behavior change from the previous version of the RHHI deployment module.

Consider this bug for the release notes.

Comment 12 errata-xmlrpc 2019-10-03 07:58:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2557

