Bug 1734376 - Mount the bricks with XFS UUID instead of device names
Summary: Mount the bricks with XFS UUID instead of device names
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHHI-V 1.6.z Async Update
Assignee: Sachidananda Urs
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1693540
Blocks:
 
Reported: 2019-07-30 11:53 UTC by SATHEESARAN
Modified: 2019-10-03 12:24 UTC (History)
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1693540
Environment:
Last Closed: 2019-10-03 12:24:01 UTC
Embargoed:




Links
Red Hat Product Errata RHBA-2019:2963 (Last Updated: 2019-10-03 12:24:08 UTC)

Description SATHEESARAN 2019-07-30 11:53:36 UTC
+++ This bug was initially created as a clone of Bug #1693540 +++

Description of problem:
-----------------------
XFS filesystems (gluster bricks) are mounted via entries in /etc/fstab that reference the device name. It would be better to mount them using the XFS filesystem UUID instead.
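
For illustration only, the difference is in how the brick is referenced in /etc/fstab. The device path below is a hypothetical example (the mount point and options mirror the fstab output quoted later in this bug), and the UUID would be read from the filesystem with blkid:

# brick referenced by device name (current behaviour):
/dev/mapper/gluster_vg_sdb-gluster_lv_engine /gluster_bricks/engine xfs inode64,noatime,nodiratime 0 0

# brick referenced by its XFS filesystem UUID (requested behaviour);
# the UUID is obtained with: blkid -s UUID -o value /dev/mapper/gluster_vg_sdb-gluster_lv_engine
UUID=<uuid-from-blkid> /gluster_bricks/engine xfs inode64,noatime,nodiratime 0 0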

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHGS 3.4.4
gluster-ansible-roles-1.0.3-3

How reproducible:
-----------------
Always

Steps to Reproduce:
--------------------
1. Use HC role, to create bricks

Actual results:
----------------
Bricks are mounted using device names

Expected results:
-----------------
Bricks should be mounted using XFS UUIDs


Additional info:

--- Additional comment from SATHEESARAN on 2019-03-28 06:43:59 UTC ---

Can we use disk UUIDs to create bricks, or is it only the filesystem mounting that can use XFS UUIDs?

--- Additional comment from Sachidananda Urs on 2019-03-28 09:04:23 UTC ---

(In reply to SATHEESARAN from comment #1)
> Can we use disk UUIDs to create bricks, or is it only the filesystem mounting
> that can use XFS UUIDs?

sas, the UUIDs are created once we create the LVM/filesystem on the device.
We will mount using the UUID.

--- Additional comment from Sachidananda Urs on 2019-04-04 14:24:33 UTC ---

https://github.com/gluster/gluster-ansible-infra/pull/55

--- Additional comment from errata-xmlrpc on 2019-05-08 13:34:26 UTC ---

Bug report changed to ON_QA status by Errata System.
A QE request has been submitted for advisory RHEA-2019:41946-01
https://errata.devel.redhat.com/advisory/41946

--- Additional comment from bipin on 2019-05-22 11:28:24 UTC ---

Verified the bug using the below components:
===========================================
gluster-ansible-roles-1.0.5-1.el7rhgs.noarch
gluster-ansible-infra-1.0.4-1.el7rhgs.noarch

Steps:
=====
1. Start the gluster deployment
2. Once the deployment completes, check the /etc/fstab entries to confirm the bricks are mounted by UUID (a quick check is shown below)
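
An illustrative way to perform the check in step 2 (the /gluster_bricks mount-point prefix matches the output below):

grep '/gluster_bricks' /etc/fstab
# brick entries should begin with UUID=... rather than a /dev/... path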

Output:
======
#
# /etc/fstab
# Created by anaconda on Wed May  8 11:52:24 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/rhvh_rhsqa-grafton7-nic2/rhvh-4.3.0.6-0.20190418.0+1 / ext4 defaults,discard 1 1
UUID=7e246924-88d8-41f4-a97e-7f70ad3aed43 /boot                   ext4    defaults        1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-home /home ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-tmp /tmp ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-var /var ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-var_log /var/log ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-var_log_audit /var/log/audit ext4 defaults,discard 1 2
/dev/mapper/rhvh_rhsqa--grafton7--nic2-swap swap                    swap    defaults        0 0
UUID=5afe8908-7ce1-4ef1-91c6-1f2cc3b7fd28 /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
UUID=3a34b40c-5a89-4c26-b846-2982c7407f04 /gluster_bricks/engine xfs inode64,noatime,nodiratime 0 0
UUID=df372c7a-58a1-4e1c-bb45-c6c0d0316de3 /gluster_bricks/data xfs inode64,noatime,nodiratime 0
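
As an additional illustrative cross-check (not part of the original verification), a UUID from the fstab output above can be resolved back to its device, or the mounted brick can be listed with both its source device and UUID:

blkid -U 3a34b40c-5a89-4c26-b846-2982c7407f04   # prints the device holding this UUID
findmnt -o SOURCE,UUID,TARGET /gluster_bricks/engine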

--- Additional comment from SATHEESARAN on 2019-06-01 02:52:27 UTC ---

This is the change with the latest gluster-ansible: it uses the XFS UUID to mount instead of the direct device path.
So I consider this a change from the previous version of the RHHI deployment module.

Consider this bug for the release notes

--- Additional comment from Sachidananda Urs on 2019-06-01 03:56:26 UTC ---

(In reply to SATHEESARAN from comment #6)
> This is the change with the latest gluster-ansible: it uses the XFS UUID to
> mount instead of the direct device path.
> So I consider this a change from the previous version of the RHHI deployment
> module.
> 
> Consider this bug for the release notes

sas, this change is internal; the user does not know about the fstab entries.
The behavior of the installation and of the product does not change as far as
the user is concerned, which is why I think we need not mention it
in the release notes.

--- Additional comment from Sunil Kumar Acharya on 2019-06-26 04:16:46 UTC ---

Please update the doc text.

--- Additional comment from Sachidananda Urs on 2019-06-26 04:19:49 UTC ---

(In reply to Sunil Kumar Acharya from comment #8)
> Please update the doc text.

As per Comment 7, this bug does not require doc text.

Comment 1 Gobinda Das 2019-09-23 08:50:16 UTC
Sachi, Can you please update doc text?

Comment 2 Gobinda Das 2019-09-23 08:53:55 UTC
Removing needinfo, as the dependent bug notes that doc text is not required.

Comment 3 SATHEESARAN 2019-09-25 07:04:51 UTC
Removing RDT flag based on comment 2

Comment 5 errata-xmlrpc 2019-10-03 12:24:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2963

