Bug 1850830 - VDO mount options are added for the bricks created on top of LUKS devices
Summary: VDO mount options are added for the bricks created on top of LUKS devices
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-ansible
Version: rhgs-3.5
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Async Update
Assignee: Gobinda Das
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: 1850693
 
Reported: 2020-06-25 01:18 UTC by SATHEESARAN
Modified: 2020-07-23 06:46 UTC
CC List: 7 users

Fixed In Version: gluster-ansible-infra-1.0.4-11.el8rhgs.noarch.rpm
Doc Type: No Doc Update
Doc Text:
Clone Of: 1850693
Environment:
rhhiv
Last Closed: 2020-07-23 06:46:32 UTC
Embargoed:


Attachments: None


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2020:3121 0 None None None 2020-07-23 06:46:42 UTC

Description SATHEESARAN 2020-06-25 01:18:19 UTC
Description of problem:
-----------------------
When bricks are created on top of a LUKS device, those devices get mount options in /etc/fstab that are meant only for VDO devices.


Version-Release number of selected component (if applicable):
--------------------------------------------------------------
gluster-ansible-infra-1.0.4-10.el8rhgs
RHVH-4.4.1

How reproducible:
------------------
Always

Steps to Reproduce:
---------------------
1. Provide LUKS devices as the bricks for engine, vmstore, and data
2. Enable VDO on only one brick
3. Check the mount options in /etc/fstab for all the bricks

Actual results:
---------------
All the bricks contain the mount options relevant for VDO volumes

Expected results:
------------------
Only the bricks created on top of VDO volumes should have the relevant VDO options
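
The expected behavior can be sketched as a small helper that builds a brick's fstab mount options, adding the VDO-specific options only when the brick sits on a VDO volume. This is an illustrative sketch of the logic the role should apply; the function name and parameter are hypothetical, not the actual gluster-ansible-infra code:

```python
def build_mount_opts(is_vdo: bool) -> str:
    """Build /etc/fstab mount options for an XFS gluster brick.

    The VDO-specific options (_netdev, the systemd device timeout, and the
    dependency on vdo.service) must only be added when the brick is backed
    by a VDO volume; plain LUKS bricks get the base options only.
    """
    opts = ["inode64", "noatime", "nodiratime"]
    if is_vdo:
        opts += ["_netdev",
                 "x-systemd.device-timeout=0",
                 "x-systemd.requires=vdo.service"]
    return ",".join(opts)

print(build_mount_opts(False))  # plain LUKS brick: base options only
print(build_mount_opts(True))   # VDO-backed brick: base + VDO options
```

The buggy behavior reported here is equivalent to calling this with `is_vdo=True` for every brick whenever VDO is enabled anywhere on the host.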

Comment 1 SATHEESARAN 2020-06-25 05:46:31 UTC
Even when data and vmstore are the only bricks created on top of a VDO volume, the other bricks,
engine and testvol, still have the mount options relevant for VDO


UUID=eaf61294-73f2-4c27-8622-48149a98b8eb /gluster_bricks/engine xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
UUID=b99d1bae-c289-4a5b-bff3-7dd066b29f45 /gluster_bricks/data xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
UUID=c6810c7f-7015-414d-aea9-ad991852aa2f /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
UUID=c274902f-65e7-4fde-a1c5-ab5cf381bd5f /gluster_bricks/testvol xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
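
The faulty state above can be spotted mechanically by scanning the fstab text for entries whose options pull in vdo.service. A hypothetical check (not part of gluster-ansible-infra) against the sample from this comment:

```python
# Sample /etc/fstab content from this comment (all four bricks wrongly
# carry the VDO options, though only some sit on a VDO volume).
FSTAB = """\
UUID=eaf61294-73f2-4c27-8622-48149a98b8eb /gluster_bricks/engine xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
UUID=b99d1bae-c289-4a5b-bff3-7dd066b29f45 /gluster_bricks/data xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
UUID=c6810c7f-7015-414d-aea9-ad991852aa2f /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
UUID=c274902f-65e7-4fde-a1c5-ab5cf381bd5f /gluster_bricks/testvol xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
"""

def vdo_dependent_mounts(fstab_text: str) -> list:
    """Return the mount points whose fstab options require vdo.service."""
    mounts = []
    for line in fstab_text.splitlines():
        fields = line.split()
        # fields: device, mount point, fs type, options, dump, pass
        if len(fields) >= 4 and "x-systemd.requires=vdo.service" in fields[3].split(","):
            mounts.append(fields[1])
    return mounts

print(vdo_dependent_mounts(FSTAB))
```

With the bug present this flags all four bricks; after the fix (see comment 6) the same check on plain LUKS bricks would return an empty list.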

Comment 2 SATHEESARAN 2020-06-25 06:15:57 UTC
Upstream patch[1] is already posted

[1] - https://github.com/gluster/gluster-ansible-infra/pull/102

Comment 3 SATHEESARAN 2020-06-30 11:04:36 UTC
The fix is included in the build gluster-ansible-infra-1.0.4-11.el8rhgs.noarch.rpm

Comment 6 SATHEESARAN 2020-07-10 11:07:50 UTC
Tested with gluster-ansible-infra-1.0.4-11.el8rhgs

When LUKS devices are used as the brick disks, the /etc/fstab entries no longer include the VDO-specific mount options

UUID=eaf61294-73f2-4c27-8622-48149a98b8eb /gluster_bricks/engine xfs inode64,noatime,nodiratime 0 0
UUID=b99d1bae-c289-4a5b-bff3-7dd066b29f45 /gluster_bricks/data xfs inode64,noatime,nodiratime 0 0
UUID=c6810c7f-7015-414d-aea9-ad991852aa2f /gluster_bricks/vmstore xfs inode64,noatime,nodiratime 0 0
UUID=c274902f-65e7-4fde-a1c5-ab5cf381bd5f /gluster_bricks/testvol xfs inode64,noatime,nodiratime 0 0

Comment 8 errata-xmlrpc 2020-07-23 06:46:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3121

