Bug 1649507 - fstab entries are updated even for bricks on non-VDO volumes
Summary: fstab entries are updated even for bricks on non-VDO volumes
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.5
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHHI-V 1.5.z Async
Assignee: Sachidananda Urs
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1649509
Blocks:
 
Reported: 2018-11-13 18:34 UTC by SATHEESARAN
Modified: 2019-05-20 04:54 UTC
CC List: 4 users

Fixed In Version: gdeploy-2.0.2-31
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1649509
Environment:
Last Closed: 2019-05-20 04:54:45 UTC
Embargoed:



Description SATHEESARAN 2018-11-13 18:34:02 UTC
Description of problem:
------------------------
XFS filesystems (gluster bricks) created on VDO volumes require special mount options so that they are mounted only after the VDO service has started.

However, the fstab entries for gluster bricks on non-VDO volumes should not be updated with these options.
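
For illustration (using the entries shown in the Additional info below), a brick on a VDO volume needs an entry like:

/dev/gluster_vg_sdc/gluster_lv_data /gluster_bricks/data xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service 0 0

while a brick on a plain disk should keep only the base XFS options. The x-systemd.requires=vdo.service option orders the mount after vdo.service, so that VDO-backed bricks are not mounted before the VDO device exists.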


Version-Release number of selected component (if applicable):
--------------------------------------------------------------
gdeploy-2.0.2-30.el7rhgs


How reproducible:
------------------
Always


Steps to Reproduce:
-------------------
1. Create bricks on both a non-VDO and a VDO device using a gdeploy conf file (see the conf sketch after this list)
2. Check the fstab entries for the gluster bricks created on the non-VDO device.
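
A rough sketch of such a gdeploy conf, using the device and LV names from this report. The section names and keys here ([vdo], [backend-setup]) are from memory and are assumptions to be checked against the gdeploy documentation; in a real RHHI-V deployment this file is generated by the Cockpit deployment wizard:

<snip>
[hosts]
host1.example.com

# Create a VDO volume on sdc only; sdb stays a plain disk
[vdo]
action=create
devices=sdc
names=vdo_sdc

# Bricks: engine on the plain disk, data and vmstore on the VDO volume
[backend-setup]
devices=sdb,vdo_sdc
vgs=gluster_vg_sdb,gluster_vg_sdc
lvs=gluster_lv_engine,gluster_lv_data,gluster_lv_vmstore
mountpoints=/gluster_bricks/engine,/gluster_bricks/data,/gluster_bricks/vmstore
</snip>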

Actual results:
---------------
The fstab entries for the gluster bricks residing on non-VDO devices also have the VDO-specific mount options added.


Expected results:
------------------
Only the fstab entries for the gluster bricks residing on VDO volumes should have the VDO-specific mount options.


Additional info:
----------------

1. lsblk output:
-----------------
sdb                                                                 8:16   0   931G  0 disk 
└─gluster_vg_sdb-gluster_lv_engine                                253:10   0   100G  0 lvm  

sdc                                                                 8:32   0  18.2T  0 disk 
└─vdo_sdc                                                         253:20   0   160T  0 vdo  
  ├─gluster_vg_sdc-gluster_lv_data                                253:21   0    12T  0 lvm  /gluster_bricks/data
  └─gluster_vg_sdc-gluster_lv_vmstore                             253:22   0     4T  0 lvm  /gluster_bricks/vmstore

gluster_lv_engine is created on 'sdb', whereas gluster_lv_data and gluster_lv_vmstore are created on the VDO volume /dev/mapper/vdo_sdc.

2. Look for the fstab entries
------------------------------
/dev/gluster_vg_sdb/gluster_lv_engine /gluster_bricks/engine xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdc/gluster_lv_data /gluster_bricks/data xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdc/gluster_lv_vmstore /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service 0 0


Note that in the entries above, /dev/gluster_vg_sdb/gluster_lv_engine also has 'x-systemd.requires=vdo.service', although this LV is not on a VDO volume.
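
One way to confirm where an LV actually sits is to walk the block stack in reverse with lsblk (illustrative command and output, using the names from this report):

# lsblk -s /dev/mapper/gluster_vg_sdb-gluster_lv_engine
NAME                             MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
gluster_vg_sdb-gluster_lv_engine 253:10   0  100G  0 lvm  /gluster_bricks/engine
└─sdb                              8:16   0  931G  0 disk

No device of type 'vdo' appears in the chain, so this LV should not carry x-systemd.requires=vdo.service.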

Comment 1 Sachidananda Urs 2018-11-19 15:00:15 UTC
PR: https://github.com/gluster/gluster-ansible-infra/pull/41

Comment 2 Gobinda Das 2018-11-28 05:47:21 UTC
devel_ack is already set to +, so clearing the needinfo.

Comment 3 SATHEESARAN 2018-12-07 11:45:35 UTC
The dependent bug is moved to ON_QA

Comment 4 SATHEESARAN 2018-12-07 11:45:59 UTC
Tested with gdeploy-2.0.2-31.el7rhgs

The additional mount options (_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service) are added to /etc/fstab for XFS filesystems (gluster bricks) created on top of VDO volumes.

The other XFS filesystems, created directly on disks rather than on VDO volumes, do not have the VDO-specific options:
<snip>
/dev/gluster_vg_sdb/gluster_lv_engine /gluster_bricks/engine xfs inode64,noatime,nodiratime 0 0
/dev/gluster_vg_sdc/gluster_lv_data /gluster_bricks/data xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdc/gluster_lv_vmstore /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdd/gluster_lv_newvol /gluster_bricks/newvol xfs inode64,noatime,nodiratime 0 0

</snip>
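
As a quick sanity check (illustrative), grep /etc/fstab for the VDO option and confirm that every matching line belongs to a brick backed by a VDO volume:

<snip>
# grep 'x-systemd.requires=vdo.service' /etc/fstab
/dev/gluster_vg_sdc/gluster_lv_data /gluster_bricks/data xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdc/gluster_lv_vmstore /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
</snip>

Only the two VDO-backed bricks match, as expected.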

