Bug 1667209 - [gluster-ansible] fstab entries are updated even for bricks on non-VDO volumes
Summary: [gluster-ansible] fstab entries are updated even for bricks on non-VDO volumes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhgs-3.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHHI-V 1.6
Assignee: Sahina Bose
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1667208
Blocks: RHHI-V-1-6-Engineering-Backlog-BZs
 
Reported: 2019-01-17 18:08 UTC by SATHEESARAN
Modified: 2019-05-09 06:09 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, when a deployment used virtual disk optimization (VDO), options related to VDO devices were added to non-VDO disks in the /etc/fstab file. This meant that disks failed to mount after a reboot. This has been corrected so that devices are configured correctly and all disks mount correctly after reboot.
Clone Of: 1667208
Environment:
Last Closed: 2019-05-09 06:09:06 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2019:1121 0 None None None 2019-05-09 06:09:21 UTC

Description SATHEESARAN 2019-01-17 18:08:51 UTC
+++ This bug was initially created as a clone of Bug #1667208 +++

Description of problem:
------------------------
XFS filesystems (gluster bricks) created on VDO volumes require special mount options so that they are mounted only after the VDO service has started.

However, these VDO-specific mount options must not be added to the fstab entries of gluster bricks on non-VDO volumes.
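To make the intended behaviour concrete, here is a minimal Python sketch of the option logic (illustrative only, not the actual gluster-ansible code; the option lists are taken from the fstab entries shown later in this report):

BASE_OPTS = ["inode64", "noatime", "nodiratime"]
VDO_OPTS = ["_netdev", "x-systemd.device-timeout=0",
            "x-systemd.requires=vdo.service"]

def fstab_options(is_vdo_backed):
    """Return the comma-separated mount options for one brick."""
    opts = list(BASE_OPTS)
    if is_vdo_backed:
        # Only bricks on VDO-backed devices must wait for vdo.service.
        opts += VDO_OPTS
    return ",".join(opts)

# A brick directly on a disk (e.g. gluster_lv_engine on sdb) gets only
# the base options; a brick on a VDO volume gets the VDO options too.
print(fstab_options(False))  # inode64,noatime,nodiratime
print(fstab_options(True))   # ..._netdev,...,x-systemd.requires=vdo.service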


Version-Release number of selected component (if applicable):
--------------------------------------------------------------
gluster-ansible-roles-1.0.4


How reproducible:
------------------
Always


Steps to Reproduce:
-------------------
1. Using a gdeploy conf file, create bricks on both a non-VDO and a VDO device
2. Check the fstab entries for the gluster bricks created on the non-VDO volume (see the sketch below)
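A small Python helper like the following could automate step 2 (hypothetical, shown only to illustrate the check; the /gluster_bricks mount-point prefix matches the layout in this report):

def check_fstab(path="/etc/fstab"):
    with open(path) as fstab:
        for line in fstab:
            fields = line.split()
            # Skip comments and short lines; the fields are:
            # device, mountpoint, fstype, options, dump, pass.
            if len(fields) < 4 or fields[0].startswith("#"):
                continue
            device, mountpoint, _, options = fields[:4]
            if mountpoint.startswith("/gluster_bricks"):
                state = ("present" if "x-systemd.requires=vdo.service" in options
                         else "absent")
                print("%s -> %s: VDO options %s" % (device, mountpoint, state))

check_fstab()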

Actual results:
---------------
The fstab entries for gluster bricks residing on the non-VDO volume also have the VDO-specific mount options added


Expected results:
------------------
Only the fstab entries for gluster bricks residing on the VDO volume should have the VDO-specific mount options


Additional info:
----------------

1. lsblk output:
-----------------
sdb                                                                 8:16   0   931G  0 disk 
└─gluster_vg_sdb-gluster_lv_engine                                253:10   0   100G  0 lvm  

sdc                                                                 8:32   0  18.2T  0 disk 
└─vdo_sdc                                                         253:20   0   160T  0 vdo  
  ├─gluster_vg_sdc-gluster_lv_data                                253:21   0    12T  0 lvm  /gluster_bricks/data
  └─gluster_vg_sdc-gluster_lv_vmstore                             253:22   0     4T  0 lvm  /gluster_bricks/vmstore

gluster_lv_engine is created directly on 'sdb', whereas gluster_lv_data and gluster_lv_vmstore are created on the VDO volume /dev/mapper/vdo_sdc

2. Look for the fstab entries
------------------------------
/dev/gluster_vg_sdb/gluster_lv_engine /gluster_bricks/engine xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdc/gluster_lv_data /gluster_bricks/data xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdc/gluster_lv_vmstore /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service 0 0


Note that in the entries above, /dev/gluster_vg_sdb/gluster_lv_engine also has 'x-systemd.requires=vdo.service', although this LV is not on a VDO volume
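One way a deployment tool could decide per-LV whether the VDO options are needed is to walk the device stack, as the lsblk output above does, and look for a 'vdo' node. A rough Python sketch using lsblk's JSON output (an assumption about the approach, not how gluster-ansible implements the fix):

import json
import subprocess

def is_vdo_backed(device):
    # 'lsblk --inverse' lists the device followed by its parents, so a
    # 'vdo' node anywhere in the tree means the LV sits on a VDO volume.
    out = subprocess.run(
        ["lsblk", "--json", "--inverse", "-o", "NAME,TYPE", device],
        capture_output=True, text=True, check=True).stdout

    def walk(nodes):
        return any(node["type"] == "vdo" or walk(node.get("children", []))
                   for node in nodes)

    return walk(json.loads(out)["blockdevices"])

print(is_vdo_backed("/dev/gluster_vg_sdc/gluster_lv_data"))    # True  (on vdo_sdc)
print(is_vdo_backed("/dev/gluster_vg_sdb/gluster_lv_engine"))  # False (directly on sdb)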

Comment 3 SATHEESARAN 2019-02-23 11:45:46 UTC
Tested with gluster-ansible-infra-1.0.3

Only the fstab entries for filesystems on the VDO volume have the relevant VDO options.

Observed the following in the /etc/fstab file with a mix of VDO and non-VDO devices:

<snip>
/dev/gluster_vg_sdc/gluster_lv_data /gluster_bricks/data xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdc/gluster_lv_vmstore /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdb/gluster_lv_engine /gluster_bricks/engine xfs inode64,noatime,nodiratime 0 0
</snip>

Comment 5 errata-xmlrpc 2019-05-09 06:09:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:1121

