Bug 1548393 - XFS file systems created on VDO volumes are not coming up after reboot
Summary: XFS file systems created on VDO volumes are not coming up after reboot
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.5
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHHI-V 1.5
Assignee: Sahina Bose
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1548399
Blocks: 1520836
 
Reported: 2018-02-23 11:55 UTC by bipin
Modified: 2018-11-08 05:39 UTC (History)
2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1548399 (view as bug list)
Environment:
Last Closed: 2018-11-08 05:38:31 UTC
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2018:3523 0 None None None 2018-11-08 05:39:12 UTC

Description bipin 2018-02-23 11:55:11 UTC
Description of problem:

Deployed HE RHV 4.2 with Ansible on RHEL 7.5, with VDO volumes created. After rebooting one of the hosts, the host went down. After logging in to the mm console, there were no device-mapper paths for the gluster volumes that had been created. After commenting out the mount paths under /etc/fstab and rebooting, the host came up with the device-mapper paths present.


Version-Release number of selected component (if applicable):
kmod-kvdo-6.1.0.146-13.el7.x86_64
vdo-6.1.0.146-16.x86_64
glusterfs-3.12.2-4.el7rhgs.x86_64
glusterfs-fuse-3.12.2-4.el7rhgs.x86_64
rhvm-appliance-4.2-20180202.0.el7.noarch
rhv-release-4.2.2-1-001.noarch
gdeploy-2.0.2-22.el7rhgs.noarch
cockpit-ovirt-dashboard-0.11.11-0.1.el7ev.noarch

How reproducible:


Steps to Reproduce:
1. Deploy hosted engine RHV 4.2 on RHEL 7.5
2. Reboot one of the 3 POD hosts (in my case, a non-SPM host)
3. See the host go down

Actual results:
The host goes down after a reboot

Expected results:
The host should come up after a reboot 


Additional info:

Comment 1 SATHEESARAN 2018-03-02 14:54:45 UTC
The actual problem here is that when the server with the VDO volume is rebooted, systemd mounts the kernel filesystems and thus tries to mount the gluster XFS bricks listed in /etc/fstab.

But these filesystems are not yet available, as the VDO volume is not yet started, which causes the boot to fail and the host to be dropped into the maintenance shell.

Here is the VDO systemd config file:
# cat /usr/lib/systemd/system/vdo.service 
[Unit]
Description=VDO volume services
After=systemd-remount-fs.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/vdo start --all --confFile /etc/vdoconf.yml
ExecStop=/usr/bin/vdo stop --all --confFile /etc/vdoconf.yml

[Install]
WantedBy=multi-user.target

So the VDO service is started only after the kernel filesystem mounting service, which means the brick filesystems are mounted before their VDO backing device exists.
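For comparison, the same ordering could also be expressed directly on the generated mount unit with a systemd drop-in. This is only a sketch: the drop-in path below assumes a brick mounted at /home/bricks (mount unit home-bricks.mount), which is hypothetical.

```
# /etc/systemd/system/home-bricks.mount.d/vdo.conf (hypothetical path)
[Unit]
Requires=vdo.service
After=vdo.service
```

This has the same effect as the x-systemd.requires= fstab option, which makes systemd-fstab-generator emit equivalent Requires= and After= dependencies.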

Comment 2 SATHEESARAN 2018-03-02 15:01:21 UTC
The simple fix is to add the mount option 'x-systemd.requires=vdo.service' to the XFS entries in /etc/fstab that correspond to the gluster bricks.

The entry in fstab then becomes:

'/dev/mapper/vg1-lv1 /home/bricks xfs defaults,x-systemd.requires=vdo.service 0 0'
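As a sketch of applying this fix with sed, the option can be appended to the options (4th) field of each xfs entry. The sample entry mirrors the one above; the file path /tmp/fstab.example is just for demonstration, not a real system file.

```shell
# Create a sample fstab entry (same as the example above) for demonstration.
fstab=/tmp/fstab.example
printf '%s\n' '/dev/mapper/vg1-lv1 /home/bricks xfs defaults 0 0' > "$fstab"

# Append x-systemd.requires=vdo.service to the options field of every xfs entry.
sed -i -E 's|^(\S+\s+\S+\s+xfs\s+\S+)|\1,x-systemd.requires=vdo.service|' "$fstab"

cat "$fstab"
```

On a real host the same sed expression would be run against /etc/fstab (after taking a backup), restricted to the brick entries.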

Comment 3 SATHEESARAN 2018-05-02 10:22:18 UTC
Tested with gdeploy-2.0.2-26 and still see the same problem.

Checked with Ramky on this; he is working on fixing the issue.

Comment 8 Guillaume Pavese 2018-09-04 19:29:09 UTC
My solution to get this working on oVirt 4.2.6 was to add the following fstab mount options:

"x-systemd.requires=vdo.service,x-systemd.device-timeout=30,_netdev"

With these parameters, the volumes are always mounted on host reboot.
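For illustration, a complete fstab entry combining these options could look like the following (the device path and mount point are hypothetical):

```
/dev/mapper/vdo_vg-brick1 /gluster_bricks/brick1 xfs defaults,x-systemd.requires=vdo.service,x-systemd.device-timeout=30,_netdev 0 0
```

Here x-systemd.device-timeout=30 limits how long systemd waits for the backing device to appear, and _netdev defers the mount until network-related services are up, which further delays it past early boot.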

Comment 10 errata-xmlrpc 2018-11-08 05:38:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:3523

