Bug 1654584 - Update mount options for bricks created on vdo volume
Summary: Update mount options for bricks created on vdo volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gdeploy
Version: rhgs-3.4
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.z Batch Update 2
Assignee: Sachidananda Urs
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: 1630788
 
Reported: 2018-11-29 07:25 UTC by Sahina Bose
Modified: 2019-01-15 10:51 UTC
CC: 20 users

Fixed In Version: gdeploy-2.0.2-31
Doc Type: Bug Fix
Doc Text:
Previously, the VDO service failed to start due to mishandling by systemd. On rebooting a host added to a hyperconverged environment, the host was marked non-responsive in the user interface because the VDSM services failed to start. This update adds additional mount options, which lead to a successful mount, and the VDSM services now start without failure.
Clone Of: 1630788
Environment:
Last Closed: 2018-12-17 17:07:34 UTC
Embargoed:




Links:
Red Hat Product Errata RHBA-2018:3830 (Last Updated: 2018-12-17 17:07:39 UTC)

Description Sahina Bose 2018-11-29 07:25:20 UTC
+++ This bug was initially created as a clone of Bug #1630788 +++

Description of problem:

While upgrading a host from ovirt-4.2.2 to ovirt-4.2.3, the upgrade fails because ovirt-imageio-daemon does not start.

Error from daemon.log
2018-05-09 20:04:25,812 INFO    (MainThread) [server] Starting (pid=30963, version=1.3.1.2)
2018-05-09 20:04:25,815 ERROR   (MainThread) [server] Service failed (image_server=<ovirt_imageio_daemon.server.ThreadedWSGIServer instance at 0x7ff2b952e998>, ticket_server=None, running=True)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_imageio_daemon/server.py", line 60, in main
    start(config)
  File "/usr/lib/python2.7/site-packages/ovirt_imageio_daemon/server.py", line 101, in start
    UnixWSGIRequestHandler)
  File "/usr/lib64/python2.7/SocketServer.py", line 419, in __init__
    self.server_bind()
  File "/usr/lib/python2.7/site-packages/ovirt_imageio_daemon/uhttp.py", line 72, in server_bind
    self.socket.bind(self.server_address)
  File "/usr/lib64/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
error: [Errno 2] No such file or directory
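The traceback ends in socket.bind() failing with ENOENT because the parent directory of the daemon's UNIX socket (under /var/run/vdsm, a tmpfs path that can vanish across reboots) was missing. A minimal Python 3 sketch of the same failure mode (paths are illustrative; the original daemon ran on Python 2):

```python
import errno
import os
import socket
import tempfile

# Binding a UNIX socket whose parent directory does not exist fails with
# ENOENT -- the same "[Errno 2] No such file or directory" seen in daemon.log.
missing_dir = os.path.join(tempfile.mkdtemp(), "vdsm")  # deliberately never created
sock_path = os.path.join(missing_dir, "ovirt-imageio-daemon.sock")

caught = None
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.bind(sock_path)
except OSError as exc:  # socket.error in Python 2
    caught = exc
finally:
    s.close()

print("bind failed:", caught)
```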


Version-Release number of selected component (if applicable):


How reproducible:
Faced this on 1 of 3 hosts

Steps to Reproduce:
Upgrade host from engine UI


>
> Denis Keefe has come up with the workaround in
> https://bugzilla.redhat.com/show_bug.cgi?id=1639667#c18. I have tested it
> and it worked. After reboot, the /var/run/vdsm directory was intact.
>

We need to update /etc/fstab with the following additional mount options for bricks created on VDO volumes:
_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service
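As a sketch, one way to flag brick entries in fstab that are missing the VDO-related options (the sample fstab lines below are illustrative; point fstab=/etc/fstab on a real host):

```shell
#!/bin/sh
# Sketch: report /etc/fstab brick entries that lack the VDO-related mount
# options. A temporary sample fstab is used here for illustration.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/gluster_vg_sdc/gluster_lv_data /gluster_bricks/data xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdd/gluster_lv_newvol /gluster_bricks/newvol xfs inode64,noatime,nodiratime 0 0
EOF

# Field 2 is the mount point, field 4 the option list.
result=$(awk '$2 ~ /^\/gluster_bricks\// {
    ok = index($4, "x-systemd.requires=vdo.service") > 0
    print $2, (ok ? "OK" : "MISSING")
}' "$fstab")
echo "$result"
rm -f "$fstab"
```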

Comment 2 Sachidananda Urs 2018-11-29 13:26:24 UTC
Commit: https://github.com/gluster/gdeploy/commit/720050b25b

Comment 6 SATHEESARAN 2018-12-07 11:13:13 UTC
Tested with gdeploy-2.0.2-31.el7rhgs

The additional mount options (_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service) are added to /etc/fstab for XFS filesystems (Gluster bricks) created on top of VDO volumes:

<snip>
/dev/gluster_vg_sdb/gluster_lv_engine /gluster_bricks/engine xfs inode64,noatime,nodiratime 0 0
/dev/gluster_vg_sdc/gluster_lv_data /gluster_bricks/data xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdc/gluster_lv_vmstore /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdd/gluster_lv_newvol /gluster_bricks/newvol xfs inode64,noatime,nodiratime 0 0

</snip>
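For context, the x-systemd.* options are consumed by systemd-fstab-generator rather than by mount(8): x-systemd.requires=vdo.service adds Requires= and After= dependencies on vdo.service to the generated mount unit, x-systemd.device-timeout=0 disables the timeout while waiting for the backing device, and _netdev orders the mount after the network comes up. Roughly, the generated unit for the data brick behaves like the following (an illustrative sketch, not copied from a host):

```ini
# gluster_bricks-data.mount (behavior sketch of the generated unit)
[Unit]
Requires=vdo.service
After=vdo.service network-online.target

[Mount]
What=/dev/gluster_vg_sdc/gluster_lv_data
Where=/gluster_bricks/data
Type=xfs
Options=inode64,noatime,nodiratime,_netdev
```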

Comment 10 errata-xmlrpc 2018-12-17 17:07:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3830

