+++ This bug was initially created as a clone of Bug #1630788 +++

Description of problem:
While upgrading a host from ovirt-4.2.2 to ovirt-4.2.3, the upgrade fails because ovirt-imageio-daemon fails to start.

Error from daemon.log:

2018-05-09 20:04:25,812 INFO    (MainThread) [server] Starting (pid=30963, version=1.3.1.2)
2018-05-09 20:04:25,815 ERROR   (MainThread) [server] Service failed (image_server=<ovirt_imageio_daemon.server.ThreadedWSGIServer instance at 0x7ff2b952e998>, ticket_server=None, running=True)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_imageio_daemon/server.py", line 60, in main
    start(config)
  File "/usr/lib/python2.7/site-packages/ovirt_imageio_daemon/server.py", line 101, in start
    UnixWSGIRequestHandler)
  File "/usr/lib64/python2.7/SocketServer.py", line 419, in __init__
    self.server_bind()
  File "/usr/lib/python2.7/site-packages/ovirt_imageio_daemon/uhttp.py", line 72, in server_bind
    self.socket.bind(self.server_address)
  File "/usr/lib64/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
error: [Errno 2] No such file or directory

Version-Release number of selected component (if applicable):

How reproducible:
Faced this on 1 of 3 hosts

Steps to Reproduce:
Upgrade the host from the engine UI

> Denis Keefe has come up with the workaround in
> https://bugzilla.redhat.com/show_bug.cgi?id=1639667#c18. I have tested it
> and it worked. After reboot, the /var/run/vdsm directory was intact.

We need to update /etc/fstab with the additional mount options below for bricks using VDO volumes:

_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service
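The last frames of the traceback show the daemon failing while binding its unix socket, which is what produces "[Errno 2] No such file or directory" when the runtime directory under /var/run is missing after reboot. Below is a minimal sketch, not the daemon's code, that reproduces the same failure; the socket path is an assumption used only for illustration.

# Sketch: binding a unix socket whose parent directory does not exist
# raises ENOENT, matching the error seen in server_bind() above.
import socket

SOCK_PATH = "/var/run/vdsm/ovirt-imageio-daemon.sock"  # assumed path

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.bind(SOCK_PATH)
except socket.error as e:  # OSError on Python 3
    print("bind failed: %s" % e)  # e.g. [Errno 2] No such file or directory
finally:
    s.close()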
Commit: https://github.com/gluster/gdeploy/commit/720050b25b
Tested with gdeploy-2.0.2-31.el7rhgs.

The additional mount options (_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service) are added in /etc/fstab for the XFS filesystems (gluster bricks) created on top of VDO volumes:

<snip>
/dev/gluster_vg_sdb/gluster_lv_engine /gluster_bricks/engine xfs inode64,noatime,nodiratime 0 0
/dev/gluster_vg_sdc/gluster_lv_data /gluster_bricks/data xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdc/gluster_lv_vmstore /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
/dev/gluster_vg_sdd/gluster_lv_newvol /gluster_bricks/newvol xfs inode64,noatime,nodiratime 0 0
</snip>
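A quick way to re-check the same thing on a host is to scan /etc/fstab for the expected options. This is only a sketch, not part of gdeploy; the option list and mount points are taken from the snippet above, and which mount points are VDO-backed is an assumption supplied by the admin.

# Sketch: confirm that VDO-backed brick entries in /etc/fstab carry the
# ordering options added by the fix.
REQUIRED = {"_netdev", "x-systemd.device-timeout=0", "x-systemd.requires=vdo.service"}
VDO_BACKED = {"/gluster_bricks/data", "/gluster_bricks/vmstore"}  # example set

with open("/etc/fstab") as fstab:
    for line in fstab:
        fields = line.split()
        if len(fields) >= 4 and fields[1] in VDO_BACKED:
            missing = REQUIRED - set(fields[3].split(","))
            status = "ok" if not missing else "missing: " + ",".join(sorted(missing))
            print("%s %s" % (fields[1], status))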
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3830