Description of problem:
Restarting vdsmd (either manually or automatically after a crash) unmounts and remounts gluster-mounted volumes, causing any VMs running on those volumes to pause due to I/O errors. Those VMs cannot be restarted.

Version-Release number of selected component (if applicable):
oVirt 3.5.0 initial release -> nightlies on 12/10/14, specifically vdsm-4.16.7-* -> vdsm-4.16.8-6.gitc240f5c.el7.x86_64

Components from latest test:
vdsm-xmlrpc-4.16.8-6.gitc240f5c.el7.noarch
vdsm-4.16.8-6.gitc240f5c.el7.x86_64
vdsm-python-4.16.8-6.gitc240f5c.el7.noarch
vdsm-yajsonrpc-4.16.8-6.gitc240f5c.el7.noarch
vdsm-cli-4.16.8-6.gitc240f5c.el7.noarch
vdsm-python-zombiereaper-4.16.8-6.gitc240f5c.el7.noarch
vdsm-jsonrpc-4.16.8-6.gitc240f5c.el7.noarch
glusterfs-api-3.5.2-1.el7.x86_64
glusterfs-fuse-3.5.2-1.el7.x86_64
glusterfs-3.5.2-1.el7.x86_64
glusterfs-rdma-3.5.2-1.el7.x86_64
glusterfs-cli-3.5.2-1.el7.x86_64
glusterfs-libs-3.5.2-1.el7.x86_64

This only occurs on CentOS 7 hosts; on a CentOS 6 host with the same component levels, the VMs do not pause and continue running as expected.

How reproducible:
Always on CentOS 7 hosts.

Steps to Reproduce:
1. Run some VMs on a CentOS 7 oVirt 3.5.x host node.
2. Restart vdsmd from the command line.

Actual results:
VMs pause due to I/O errors and cannot be restarted.

Expected results:
Restarting vdsmd has no effect on running VMs.

Additional info:
Appears similar to https://bugzilla.redhat.com/show_bug.cgi?id=1162640; I have uploaded logs from an event there. Can provide more if needed. On a recent test, I was using a host that was slow enough restarting vdsmd that I could see that the gluster volume had been unmounted. Have not tested with a pure NFS mount.
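The unmount described above can be observed directly from the shell. A minimal sketch (the storage-domain path below is hypothetical; actual gluster mounts live under /rhev/data-center/mnt/glusterSD/): run the check before and after `systemctl restart vdsmd`, and on an affected host the second run reports the path as no longer mounted while the VMs pause.

```shell
# Report whether a given directory is currently a mountpoint, by scanning
# /proc/mounts. The path used below is a hypothetical gluster storage-domain
# mount, not taken from the logs in this report.
check_mount() {
    if grep -qs " $1 " /proc/mounts; then
        echo "mounted: $1"
    else
        echo "not mounted: $1"
    fi
}

check_mount /rhev/data-center/mnt/glusterSD/example.host:_volname
```

Run once before the vdsmd restart and once immediately after; a transition from "mounted" to "not mounted" reproduces the symptom.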
Please try to reproduce with vdsm-4.16.8, which has bug 1162640 fixed. Then, please share your {super,}vdsm.log after the vdsmd restart. glusterfs logs may prove useful, too.
Reproduction above was with vdsm-4.16.8-6.gitc240f5c.el7.x86_64. Will upload logs from this test for you.
Created attachment 967447 [details] log files from vm pause
That's probably because QEMU doesn't reopen the file descriptors after they got invalidated. See my comments here: https://bugzilla.redhat.com/show_bug.cgi?id=1058300
Fixed in Gerrit:
https://gerrit.ovirt.org/#/c/40239/
https://gerrit.ovirt.org/#/c/40240/

Tested on CentOS 7.
Merged and working fine in 3.6 Alpha. Can be closed.
Sorry, the patches seem not to be present in the 3.6 alpha branch, only in master. Please include them in alpha-2, since killing the storage is too dangerous.
(In reply to Christopher Pereira from comment #7)
> Sorry, the patches seems not to be present in 3.6 alpha branch, only in
> master.
> Please include in alpha-2, since killing the storage is too dangerous.

Since the patches are merged, this bug should be in MODIFIED. It will be included in the next upstream official build, and should already be available in the nightly builds (for the last month or so).
I just tested alpha-2 and the patches are now included.
Moving to ON_QA so this can be formally verified
Tested on 3.6-rc1 on CentOS 7. The patches have been verified in production for some months. The fix can be easily verified by checking that the glusterd service is NOT running inside the VDSM control group.

The main issue was: https://bugzilla.redhat.com/show_bug.cgi?id=1201355#c7
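The check described in this comment can be scripted. A sketch, assuming a Linux host with procps installed; the process name (glusterd) and cgroup keyword (vdsm) are taken from the comment above and may differ per setup:

```shell
# Check whether a named process is running inside a cgroup whose path
# mentions a given keyword (here: is glusterd inside a vdsm cgroup?).
in_cgroup() {
    proc_name=$1
    keyword=$2
    # Oldest matching PID, exact name match; empty if not running.
    pid=$(pgrep -x -o "$proc_name" 2>/dev/null || true)
    if [ -z "$pid" ]; then
        echo "$proc_name not running"
    elif grep -q "$keyword" "/proc/$pid/cgroup"; then
        echo "$proc_name inside $keyword cgroup (bad)"
    else
        echo "$proc_name outside $keyword cgroup (ok)"
    fi
}

in_cgroup glusterd vdsm
```

On a fixed host this should report glusterd outside the vdsm cgroup, so a vdsmd restart no longer tears down the gluster mounts.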
Bug tickets that are moved to testing must have target release set to make sure tester knows what to test. Please set the correct target release before moving to ON_QA.
Yaniv, what info do you need?
See comment #12
(In reply to Yaniv Dary from comment #14) The fix exists since 4.17.1, setting target release to 4.17.8 since no other version is available.
Tested with RHEV 3.6.3.3 and RHGS 3.1.2 RC by adding an RHGS node to a 3.5-compatible cluster.
1. Launched the VM with its disk image on a gluster storage domain.
2. Restarted vdsmd (vdsm-4.17.20-0.1.el7ev.noarch).
The app VM kept running uninterrupted.