Bug 1022961
Summary:           Gluster: running a VM from a gluster domain should use gluster URI instead of a fuse mount
Product:           [oVirt] vdsm
Component:         General
Status:            CLOSED CURRENTRELEASE
Severity:          high
Priority:          high
Version:           ---
Target Milestone:  ovirt-4.1.5
Target Release:    4.19.27
Hardware:          Unspecified
OS:                Unspecified
Whiteboard:
Fixed In Version:
Reporter:          Aharon Canan <acanan>
Assignee:          Denis Chaplygin <dchaplyg>
QA Contact:        SATHEESARAN <sasundar>
Docs Contact:
CC:                acanan, acrow, ahino, amureini, avi.miller, bao, bazulay, bmcclain, bugs, c.affolter, danken, dougsland, giuseppe.ragusa, herrold, howey.vernon, info, jcall, josh, lpeer, lveyde, lyarwood, mike, mkalinin, nbarcet, perfbz, rcyriac, ricardo.arguello, rs, sabose, sankarshan, sasundar, sbonazzo, scohen, s.kieske, srevivo, trichard, v.astafiev, vbellur, ybronhei, ylavi
Keywords:          Improvement
Flags:             ylavi: ovirt-4.1+, rule-engine: exception+, ylavi: requirements_defined?, ylavi: planning_ack+, sabose: devel_ack+, sasundar: testing_ack+
Doc Type:          Enhancement
Doc Text:          This release adds libgfapi support to the Manager and VDSM. libgfapi gives virtual machines faster access to their images stored on a Gluster volume than the fuse interface does. With the 'LibgfApi' data center feature enabled, or the 'libgfapi_supported' cluster-level feature enabled, virtual machines access their images stored on Gluster volumes directly via libgfapi.
Story Points:      ---
Clone Of:
Clones:            1488995 (view as bug list)
Environment:
Last Closed:       2017-08-23 08:05:26 UTC
Type:              Bug
Regression:        ---
Mount Type:        ---
Documentation:     ---
CRM:
Verified Versions:
Category:          ---
oVirt Team:        Gluster
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team:   ---
Target Upstream Version:
Embargoed:
Bug Depends On:    1032370, 1181466, 1247521, 1247933, 1397870
Bug Blocks:        1173669, 1177771, 1201355, 1225425, 1322852, 1411323, 1488995
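The Doc Text above names a cluster-level switch ('libgfapi_supported') for this enhancement. As a minimal sketch of how that flag is typically flipped from the engine host, assuming the engine-config key is LibgfApiSupported for the 4.1 compatibility level (an assumption based on the feature name, not something stated in this bug):

    # Hedged sketch: enable libgfapi for the 4.1 cluster compatibility level
    # and restart the engine so the new value is picked up. Key name and
    # version are assumptions; adjust to the actual deployment.
    import subprocess

    def enable_libgfapi(compat_version: str = "4.1") -> None:
        # Set the cluster-level flag for the given compatibility version.
        subprocess.run(
            ["engine-config", "-s", "LibgfApiSupported=true", f"--cver={compat_version}"],
            check=True,
        )
        # engine-config changes only take effect after an engine restart.
        subprocess.run(["systemctl", "restart", "ovirt-engine"], check=True)

    if __name__ == "__main__":
        enable_libgfapi()

Already-running VMs would keep their current (fuse) access path until they are shut down and started again.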
Comment 3
Aharon Canan
2013-10-27 11:59:37 UTC
The following comment was made in the oVirt weekly meeting today:

  10:54 < abaron> mburns: note that problem is not libgfapi per se but rather lack of support in libvirt for snapshots over libgfapi

but I do not see a Depends or a link to an RFE in libvirt for it ... does one exist?

Looks like the root cause for the gluster storage domain disappearing is that the mount process is executed by vdsm in the same cgroup, so a restart of vdsm causes systemd to also kill the mount process. If you can drop the mount, it will also solve bug #1201355.

Dropping the dependency on 1017289, which has been closed WONTFIX. VDSM won't provide 3.6 support on EL6, so there is no reason to be blocked by an EL6 WONTFIX bug while the EL7 one has been closed CURRENTRELEASE. Moving 1017289 to See Also, just for reference.

*** Bug 1177776 has been marked as a duplicate of this bug. ***

should this be on MODIFIED?
Is there anything left to do from the RHEV side?

(In reply to Yaniv Dary from comment #19)
> should this be on MODIFIED?
> Is there anything left to do from the RHEV side?

Yes - to write code that makes the VM use the URI, test it and merge it :-)

Is it possible to get some clarifications?

As far as I understand, this will be implemented in 4.0.0. Currently, a gluster URI cannot be used by vdsm; the disk is always accessed via a mount point, not as a network block device. Practically, this means that the FUSE overhead still affects VMs.

Why, in that case, does the oVirt feature page say the feature is implemented? It just gives instructions on how to avoid https://bugzilla.redhat.com/show_bug.cgi?id=1181466

oVirt page: http://www.ovirt.org/Features/GlusterFS_Storage_Domain

Maybe VMs can only be run in the recommended way, but is there some workaround to run a VM with a GlusterFS URI (with only one gluster volume, to avoid https://bugzilla.redhat.com/show_bug.cgi?id=1247521)? Is that true? If yes, is it possible to provide instructions so this can be tried?

(In reply to Andrejs Baulins from comment #21)
> Is it possible to get some clarifications?
>
> As far as I understand, this will be implemented in 4.0.0. Currently, a
> gluster URI cannot be used by vdsm; the disk is always accessed via a mount
> point, not as a network block device. Practically, this means that the FUSE
> overhead still affects VMs.
>
> Why, in that case, does the oVirt feature page say the feature is
> implemented? It just gives instructions on how to avoid
> https://bugzilla.redhat.com/show_bug.cgi?id=1181466
>
> oVirt page: http://www.ovirt.org/Features/GlusterFS_Storage_Domain
>
> Maybe VMs can only be run in the recommended way, but is there some
> workaround to run a VM with a GlusterFS URI (with only one gluster volume,
> to avoid https://bugzilla.redhat.com/show_bug.cgi?id=1247521)? Is that
> true? If yes, is it possible to provide instructions so this can be tried?

I believe the feature page is out of date. This feature was implemented in 3.3, but reverted before its GA as it broke snapshots. In any event, it is not available in the current oVirt release.

>
> In any event, it is not available in the current oVirt release.
Is it planned for the future?
Using libgfapi (the QEMU native driver for GlusterFS) offers a large performance improvement over a fuse-mounted gluster volume.
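To make the difference concrete, here is a small illustration (not vdsm code; the server, volume, and image path are made-up placeholders): with a fuse mount the disk is an ordinary file under the storage domain mount point, while with libgfapi QEMU opens the image over the Gluster protocol using a gluster:// URI.

    # Illustration only: contrast the fuse-mounted file path with the network
    # URI form QEMU uses when the image is opened via libgfapi. All names are
    # placeholders, not values taken from this bug.

    FUSE_PATH = (
        "/rhev/data-center/mnt/glusterSD/gluster1.example.com:_data/"
        "sd-uuid/images/img-uuid/vol-uuid"
    )

    def gfapi_uri(server: str, volume: str, image_path: str) -> str:
        """Build the gluster:// URI form of the same image."""
        return f"gluster://{server}/{volume}/{image_path}"

    print(FUSE_PATH)
    print(gfapi_uri("gluster1.example.com", "data", "sd-uuid/images/img-uuid/vol-uuid"))

Skipping the fuse round trip on every read and write is where the performance gain comes from.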
(In reply to SATHEESARAN from comment #23)
> > > In any event, it is not available in the current oVirt release.
>
> Is it planned for the future?
> Using libgfapi (the QEMU native driver for GlusterFS) offers a large
> performance improvement over a fuse-mounted gluster volume.

This is why this ticket exists. To track this change.

I believe these two issues should be linked: https://bugzilla.redhat.com/show_bug.cgi?id=1175800, with status "depends" or "related".

Updated the wiki page with the current status and the main open issues found. Readers can follow the dependent issues themselves to know the current status.

The storage team does not perform any work on HC, and the priority of this item will be very low outside the HC use case. We support the Gluster team in their work (reviewing patches and designs, for example) and are looking to get more requirements in order to perform additional work on HC from our side. Currently, however (and unfortunately), nothing is planned from our side. Moving this back.

*** Bug 1175800 has been marked as a duplicate of this bug. ***

Reassigning the release based on comment 7 of bug 1175800, and unblocking on the qemu multiple-host requirement, as the functionality we have now would be similar to fuse access. Opened a new RFE (bug 1322852) to add additional host support in gfapi access.

(In reply to Yaniv Dary from comment #27)
> The storage team does not perform any work on HC, and the priority of this
> item will be very low outside the HC use case. We support the Gluster team
> in their work (reviewing patches and designs, for example) and are looking
> to get more requirements in order to perform additional work on HC from our
> side. Currently, however (and unfortunately), nothing is planned from our
> side. Moving this back.

I was not trying to do an HC setup when I ran into this issue. It also affects non-HC setups that just happen to have a GlusterFS storage domain.

Moving from 4.0 alpha to 4.0 beta, since 4.0 alpha has already been released and the bug is not ON_QA.

oVirt 4.0 beta has been released, moving to the RC milestone.

oVirt 4.0 beta has been released, moving to the RC milestone.

Can anyone tell me why this still has [HC] in the title? The issue also affects oVirt installations using a completely separate GlusterFS cluster. The GlusterFS requirements for use as an oVirt storage domain need to be made much clearer IMHO, e.g. is replica 3 really required, or replica 2 with arbiter, erasure coding, and so on. Coming into this as a newbie would be very confusing.

Can you please update the bugs with the bug URL and clean up the patch list in this bug?

Tested with vdsm-4.19.28-1.el7ev.x86_64. When the engine is configured to use 'libgfapi', the VM disks are accessed via the gfapi access mechanism and it works well.
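As a rough sketch of one way such a check can be done on the host (not necessarily the procedure QA used; the read-only virsh call and the attribute match are assumptions): when libgfapi is in use, the live libvirt domain XML describes the disk as a network disk with the gluster protocol instead of a file under the fuse mount.

    # Hedged sketch: decide whether a running VM accesses its disks via
    # libgfapi by inspecting its live libvirt domain XML on the hypervisor.
    import subprocess

    def vm_uses_gfapi(vm_name: str) -> bool:
        # A read-only connection (-r) avoids needing libvirt SASL credentials
        # on a vdsm-managed host.
        xml = subprocess.run(
            ["virsh", "-r", "dumpxml", vm_name],
            capture_output=True, text=True, check=True,
        ).stdout
        # Gluster-backed network disks carry protocol='gluster' in their <source>.
        return "protocol='gluster'" in xml

    if __name__ == "__main__":
        print(vm_uses_gfapi("test-vm"))  # 'test-vm' is a placeholder VM name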