Bug 875871
Summary: | enable QEMU nbd block driver | | |
---|---|---|---|
Product: | Red Hat Enterprise Linux 7 | Reporter: | Paolo Bonzini <pbonzini> |
Component: | qemu-kvm | Assignee: | Miroslav Rezanina <mrezanin> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | Virtualization Bugs <virt-bugs> |
Severity: | unspecified | Docs Contact: | |
Priority: | unspecified | ||
Version: | 7.0 | CC: | areis, ari.lemmke, berrange, juzhang, knoel, mrezanin, pbonzini, qzhang, rjones, rvokal, shu, sluo, virt-maint, xutian |
Target Milestone: | rc | Keywords: | FutureFeature |
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | qemu-kvm-1.5.0-2.el7 | Doc Type: | Enhancement |
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2014-06-13 11:27:35 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Paolo Bonzini
2012-11-12 17:54:32 UTC
FYI from a libvirt POV we will be needing the NBD driver in the future to support use of qcow2 disk images with LXC guests. Upstream LKML have indicated that they consider NBD to be the FUSE equivalent for the block layer, so qemu-nbd + the NBD kmod are the only way for us to get qcow2 support for LXC, unless someone fancies extending the loopback driver to support qcow2.

OpenStack already uses the NBD kernel driver + qemu-img to support LXC + qcow2 disks. Though we don't intend to officially support LXC with OpenStack in RHEL, it would still be useful if the kmod existed, so people can use RHEL as a viable dev platform for OpenStack.

> Paolo, there's no qemu configure option to enable nbd.

There is:

  --block-drv-whitelist=qcow2,raw,file,host_device,host_cdrom,qed

we need to add "nbd" here.

> However, you mentioned it's better not to enable qemu-nbd so I guess we
> shouldn't do anything about this yet.

I think we're enabling qemu-nbd in RHEL6, so we'll have to enable it in RHEL7. Separate problem.

Fixed in qemu-kvm-1.5.0-2.el7

Verified on qemu-kvm-rhev-1.5.3-39.el7.x86_64:

  qemu-nbd -t -p 10000 RHEL-Server-7.0-64-virtio.raw

  ><fs> run
   100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00
  ><fs> list-filesystems
  /dev/sda1: xfs
  /dev/rhel/root: xfs
  /dev/rhel/swap: swap
  ><fs> mount /dev/rhel/root /
  ><fs> ls /
  bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
  ><fs> cat /etc/redhat-release
  Red Hat Enterprise Linux Server release 7.0 Beta (Maipo)

Besides comment 31, KVM QE also ran several storage VM migration tests using NBD.

This request was resolved in Red Hat Enterprise Linux 7.0. Contact your manager or support representative in case you have further questions about the request.

RHEL6 is THE current release till 2020-11-30 (until EOL). RHEL6 != RHEL7.
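The whitelist change discussed above can be sketched as a configure invocation. This is only a sketch: the option list is taken verbatim from the comment, the build must be run inside a qemu source tree, and the exact set of whitelist flags can differ between qemu versions.

```shell
# Sketch: rebuild qemu with "nbd" added to the block-driver whitelist.
# Run from the top of a qemu source tree; option list is from the comment above.
./configure \
    --block-drv-whitelist=qcow2,raw,file,host_device,host_cdrom,qed,nbd
make
```
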
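The verification above exports the image over TCP. For the LXC/qcow2 use case described in the first comment, the image can instead be attached to a local /dev/nbdX device via the NBD kernel module. A minimal sketch, assuming qemu-nbd is installed and run as root; disk.qcow2, port 10000, and /dev/nbd0 are illustrative names:

```shell
# Sketch: serve a disk image over NBD, or attach it locally via the kernel driver.
modprobe nbd                        # load the NBD kernel module

# TCP export, as in the verification transcript (-t keeps the server persistent):
qemu-nbd -t -p 10000 disk.qcow2 &

# Or attach the image directly to a local NBD device:
qemu-nbd -c /dev/nbd0 disk.qcow2
# ... /dev/nbd0 now behaves like a block device (partition, mount, fsck, ...) ...
qemu-nbd -d /dev/nbd0               # detach when done
```
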
You should remove the possibility to close RHEL6 bugs/feature requests by using RHEL7 as "current release". If unimplementable then use "no can do".

//arl (Ari Lemmke)

(In reply to Ari Lemmke from comment #35)
> RHEL6 is THE current release till 2020-11-30 (until EOL).
> RHEL6 != RHEL7.

Red Hat supports multiple RHEL versions at the same time. See https://access.redhat.com/support/policy/updates/errata/ for a detailed description of the RHEL life cycle and a list of actively supported versions.

> You should remove the possibility to close RHEL6 bugs/feature requests by
> using RHEL7 as "current release". If unimplementable then use "no can do".

Note that this bug was originally filed for RHEL 7.0 (see history). Because it was resolved in RHEL 7.0 (see comment #31), the bug was CLOSED CURRENTRELEASE. If there is any issue with this feature, please contact your support representative.