Bug 848070 - [RHEL 6.5] Add glusterfs support to qemu
Summary: [RHEL 6.5] Add glusterfs support to qemu
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.3
Hardware: x86_64
OS: All
Priority: high
Severity: high
Target Milestone: rc
Assignee: Asias He
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Duplicates: 994314
Depends On: 916645 994314
Blocks: 896690 960054 840987 849796 883503 883504 956919 970435 970469 973032 979271 979274 989672 1010837 1010838 1045047
 
Reported: 2012-08-14 14:02 UTC by Ademar Reis
Modified: 2013-12-19 14:22 UTC (History)
27 users

Fixed In Version: qemu-kvm-0.12.1.2-2.396.el6
Doc Type: Release Note
Doc Text:
Native Support for GlusterFS in QEMU: Native support for GlusterFS in QEMU allows native access to GlusterFS volumes using the libgfapi library instead of through a locally mounted FUSE file system. This native approach offers considerable performance improvements.
Clone Of:
Clones: 849796 970435 970469 973032 989672
Environment:
Last Closed: 2013-11-21 05:50:18 UTC
Target Upstream Version:


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2013:1553 normal SHIPPED_LIVE Important: qemu-kvm security, bug fix, and enhancement update 2013-11-20 21:40:29 UTC

Description Ademar Reis 2012-08-14 14:02:26 UTC
We need to evaluate and backport the native support for glusterfs, currently being implemented by IBM.

See http://lists.gnu.org/archive/html/qemu-devel/2012-08/msg01023.html (this is v5, previous versions of the patchset include build/install/config instructions)
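The patchset adds a gluster block driver that addresses images by URI instead of by a FUSE mount path. A minimal sketch of the URI shape, gluster[+transport]://server[:port]/volume/image (the transport, server, port, volume, and image names below are just the examples that appear in the test commands later in this bug):

```shell
# Assemble a gluster image URI of the form used by the new block driver.
# All argument values are examples, not requirements.
gluster_uri() {
    # $1=transport $2=server $3=port $4=volume $5=image path inside the volume
    printf 'gluster+%s://%s:%s/%s/%s\n' "$1" "$2" "$3" "$4" "$5"
}

gluster_uri tcp perf38 0 perf4 tst.qcow2
# → gluster+tcp://perf38:0/perf4/tst.qcow2
```

Such a URI can then be passed wherever a file name is accepted, e.g. as the `file=` value of a `-drive` option or as the target of `qemu-img create`, as the test comments below do.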

Comment 1 Bhavna Sarathy 2012-10-04 20:46:04 UTC
IBM posted a YouTube video of GlusterFS support in QEMU, worth checking out:
http://www.youtube.com/watch?v=JG3kF_djclg

Comment 7 Laura Novich 2013-06-04 03:08:05 UTC
Marking this as docs_scoped+ as this bug will affect the [Virtualization Administration Guide] for RHEL 6.5.
Note that the second round of scoping will determine the exact scope of the documentation requirement.

Comment 10 Ademar Reis 2013-07-17 20:34:25 UTC
Asias: once you have a scratchbuild for RHEL6, please add a link to it here, so that QE can start running some tests.

Comment 17 Ben England 2013-08-08 19:16:52 UTC
perf-dept now tracking this bz

Comment 19 Ademar Reis 2013-08-14 01:33:32 UTC
*** Bug 994314 has been marked as a duplicate of this bug. ***

Comment 22 Bob Sibley 2013-08-14 21:27:23 UTC
I have been trying to get the latest qemu-kvm build 393 working to test the libgfapi patch.

Current Software:
2.6.32-410.el6.x86_64

qemu-kvm-0.12.1.2-2.393.el6.x86_64
qemu-kvm-tools-0.12.1.2-2.393.el6.x86_64
qemu-guest-agent-0.12.1.2-2.393.el6.x86_64
qemu-img-0.12.1.2-2.393.el6.x86_64

glusterfs-fuse-3.4.0.18rhs-1.el6.x86_64
glusterfs-api-devel-3.4.0.18rhs-1.el6.x86_64
glusterfs-api-3.4.0.18rhs-1.el6.x86_64
glusterfs-3.4.0.18rhs-1.el6.x86_64
glusterfs-libs-3.4.0.18rhs-1.el6.x86_64
glusterfs-rdma-3.4.0.18rhs-1.el6.x86_64
glusterfs-server-3.4.0.18rhs-1.el6rhs.x86_64
glusterfs-geo-replication-3.4.0.18rhs-1.el6rhs.x86_64

cmd line:
/usr/libexec/qemu-kvm -name rhel64-3 -m 2048 -smp 2 -uuid 9625a0af-1a95-1b11-89ac-5678fa12345f -drive file=/kvm_guests/rhel64-3.img,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native -device virtio-blk-pci,scsi=off,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=gluster+tcp://perf38:0//perf4/tst.qcow2,if=none,id=drive-virtio-disk1,format=raw,cache=none -device virtio-blk-pci,scsi=off,drive=drive-virtio-disk1,id=virtio-disk1 -net nic,macaddr=52:54:00:a4:2f:1a,vlan=0 -net tap,script=/etc/qemu-ifup0,vlan=0,ifname=vnet0 -vnc :1 -vga cirrus 

error:
qemu-kvm: -drive file=gluster+tcp://perf38:0//perf4/tst.qcow2,if=none,id=drive-virtio-disk1,format=raw,cache=none: could not open disk image gluster+tcp://perf38:0//perf4/tst.qcow2: Operation not supported

Also:

When creating a raw-format image file, the image is not being preallocated.

qemu-img create gluster://perf38:0/perf4/tst.raw 4G

image: /perf4/perftest/tst.img
file format: raw
virtual size: 4.0G (4294967296 bytes)
disk size: 0
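The `disk size: 0` above means the raw image was created sparse: the 4G virtual size is reserved but no blocks are written. A local sketch of that distinction, using ordinary files in place of the gluster volume (file names here are hypothetical):

```shell
# Sparse vs. fully written raw files; st_blocks shows the real allocation.
cd "$(mktemp -d)"

truncate -s 4M sparse.raw                              # sparse: virtual size 4M, ~0 blocks on disk
dd if=/dev/zero of=full.raw bs=1M count=4 2>/dev/null  # fully written: ~4M on disk

# %s = apparent size in bytes, %b = allocated 512-byte blocks
stat -c '%n: %s bytes, %b blocks' sparse.raw full.raw
```

Whether this qemu-img build offers a preallocation knob for raw images is not confirmed here (newer qemu-img versions accept `-o preallocation=full`); otherwise the workaround is to write the image out fully, as `dd` does above.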

Comment 24 Asias He 2013-08-14 23:46:55 UTC
(gdb) p whitelist_rw
$12 = {0x555555798a97 "qcow2", 0x5555557c38f8 "raw", 0x5555557ab32e "file", 0x555555798b17 "host_device", 0x555555798b23 "host_cdrom", 
    0x555555798b2e "qed", 0x555555798b32 "rbd", 0x0}

In version 393 of qemu-kvm.spec.template, we have

--block-drv-rw-whitelist=qcow2,raw,file,host_device,host_cdrom,qed,gluster,rbd \

However, in version 393 of qemu-kvm.spec, we have

 --block-drv-rw-whitelist=qcow2,raw,file,host_device,host_cdrom,qed,rbd \


Michal, can you take a look?
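A mismatch like this between the template and the generated spec can be caught mechanically by extracting the whitelist value from each file and comparing. A self-contained sketch that recreates the two lines quoted above (in a real check you would point it at qemu-kvm.spec.template and qemu-kvm.spec in the dist-git checkout):

```shell
# Recreate the two whitelist lines from comment 24 and compare them.
cd "$(mktemp -d)"
cat > spec.template.line <<'EOF'
--block-drv-rw-whitelist=qcow2,raw,file,host_device,host_cdrom,qed,gluster,rbd \
EOF
cat > spec.line <<'EOF'
--block-drv-rw-whitelist=qcow2,raw,file,host_device,host_cdrom,qed,rbd \
EOF

# Pull out just the comma-separated format list after the configure flag.
extract_whitelist() {
    sed -n 's/.*--block-drv-rw-whitelist=\([^ \\]*\).*/\1/p' "$1"
}

tmpl=$(extract_whitelist spec.template.line)
spec=$(extract_whitelist spec.line)
[ "$tmpl" = "$spec" ] || echo "whitelist mismatch: template has [$tmpl], spec has [$spec]"
```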

Comment 25 mazhang 2013-08-15 03:20:53 UTC
Quick test on qemu-kvm-393: creating a new image on gluster was OK, but the guest can't boot from gluster.

[root@m2-mazhang ~]# rpm -qa |grep qemu
qemu-kvm-0.12.1.2-2.393.el6.x86_64
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
qemu-img-0.12.1.2-2.393.el6.x86_64
qemu-kvm-tools-0.12.1.2-2.393.el6.x86_64
[root@m2-mazhang ~]# qemu-img create -f qcow2 gluster://gluster-server/vol/test-for-new-qemu.qcow2 10G
Formatting 'gluster://gluster-server/vol/test-for-new-qemu.qcow2', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 
[root@m2-mazhang ~]# qemu-img info gluster://gluster-server/vol/test-for-new-qemu.qcow2
image: gluster://gluster-server/vol/test-for-new-qemu.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 136K
cluster_size: 65536
[root@m2-mazhang ~]# sh cmdline 
qemu-kvm: -drive file=gluster://gluster-server/vol/rhel6u5.raw,if=none,id=gfs0,cache=none,aio=native: could not open disk image gluster://gluster-server/vol/rhel6u5.raw: Operation not supported

As this problem blocks testing, I'm changing the status to ASSIGNED.
If there is any problem, please let me know. Thanks.

Comment 26 Bob Sibley 2013-08-15 16:31:53 UTC
testing https://brewweb.devel.redhat.com/taskinfo?taskID=6166306, buildArch (qemu-kvm-0.12.1.2-2.389.el6.g13.src.rpm, x86_64)

Quick test: I'm able to boot the guest and mount the disk using qemu 389.el6.g13.

cmd line:

/usr/libexec/qemu-kvm -name rhel64-3 -m 2048 -smp 2 -cpu host -uuid 9625a0af-1a95-1b11-89ac-5678fa12345f -usbdevice tablet -monitor pty -drive file=/kvm_guests/rhel64-3.img,if=none,id=drive-virtio-disk0,index=0,format=raw,cache=none,aio=native -device virtio-blk-pci,scsi=off,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=gluster+tcp://perf38:0//perf4/tst.img,if=none,id=drive-virtio-disk1,format=raw,cache=none -device virtio-blk-pci,scsi=off,drive=drive-virtio-disk1,id=virtio-disk1 -net nic,macaddr=52:54:00:a4:2f:1a,vlan=0 -net tap,script=/etc/qemu-ifup0,vlan=0,ifname=vnet0 -vnc :1 -k en-us &


Still seeing that raw-format .img creation is not being preallocated.

Comment 27 Ademar Reis 2013-08-16 21:37:04 UTC
(In reply to Asias He from comment #24)
> (gdb) p whitelist_rw
> $12 = {0x555555798a97 "qcow2", 0x5555557c38f8 "raw", 0x5555557ab32e "file",
> 0x555555798b17 "host_device", 0x555555798b23 "host_cdrom", 
>     0x555555798b2e "qed", 0x555555798b32 "rbd", 0x0}
> 
> In version 393 of qemu-kvm.spec.template, we have
> 
> --block-drv-rw-whitelist=qcow2,raw,file,host_device,host_cdrom,qed,gluster,
> rbd \
> 
> However, In version 393 of qemu-kvm.spec, we have
> 
>  --block-drv-rw-whitelist=qcow2,raw,file,host_device,host_cdrom,qed,rbd \
> 
> 
> Michal, can you take a look?

Michal: did you merge the spec file changes by hand? You're whitelisting gluster in the wrong place in the spec file. :-(

There are two places where configure flags are enabled: one for the guest-agent build, another for qemu itself. You're whitelisting gluster only for the guest-agent.

The patch from Asias is correct, so this was introduced at merge-time.

Please submit a fix ASAP.

Comment 31 mazhang 2013-08-19 05:42:09 UTC
Quick test on qemu-kvm-0.12.1.2-2.394.el6 also hits the problem from comment #27.

Starting program: /usr/libexec/qemu-kvm -M pc -cpu SandyBridge -m 4G -smp 2,sockets=1,cores=2,threads=1 -enable-kvm -name win2012 -uuid 990ea161-6b67-47b2-b803-19fb01d30d12 -smbios type=1,manufacturer=Red\ Hat,product=RHEV\ Hypervisor,version=el6,serial=koTUXQrb,uuid=feebc8fd-f8b0-4e75-abc3-e63fcdb67170 -k en-us -rtc base=localtime,clock=host,driftfix=slew -nodefaults -monitor stdio -qmp tcp:0:6667,server,nowait -boot menu=on -bios /usr/share/seabios/bios.bin -monitor unix:/tmp/monitor-unix,nowait,server -vga qxl -spice port=5900,disable-ticketing -drive file=gluster://gluster-server/vol/rhel6u5.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
[Thread debugging using libthread_db enabled]
qemu-kvm: -drive file=gluster://gluster-server/vol/rhel6u5.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop,aio=threads: could not open disk image gluster://gluster-server/vol/rhel6u5.qcow2: Operation not supported

Comment 33 mazhang 2013-08-19 11:06:38 UTC
Quick test on qemu-kvm-0.12.1.2-2.396.el6: creating an image and booting a guest with gluster both succeed. The problem didn't happen.

[root@m2-mazhang ~]# rpm -qa |grep qemu
qemu-img-0.12.1.2-2.396.el6.x86_64
qemu-kvm-0.12.1.2-2.396.el6.x86_64
qemu-kvm-debuginfo-0.12.1.2-2.396.el6.x86_64
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
qemu-kvm-tools-0.12.1.2-2.396.el6.x86_64

Comment 35 Gianluca Cecchi 2013-09-03 07:17:22 UTC
Hello,
some questions:
- are there any links for downloading and testing these packages with the upcoming oVirt 3.3 and its gluster support?
- do they require that the whole software stack is at the future 6.5 level (now in alpha?), or can I simply install them on a current 6.4 system?

Thanks,
Gianluca

Comment 39 errata-xmlrpc 2013-11-21 05:50:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-1553.html

