qemu and related tools (qemu-img) hang when using libgfapi from glusterfs-3.7.12.
For example, running the following qemu-img command against a single-brick glusterfs-3.7.12 volume causes qemu-img to hang:
# qemu-img create -f qcow2 gluster://localhost/testvol/testimg.qcow2 10G
With qemu-img at least, the hang happens when creating qcow2 images; the command doesn't hang when creating raw images.
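For comparison, the equivalent raw-image create against the same volume (which, per the above, completes normally) would be:
# qemu-img create -f raw gluster://localhost/testvol/testimg.raw 10G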
When creating a qcow2 image, qemu-img appears to reload the glusterfs graph several times. This can be observed in the attached log, where qemu-img is run against glusterfs-3.7.11.
With glusterfs-3.7.12, this doesn't happen because an early writev failure occurs on the brick transport with an EFAULT (Bad address) errno (see attached log). No further activity happens after this, and the qemu-img command hangs until the RPC ping-timeout expires and then fails.
Investigation into the cause of this error is ongoing.
This issue was originally reported on the gluster-users mailing list by Lindsay Mathieson, Kevin Lemonnier and Dmitry Melekhov.
Created attachment 1175883
qemu-img create libgfapi 3.7.11 log
Created attachment 1175888
qemu-img create libgfapi 3.7.12
Created attachment 1175955
tcpdump captured while creating a qcow2 image
Created attachment 1175956
tcpdump captured while creating a raw image
The image is actually created, even though this error was reported:
qemu-img: gluster://localhost/vms/qcow2.img: Could not resize image: Input/output error
[root@vm017 ~]# qemu-img info gluster://localhost/vms/qcow2.img
file format: qcow2
virtual size: 0 (0 bytes)
disk size: 193K
Format specific information:
    lazy refcounts: false
[root@vm017 ~]# qemu-img info gluster://localhost/vms/raw.img
file format: raw
virtual size: 32M (33554432 bytes)
disk size: 4.0K
There are no errors in the tcpdump that I could spot at a glance.
Created attachment 1175960
qemu-img running under ltrace (passed due to race condition?)
Created attachment 1175962
qemu-img running under ltrace (failed due to race condition?)
Debugged this along with Raghavendra Talur and Kaushal M. It turns out this is caused by http://review.gluster.org/#/c/14148/ .
pub_glfs_pwritev_async(..., iovec, iovec_count, ...) takes an array of iovecs as input, along with a count parameter indicating the number of iovecs passed. Internally, gfapi collates all the iovecs into a single iovec and sends it all the way down to the RPC (network) layer. Because the iovecs are collated into one, the count passed down should also be 1, but the patch was forwarding the count supplied by the user. That is, if the user passes 3 iovecs with a count of 3, gfapi copies them all into one iovec but still sends the count as 3, hence the issue.
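To make the failure mode concrete, here is a minimal, self-contained sketch in C. The helper names (collate_iovecs, send_to_rpc, rpc_submit) are hypothetical, for illustration only; they are not the actual gfapi functions:

#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>

/* Merge 'count' iovecs into one buffer; conceptually what gfapi's
   glfs_buf_copy does before handing the data to the RPC layer.
   (Error handling omitted for brevity.) */
static struct iovec collate_iovecs(const struct iovec *iov, int count)
{
        size_t total = 0;
        for (int i = 0; i < count; i++)
                total += iov[i].iov_len;

        struct iovec out = { .iov_base = malloc(total), .iov_len = total };
        char *p = out.iov_base;
        for (int i = 0; i < count; i++) {
                memcpy(p, iov[i].iov_base, iov[i].iov_len);
                p += iov[i].iov_len;
        }
        return out;
}

void send_to_rpc(struct iovec *iov, int count)
{
        struct iovec one = collate_iovecs(iov, count);

        /* Buggy: the transport is told there are 'count' iovecs, so it
           walks past 'one' into invalid memory -- the EFAULT writev
           failure seen on the brick transport:
           rpc_submit(&one, count);                                    */

        /* Fixed (the spirit of the patch below): the collated buffer
           is exactly one iovec:
           rpc_submit(&one, 1);                                        */

        free(one.iov_base);
}

This would also explain why only qcow2 creation hangs: presumably the qcow2 driver issues vectored writes with more than one iovec, while raw image creation does not.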
A fix for this will be sent, and we will try to include it in 3.7.13.
REVIEW: http://review.gluster.org/14854 (gfapi: update count when glfs_buf_copy is used) posted (#1) for review on master by Raghavendra Talur (firstname.lastname@example.org)
REVIEW: http://review.gluster.org/14859 (gfapi: update count when glfs_buf_copy is used) posted (#1) for review on release-3.7 by Poornima G (email@example.com)
COMMIT: http://review.gluster.org/14859 committed in release-3.7 by Kaushal M (firstname.lastname@example.org)
Author: Raghavendra Talur <email@example.com>
Date: Mon Jul 4 18:36:26 2016 +0530
gfapi: update count when glfs_buf_copy is used
Backport of http://review.gluster.org/#/c/14854
glfs_buf_copy collates all iovecs into a single iovec with count=1. If
gio->count is not updated it will lead to dereferencing of invalid
addresses.
Signed-off-by: Raghavendra Talur <firstname.lastname@example.org>
Signed-off-by: Poornima G <email@example.com>
Reported-By: Lindsay Mathieson <firstname.lastname@example.org>
Reported-By: Dmitry Melekhov <email@example.com>
Reported-By: Tom Emerson <TEmerson@cyberitas.com>
Smoke: Gluster Build System <firstname.lastname@example.org>
Reviewed-by: Prashanth Pai <email@example.com>
NetBSD-regression: NetBSD Build System <firstname.lastname@example.org>
CentOS-regression: Gluster Build System <email@example.com>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.13, please open a new bug report.
glusterfs-3.7.13 has been announced on the Gluster mailing lists; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.