Bug 1352632

Summary: qemu libgfapi clients hang when doing I/O
Product: [Community] GlusterFS
Component: libgfapi
Version: 3.8.0
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Reporter: Raghavendra Talur <rtalur>
Assignee: bugs <bugs>
QA Contact: Sudhir D <sdharane>
CC: bugs, kaushal, lindsay.mathieson, ndevos, pgurusid, rtalur, sdharane
Keywords: Triaged
Fixed In Version: glusterfs-3.8.1
Clone Of: 1352482
Last Closed: 2016-07-08 14:42:35 UTC
Type: Bug
Bug Depends On: 1352634
Bug Blocks: 1345943

Description Raghavendra Talur 2016-07-04 13:23:08 UTC
+++ This bug was initially created as a clone of Bug #1352482 +++

qemu and related tools (qemu-img) hang when using libgfapi from glusterfs-3.7.12.

For example, running the following qemu-img command against a single-brick glusterfs-3.7.12 volume causes qemu-img to hang:

# qemu-img create -f qcow2 gluster://localhost/testvol/testimg.qcow2 10G

With qemu-img, at least, the hang happens when creating qcow2 images; the command does not hang when creating raw images.
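
For comparison, an equivalent raw-format create of the same form completes without hanging (the image name below is only illustrative):

# qemu-img create -f raw gluster://localhost/testvol/testimg.raw 10G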

When creating a qcow2 image, qemu-img appears to reload the glusterfs graph several times. This can be observed in the attached log, where qemu-img is run against glusterfs-3.7.11.

With glusterfs-3.7.12 this reload does not happen, because an early writev failure occurs on the brick transport with an EFAULT (Bad address) errno (see attached log). No further activity follows, and the qemu-img command hangs until the RPC ping-timeout expires, after which it fails.

Investigation into the cause of this error is still ongoing.

This issue was originally reported on the gluster-users mailing list by Lindsay Mathieson, Kevin Lemonnier and Dmitry Melekhov. [1][2][3]

[1] https://www.gluster.org/pipermail/gluster-users/2016-June/027144.html
[2] https://www.gluster.org/pipermail/gluster-users/2016-June/027186.html
[3] https://www.gluster.org/pipermail/gluster-users/2016-July/027218.html

--- Additional comment from Niels de Vos on 2016-07-04 16:36:23 IST ---

The image is actually created, even though this error was reported:

qemu-img: gluster://localhost/vms/qcow2.img: Could not resize image: Input/output error

[root@vm017 ~]# qemu-img info gluster://localhost/vms/qcow2.img 
image: gluster://localhost/vms/qcow2.img
file format: qcow2
virtual size: 0 (0 bytes)
disk size: 193K
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false


[root@vm017 ~]# qemu-img info gluster://localhost/vms/raw.img 
image: gluster://localhost/vms/raw.img
file format: raw
virtual size: 32M (33554432 bytes)
disk size: 4.0K


There are no errors in the tcpdump that I could spot at a glance.

--- Additional comment from Poornima G on 2016-07-04 18:25:09 IST ---

RCA:

Debugged this along with Raghavendra Talur and Kaushal M. It turns out this is caused by http://review.gluster.org/#/c/14148/ .

pub_glfs_pwritev_async(..., iovec, iovec_count, ...) takes an array of iovecs as input, along with a count parameter that indicates how many iovecs were passed. Internally, gfapi collates all the iovecs into a single iovec and sends that all the way down to the RPC (network) layer. Because the iovecs are collated into one, the count passed downstream should also become 1, but the patch above forwards the count supplied by the user. That is, if the user passes 3 iovecs with count 3, gfapi copies them into a single iovec but still sends a count of 3, and hence the issue.

A fix for this will be sent, and we will try to include it in 3.7.13.

Regards,
Poornima
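
To illustrate the pattern described in the RCA above, here is a minimal, self-contained sketch. It is not the actual gfapi source: collate_iovecs() and struct io_request are hypothetical names used only to show why the count passed downstream must become 1 once the iovecs have been collated.

#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>

/* Hypothetical helper: copy 'count' iovecs into one contiguous buffer
 * (error handling omitted for brevity). */
static struct iovec
collate_iovecs (const struct iovec *iov, int count)
{
        size_t       total = 0, off = 0;
        struct iovec out;
        int          i;

        for (i = 0; i < count; i++)
                total += iov[i].iov_len;

        out.iov_base = malloc (total);
        out.iov_len = total;

        for (i = 0; i < count; i++) {
                memcpy ((char *) out.iov_base + off, iov[i].iov_base,
                        iov[i].iov_len);
                off += iov[i].iov_len;
        }
        return out;
}

/* Hypothetical stand-in for the async I/O request gfapi builds. */
struct io_request {
        struct iovec iov;   /* single collated iovec             */
        int          count; /* number of iovecs actually present */
};

static void
build_request (struct io_request *req, const struct iovec *iov, int count)
{
        req->iov = collate_iovecs (iov, count);
        /*
         * Buggy behaviour: req->count = count; the transport then walks
         * 'count' iovec slots although only one exists, hence the EFAULT
         * seen on the brick.
         *
         * Fixed behaviour (http://review.gluster.org/14854): the count is
         * forced to 1 after collating.
         */
        req->count = 1;
}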

--- Additional comment from Vijay Bellur on 2016-07-04 18:39:11 IST ---

REVIEW: http://review.gluster.org/14854 (gfapi: update count when glfs_buf_copy is used) posted (#1) for review on master by Raghavendra Talur (rtalur)

Comment 1 Vijay Bellur 2016-07-05 05:48:25 UTC
REVIEW: http://review.gluster.org/14858 (gfapi: update count when glfs_buf_copy is used) posted (#1) for review on release-3.8 by Poornima G (pgurusid)

Comment 2 Vijay Bellur 2016-07-05 05:54:09 UTC
REVIEW: http://review.gluster.org/14858 (gfapi: update count when glfs_buf_copy is used) posted (#2) for review on release-3.8 by Poornima G (pgurusid)

Comment 3 Vijay Bellur 2016-07-07 08:54:44 UTC
COMMIT: http://review.gluster.org/14858 committed in release-3.8 by Niels de Vos (ndevos) 
------
commit 01fed48b5b096bc22069003190377f45cca2176f
Author: Raghavendra Talur <rtalur>
Date:   Mon Jul 4 18:36:26 2016 +0530

    gfapi: update count when glfs_buf_copy is used
    
    Backport of http://review.gluster.org/#/c/14854
    
    glfs_buf_copy collates all iovecs into a iovec with count=1. If
    gio->count is not updated it will lead to dereferencing of invalid
    address.
    
    Change-Id: I7c58071d5c6515ec6fee3ab36af206fa80cf37c3
    BUG: 1352632
    Signed-off-by: Raghavendra Talur <rtalur>
    Signed-off-by: Poornima G <pgurusid>
    Reported-By: Lindsay Mathieson <lindsay.mathieson>
    Reported-By: Dmitry Melekhov <dm>
    Reported-By: Tom Emerson <TEmerson>
    Reviewed-on: http://review.gluster.org/14858
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Prashanth Pai <ppai>
    CentOS-regression: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Niels de Vos <ndevos>

Comment 4 Niels de Vos 2016-07-08 14:42:35 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.1, please open a new bug report.

glusterfs-3.8.1 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.packaging/156
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user