Bug 996831 - qemu-kvm segmentation fault while booting a guest from glusterfs with a wrong host name
Summary: qemu-kvm segmentation fault while booting a guest from glusterfs with a wrong host name
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Ademar Reis
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-08-14 05:45 UTC by mazhang
Modified: 2016-09-20 04:39 UTC (History)
CC List: 13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 996829
: 998778 (view as bug list)
Environment:
Last Closed: 2014-06-13 10:26:10 UTC
Target Upstream Version:
Embargoed:



Description mazhang 2013-08-14 05:45:50 UTC
+++ This bug was initially created as a clone of Bug #996829 +++

Description of problem:
Actually, this problem comes from a mistake: I wanted to access the image on the gluster server by host name, but forgot to add the host name and IP address to /etc/hosts. Booting the guest then causes a segmentation fault.


Version-Release number of selected component (if applicable):

host: RHEL-7.0-20130628.0
qemu-kvm-1.5.2-3.el7.x86_64

gluster server: RHS-2.1-20130806.n.2
glusterfs-server-3.4.0.17rhs-1.el6rhs.x86_64


How reproducible:
100%


Steps to Reproduce:
1. [root@m-qz ~]# ping gluster-server
ping: unknown host gluster-server

2. boot up guest with:
...
-netdev tap,id=hostnet0,vhost=on \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:39:13:2c \
-drive file=gluster://gluster-server/vol/rhel6u5.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop,aio=threads \
-device virtio-blk-pci,scsi=off,bus=pci.0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=0 \ 


Actual results:
qemu-kvm segmentation fault.

Using host libthread_db library "/lib64/libthread_db.so.1".
[New Thread 0x7fffeaa26700 (LWP 19877)]
[New Thread 0x7fffea225700 (LWP 19878)]
[New Thread 0x7fffe8f49700 (LWP 19879)]
[New Thread 0x7fffe8122700 (LWP 19880)]
qemu-kvm: -drive file=gluster://gluster-server/vol/rhel6u5.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop,aio=threads: Gluster connection failed for server=gluster-server port=0 volume=vol image=rhel6u5.qcow2 transport=tcp

Program received signal SIGSEGV, Segmentation fault.
0x00007ffff5c475c8 in glfs_lseek () from /lib64/libgfapi.so.0
(gdb) bt
#0  0x00007ffff5c475c8 in glfs_lseek () from /lib64/libgfapi.so.0
#1  0x00005555555e5788 in qemu_gluster_getlength ()
#2  0x00005555555da700 in bdrv_open_common ()
#3  0x00005555555df7fa in bdrv_file_open ()
#4  0x00005555555df9e5 in bdrv_open ()
#5  0x000055555560e24e in drive_init ()
#6  0x000055555572d0ab in drive_init_func ()
#7  0x000055555585b96b in qemu_opts_foreach ()
#8  0x00005555555c401a in main ()


Expected results:
qemu-kvm should quit with a warning instead of crashing.

Additional info:
1. After adding the IP address and host name to "/etc/hosts", everything works well.
2. RHEL 6.5 also hits this problem, but the backtrace appears to be different.
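
The workaround in item 1 amounts to a static hosts entry; a minimal sketch (192.0.2.10 is a documentation placeholder, not an address from this report):

```
# /etc/hosts -- map the gluster server's name to its address
# 192.0.2.10 is an example address; substitute the server's real IP
192.0.2.10   gluster-server
```

With the entry in place, "gluster-server" resolves and qemu-kvm can reach the volume instead of failing name resolution.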

Comment 2 Asias He 2013-08-14 07:03:20 UTC
Reproduced. This affects upstream qemu as well.

Comment 3 Asias He 2013-08-14 07:12:06 UTC
In block/gluster.c

qemu_gluster_init {
    ret = glfs_init(glfs);
    if (ret) {
        ...
        goto out;
    }
    return glfs;

out:
    if (glfs) {
        old_errno = errno;
        glfs_fini(glfs);
        errno = old_errno;
    }
    return NULL;
}

glfs_init() does not set errno on failure, so even when glfs_init() fails, errno can still be 0.

As a result, qemu_gluster_open() in block/gluster.c ends up reporting 0 (success) when qemu_gluster_init() fails:
qemu_gluster_open {
    s->glfs = qemu_gluster_init(gconf, filename);
    if (!s->glfs) {
        ret = -errno;
        goto out;
    }
}

-------------------------------------------
/*
  SYNOPSIS

  glfs_init: Initialize the 'virtual mount'

  DESCRIPTION

  This function initializes the glfs_t object. This consists of many steps:
  - Spawn a poll-loop thread.
  - Establish connection to management daemon and receive volume specification.
  - Construct translator graph and initialize graph.
  - Wait for initialization (connecting to all bricks) to complete.

  PARAMETERS

  @fs: The 'virtual mount' object to be initialized.

  RETURN VALUES

   0 : Success.
  -1 : Failure. @errno will be set with the type of failure.
*/

Comment 4 Ben Turner 2013-09-25 18:28:06 UTC
Patch posted in:

https://bugzilla.redhat.com/show_bug.cgi?id=998778#c3

Comment 5 Ademar Reis 2013-10-09 02:17:14 UTC
Asias believes no code changes are needed now that the glusterfs bug has been fixed. Please test.

Comment 7 mazhang 2014-01-22 06:43:49 UTC
(In reply to Ademar Reis from comment #5)
> Asias believes no code changes are needed now that the glusterfs bug has
> been fixed. Please test.

The glusterfs bug has been fixed; qemu-kvm required no code changes.

Installed the latest packages and retested this bug.

Host:
qemu-kvm-tools-1.5.3-39.el7.x86_64
ipxe-roms-qemu-20130517-1.gitc4bce43.el7.noarch
qemu-kvm-common-1.5.3-39.el7.x86_64
qemu-kvm-1.5.3-39.el7.x86_64
qemu-kvm-debuginfo-1.5.3-39.el7.x86_64
qemu-img-1.5.3-39.el7.x86_64
glusterfs-api-3.4.0.51rhs-1.el7.x86_64
glusterfs-libs-3.4.0.51rhs-1.el7.x86_64
glusterfs-fuse-3.4.0.51rhs-1.el7.x86_64
glusterfs-3.4.0.51rhs-1.el7.x86_64

Steps:
Start qemu-kvm with:
-drive file=gluster://gluster/gv0/rhel7-64-bak.raw,if=none,id=drive-ide0-0-1,format=raw,cache=none,aio=threads \
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-0-1,id=ide0-0-1,bootindex=0 \


Result:
qemu-kvm quit with a warning; no crash found.

qemu-kvm: -drive file=gluster://gluster/gv0/rhel7-64-bak.raw,if=none,id=drive-ide0-0-1,format=raw,cache=none,aio=threads: Gluster connection failed for server=gluster port=0 volume=gv0 image=rhel7-64-bak.raw transport=tcp
qemu-kvm: -drive file=gluster://gluster/gv0/rhel7-64-bak.raw,if=none,id=drive-ide0-0-1,format=raw,cache=none,aio=threads: could not open disk image gluster://gluster/gv0/rhel7-64-bak.raw: Could not open 'glusterfs:data_pair_t': Transport endpoint is not connected

This bug has been fixed.

Comment 9 Ludek Smid 2014-06-13 10:26:10 UTC
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.

