+++ This bug was initially created as a clone of Bug #996829 +++
Description of problem:
This problem actually stems from a user mistake: I wanted to access an image on the gluster server by host name, but forgot to add the host name and IP address to /etc/hosts. Booting the guest then causes a segmentation fault.
Version-Release number of selected component (if applicable):
host: RHEL-7.0-20130628.0
qemu-kvm-1.5.2-3.el7.x86_64
gluster server: RHS-2.1-20130806.n.2
glusterfs-server-3.4.0.17rhs-1.el6rhs.x86_64
How reproducible:
100%
Steps to Reproduce:
1. [root@m-qz ~]# ping gluster-server
ping: unknown host gluster-server
2. Boot the guest with:
...
-netdev tap,id=hostnet0,vhost=on \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:39:13:2c \
-drive file=gluster://gluster-server/vol/rhel6u5.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop,aio=threads \
-device virtio-blk-pci,scsi=off,bus=pci.0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=0 \
Actual results:
qemu-kvm segmentation fault.
Using host libthread_db library "/lib64/libthread_db.so.1".
[New Thread 0x7fffeaa26700 (LWP 19877)]
[New Thread 0x7fffea225700 (LWP 19878)]
[New Thread 0x7fffe8f49700 (LWP 19879)]
[New Thread 0x7fffe8122700 (LWP 19880)]
qemu-kvm: -drive file=gluster://gluster-server/vol/rhel6u5.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop,aio=threads: Gluster connection failed for server=gluster-server port=0 volume=vol image=rhel6u5.qcow2 transport=tcp
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff5c475c8 in glfs_lseek () from /lib64/libgfapi.so.0
(gdb) bt
#0 0x00007ffff5c475c8 in glfs_lseek () from /lib64/libgfapi.so.0
#1 0x00005555555e5788 in qemu_gluster_getlength ()
#2 0x00005555555da700 in bdrv_open_common ()
#3 0x00005555555df7fa in bdrv_file_open ()
#4 0x00005555555df9e5 in bdrv_open ()
#5 0x000055555560e24e in drive_init ()
#6 0x000055555572d0ab in drive_init_func ()
#7 0x000055555585b96b in qemu_opts_foreach ()
#8 0x00005555555c401a in main ()
Expected results:
qemu-kvm should quit with a warning instead of crashing.
Additional info:
1. After adding the IP address and host name to /etc/hosts, everything works well (example entry below).
2. rhel6u5 hits this problem as well, but the backtrace appears to be different.
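For illustration, the missing /etc/hosts entry would look like this (192.168.122.100 is a hypothetical address; substitute the gluster server's real IP):

    192.168.122.100   gluster-server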
In block/gluster.c (qemu-kvm 1.5.x):

static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename)
{
    ...
    ret = glfs_init(glfs);
    if (ret) {
        ...
        goto out;
    }
    return glfs;

out:
    if (glfs) {
        /* preserve errno across glfs_fini(), which may clobber it */
        old_errno = errno;
        glfs_fini(glfs);
        errno = old_errno;
    }
    return NULL;
}
glfs_init() does not set errno on this failure path, so even when glfs_init() fails, errno is still 0. In block/gluster.c, qemu_gluster_open() therefore returns -errno == 0, which callers take as success; qemu then goes on to use the unusable glfs handle and crashes in glfs_lseek() (frame #0 in the backtrace above):
static int qemu_gluster_open(...)
{
    ...
    s->glfs = qemu_gluster_init(gconf, filename);
    if (!s->glfs) {
        ret = -errno;   /* errno is still 0 here, so ret == 0 and the error is lost */
        goto out;
    }
    ...
}
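To make the mechanism concrete, here is a minimal standalone sketch of the failure mode (not qemu code; broken_init and open_image are illustrative stand-ins):

    #include <errno.h>
    #include <stdio.h>

    /* Stand-in for glfs_init() as it behaved before the fix:
     * reports failure via its return value but never sets errno. */
    static int broken_init(void)
    {
        return -1;          /* fails, but leaves errno untouched (0) */
    }

    /* Stand-in for qemu_gluster_open(): maps failure to -errno. */
    static int open_image(void)
    {
        if (broken_init() != 0) {
            return -errno;  /* -0 == 0, so the caller sees "success" */
        }
        return 0;
    }

    int main(void)
    {
        errno = 0;
        int ret = open_image();
        /* ret is 0 even though init failed; a real caller would now
         * use an invalid handle, as qemu did in glfs_lseek(). */
        printf("open_image() returned %d\n", ret);
        return 0;
    }

A defensive workaround in qemu would be to map a zero errno onto a real error code (e.g. ret = errno ? -errno : -EIO;), but per comment #5 below the actual fix landed on the glusterfs side, making glfs_init() honor the contract documented here: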
-------------------------------------------
/*
SYNOPSIS
glfs_init: Initialize the 'virtual mount'
DESCRIPTION
This function initializes the glfs_t object. This consists of many steps:
- Spawn a poll-loop thread.
- Establish connection to management daemon and receive volume specification.
- Construct translator graph and initialize graph.
- Wait for initialization (connecting to all bricks) to complete.
PARAMETERS
@fs: The 'virtual mount' object to be initialized.
RETURN VALUES
0 : Success.
-1 : Failure. @errno will be set with the type of failure.
*/
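Given that contract, a minimal libgfapi caller checks glfs_init() and reports errno on failure. A sketch only (volume "vol" and host "gluster-server" are taken from the reproduction above; build with cc -lgfapi):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        glfs_t *fs = glfs_new("vol");   /* volume name, as in the report */
        if (!fs)
            return 1;

        /* "gluster-server" must resolve; 24007 is the default management port */
        glfs_set_volfile_server(fs, "tcp", "gluster-server", 24007);

        if (glfs_init(fs) != 0) {
            /* with the fix in place, errno describes why init failed */
            fprintf(stderr, "glfs_init failed: %s\n", strerror(errno));
            glfs_fini(fs);
            return 1;
        }

        /* ... use the virtual mount (glfs_open, glfs_lseek, ...) ... */
        glfs_fini(fs);
        return 0;
    }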
(In reply to Ademar Reis from comment #5)
> Asias believes no code changes are needed now that the glusterfs bug has
> been fixed. Please test.
The glusterfs bug has been fixed; no qemu-kvm code changes were needed.
Installed the latest packages and retested this bug.
Host:
qemu-kvm-tools-1.5.3-39.el7.x86_64
ipxe-roms-qemu-20130517-1.gitc4bce43.el7.noarch
qemu-kvm-common-1.5.3-39.el7.x86_64
qemu-kvm-1.5.3-39.el7.x86_64
qemu-kvm-debuginfo-1.5.3-39.el7.x86_64
qemu-img-1.5.3-39.el7.x86_64
glusterfs-api-3.4.0.51rhs-1.el7.x86_64
glusterfs-libs-3.4.0.51rhs-1.el7.x86_64
glusterfs-fuse-3.4.0.51rhs-1.el7.x86_64
glusterfs-3.4.0.51rhs-1.el7.x86_64
Steps:
Start qemu-kvm with:
-drive file=gluster://gluster/gv0/rhel7-64-bak.raw,if=none,id=drive-ide0-0-1,format=raw,cache=none,aio=threads \
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-0-1,id=ide0-0-1,bootindex=0 \
Result:
qemu-kvm quit with a warning; no crash was observed.
qemu-kvm: -drive file=gluster://gluster/gv0/rhel7-64-bak.raw,if=none,id=drive-ide0-0-1,format=raw,cache=none,aio=threads: Gluster connection failed for server=gluster port=0 volume=gv0 image=rhel7-64-bak.raw transport=tcp
qemu-kvm: -drive file=gluster://gluster/gv0/rhel7-64-bak.raw,if=none,id=drive-ide0-0-1,format=raw,cache=none,aio=threads: could not open disk image gluster://gluster/gv0/rhel7-64-bak.raw: Could not open 'glusterfs:data_pair_t': Transport endpoint is not connected
This bug has been fixed.
This request was resolved in Red Hat Enterprise Linux 7.0.
Contact your manager or support representative in case you have further questions about the request.