Bug 1210169 - Libvirtd crashes when starting a guest with gluster storage via network protocol
Summary: Libvirtd crashes when starting a guest with gluster storage via network protocol
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: glusterfs
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-04-09 05:58 UTC by zhenfeng wang
Modified: 2018-11-19 03:59 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-19 03:59:44 UTC
Target Upstream Version:
Embargoed:


Attachments
The libvirtd crash coredump info (11.12 KB, text/plain)
2015-04-09 05:59 UTC, zhenfeng wang

Description zhenfeng wang 2015-04-09 05:58:07 UTC
Description of problem:
Libvirtd crashes when starting a guest that uses gluster storage via the network protocol. It works well with glusterfs-3.6.0.48-1.el7rhs, but fails with later builds on the gluster client, including the latest one, glusterfs-3.6.0.53-1.el7rhs.

Version-Release number of selected component (if applicable):
gluster client
kernel-3.10.0-236.el7.x86_64
glusterfs-3.6.0.53-1.el7rhs.x86_64
selinux-policy-3.13.1-24.el7.noarch
qemu-kvm-rhev-2.2.0-8.el7.x86_64
libvirt-1.2.14-1.el7.x86_64

gluster server:
glusterfs-server-3.6.0.53-1.el6rhs.x86_64

How reproducible:
100%

Steps to Reproduce:
In gluster client:
1. # getsebool virt_use_fusefs
virt_use_fusefs --> on

# getenforce
Enforcing
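
If the boolean is off on a test host, it can be enabled persistently before retrying (a sketch; assumes root on the gluster client):
# setsebool -P virt_use_fusefs on   # -P writes the change to the policy store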

2. Prepare a gluster server and an image on the gluster server
# qemu-img info gluster://$server_ip/gluster-vol1/rh6.img
image: gluster://$server_ip/gluster-vol1/rh6.img
file format: raw
virtual size: 6.0G (6442450944 bytes)
disk size: 0
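
For reference, a raw image like this can be created directly on the volume through qemu-img's gluster driver (a sketch; assumes the gluster-vol1 volume already exists and is started on the server):
# qemu-img create -f raw gluster://$server_ip/gluster-vol1/rh6.img 6G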


3. Prepare a guest with the following disk XML

# virsh dumpxml rhel7
--
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='gluster' name='gluster-vol1/rh6.img'>
        <host name='$server_ip' port='24007'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
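
As an alternative to editing the full domain XML, roughly the same disk can be attached to an existing guest definition (a sketch; gluster-disk.xml is a hypothetical file holding just the <disk> element above):
# virsh attach-device rhel7 gluster-disk.xml --config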


4. Start the guest; libvirtd will crash
# ps aux|grep libvirtd
root     17162  0.0  0.2 1009596 18064 ?       Ssl  17:41   0:00 /usr/sbin/libvirtd

# virsh start rhel7
error: Failed to start domain rhel7
error: End of file while reading data: Input/output error

# ps aux|grep libvirtd
root     19876  0.4  0.2 802900 22632 ?        Ssl  17:44   0:00 /usr/sbin/libvirtd
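
The new PID shows that libvirtd was respawned after the crash; if the crash was a segfault, a corresponding kernel message is usually visible as well (sketch):
# dmesg | grep libvirtd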

5. Check the libvirtd coredump info
please check the attachment
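
A backtrace like the attached one can be pulled from the core with gdb, roughly as follows (a sketch; the core file location is hypothetical and depends on the core_pattern/abrt configuration):
# gdb /usr/sbin/libvirtd /path/to/libvirtd.core
(gdb) thread apply all bt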

6. It works well with glusterfs-3.6.0.48-1.el7rhs on the gluster client and fails with later builds, including the latest one. By the way, it works well on RHEL 6.7 even after several tries with the latest glusterfs-3.6.0.53-1.el6rhs.x86_64.
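
To confirm the regression window on the client, the known-good build can be restored and retested, for example (a sketch; the exact subpackage names and repo availability are assumptions):
# yum downgrade glusterfs-3.6.0.48-1.el7rhs glusterfs-api-3.6.0.48-1.el7rhs glusterfs-libs-3.6.0.48-1.el7rhs
# systemctl restart libvirtd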

Actual results:
libvirtd crash

Expected results:
shouldn't crash

Comment 1 zhenfeng wang 2015-04-09 05:59:09 UTC
Created attachment 1012515 [details]
The libvirtd crash coredump info

Comment 3 Shanzhi Yu 2015-04-09 06:24:27 UTC
I got backtraces that seem very similar to bug 1210137;
see https://bugzilla.redhat.com/show_bug.cgi?id=1210137#c2

# rpm -qf /usr/include/glusterfs/list.h
glusterfs-devel-3.6.0.53-1.el7.x86_64


Thread 10 (Thread 0x7fa5996d4700 (LWP 15775)):
#0  0x00007fa58e75b295 in list_add (head=0x7fa57c0013a8, new=0x7fa57c237398) at list.h:33
#1  mem_pool_new_fn (sizeof_type=sizeof_type@entry=144, count=count@entry=4096, 
    name=name@entry=0x7fa58e9c8f19 "call_frame_t") at mem-pool.c:345
#2  0x00007fa58e9bada4 in glusterfs_ctx_defaults_init (ctx=0x7fa57c242d50) at glfs.c:105
#3  glfs_new (volname=0x7fa5841e3a00 "gluster-vol1") at glfs.c:535
#4  0x00007fa59392900e in virStorageFileBackendGlusterInit (src=0x7fa5841c08b0)
    at storage/storage_backend_gluster.c:611
#5  0x00007fa593917862 in virStorageFileInitAs (src=src@entry=0x7fa5841c08b0, uid=uid@entry=107, gid=gid@entry=107)
    at storage/storage_driver.c:2567
#6  0x00007fa593917e23 in virStorageFileGetMetadataRecurse (src=src@entry=0x7fa5841c08b0, 
    parent=parent@entry=0x7fa5841c08b0, uid=uid@entry=107, gid=gid@entry=107, allow_probe=allow_probe@entry=false, 
    report_broken=report_broken@entry=true, cycle=cycle@entry=0x7fa57c249650) at storage/storage_driver.c:2827
#7  0x00007fa59391830f in virStorageFileGetMetadata (src=0x7fa5841c08b0, uid=107, gid=107, allow_probe=false, 
    report_broken=report_broken@entry=true) at storage/storage_driver.c:2950
#8  0x00007fa58cc49475 in qemuDomainDetermineDiskChain (driver=driver@entry=0x7fa5840de4e0, 
    vm=vm@entry=0x7fa5841ceb90, disk=disk@entry=0x7fa5841bb520, force_probe=force_probe@entry=true, 
    report_broken=report_broken@entry=true) at qemu/qemu_domain.c:2805
#9  0x00007fa58cc4957e in qemuDomainCheckDiskPresence (driver=driver@entry=0x7fa5840de4e0, 
    vm=vm@entry=0x7fa5841ceb90, cold_boot=cold_boot@entry=true) at qemu/qemu_domain.c:2624
#10 0x00007fa58cc636c6 in qemuProcessStart (conn=conn@entry=0x7fa578000a70, driver=driver@entry=0x7fa5840de4e0, 
    vm=vm@entry=0x7fa5841ceb90, asyncJob=asyncJob@entry=0, migrateFrom=migrateFrom@entry=0x0, 
    stdin_fd=stdin_fd@entry=-1, stdin_path=stdin_path@entry=0x0, snapshot=snapshot@entry=0x0, 
    vmop=vmop@entry=VIR_NETDEV_VPORT_PROFILE_OP_CREATE, flags=flags@entry=1) at qemu/qemu_process.c:4610
#11 0x00007fa58ccbdce2 in qemuDomainObjStart (conn=0x7fa578000a70, driver=driver@entry=0x7fa5840de4e0, 
    vm=0x7fa5841ceb90, flags=flags@entry=0) at qemu/qemu_driver.c:7287
#12 0x00007fa58ccbe626 in qemuDomainCreateWithFlags (dom=0x7fa57c23d0c0, flags=0) at qemu/qemu_driver.c:7342
#13 0x00007fa5a8a7c77c in virDomainCreate (domain=domain@entry=0x7fa57c23d0c0) at libvirt-domain.c:6838
#14 0x00007fa5a9521d0b in remoteDispatchDomainCreate (server=0x7fa5aacc9a80, msg=0x7fa5aace09e0, 
    args=<optimized out>, rerr=0x7fa5996d3c70, client=0x7fa5aace0a50) at remote_dispatch.h:3481
#15 remoteDispatchDomainCreateHelper (server=0x7fa5aacc9a80, client=0x7fa5aace0a50, msg=0x7fa5aace09e0, 
    rerr=0x7fa5996d3c70, args=<optimized out>, ret=0x7fa57c000a70) at remote_dispatch.h:3457
#16 0x00007fa5a8ae3152 in virNetServerProgramDispatchCall (msg=0x7fa5aace09e0, client=0x7fa5aace0a50, 
    server=0x7fa5aacc9a80, prog=0x7fa5aacddbd0) at rpc/virnetserverprogram.c:437
#17 virNetServerProgramDispatch (prog=0x7fa5aacddbd0, server=server@entry=0x7fa5aacc9a80, client=0x7fa5aace0a50, 
    msg=0x7fa5aace09e0) at rpc/virnetserverprogram.c:307
#18 0x00007fa5a952fefd in virNetServerProcessMsg (msg=<optimized out>, prog=<optimized out>, client=<optimized out>, 
    srv=0x7fa5aacc9a80) at rpc/virnetserver.c:172
#19 virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x7fa5aacc9a80) at rpc/virnetserver.c:193
#20 0x00007fa5a89df615 in virThreadPoolWorker (opaque=opaque@entry=0x7fa5aacbe150) at util/virthreadpool.c:145
#21 0x00007fa5a89deb38 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#22 0x00007fa5a5e60df5 in start_thread () from /lib64/libpthread.so.0
#23 0x00007fa5a5b8e1ad in clone () from /lib64/libc.so.6
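
For the glusterfs and libvirt frames to resolve with file and line information like this, the matching debuginfo packages have to be installed first (a sketch; assumes the corresponding debuginfo repositories are enabled):
# debuginfo-install glusterfs-api libvirt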

Comment 7 Poornima G 2018-11-19 03:59:44 UTC
A lot of changes have gone into this part of the code, especially the memory corruption fixes in glfs_init() and glfs_fini(). The gluster version here is 3.6, which is very old; I think there are no such failures with the latest versions. Hence closing this bug; please re-open if it occurs with the latest version.

