Bug 994314 - Segmentation fault in __inode_retire
Summary: Segmentation fault in __inode_retire
Keywords:
Status: CLOSED DUPLICATE of bug 848070
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.5
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Asias He
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 848070
 
Reported: 2013-08-07 03:15 UTC by Asias He
Modified: 2013-08-14 01:33 UTC
CC: 18 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-08-14 01:33:32 UTC
Target Upstream Version:
Embargoed:



Description Asias He 2013-08-07 03:15:49 UTC
Description of problem:

qemu-kvm built with libgfapi support hits a segmentation fault in __inode_retire().


(gdb) bt
#0  __inode_retire (inode=0x7fffe4068bc0) at inode.c:376
#1  0x00007ffff50de94c in inode_table_prune (table=0x7fffe4068bb0) at inode.c:1264
#2  0x00007ffff50ded2c in inode_unref (inode=0x7fffe8ce402c) at inode.c:444
#3  0x00007ffff50ca9a2 in loc_wipe (loc=0x7fffffffa960) at xlator.c:568
#4  0x00007ffff6ea3bb9 in glfs_resolve_base (fs=<value optimized out>, subvol=0x7fffe400ec40, inode=0x7fffe8ce402c, iatt=
    0x7fffffffaa30) at glfs-resolve.c:212
#5  0x00007ffff6ea4180 in glfs_resolve_at (fs=0x555555f0b160, subvol=0x7fffe400ec40, at=<value optimized out>, origpath=
    0x555555eeaf40 "qcow2.img", loc=0x7fffffffac40, iatt=0x7fffffffabd0, follow=1, reval=0) at glfs-resolve.c:340
#6  0x00007ffff6ea54bb in glfs_resolve_path (fs=0x555555f0b160, subvol=0x7fffe400ec40, origpath=0x555555eeaf40 "qcow2.img", loc=
    0x7fffffffac40, iatt=0x7fffffffabd0, follow=<value optimized out>, reval=0) at glfs-resolve.c:454
#7  0x00007ffff6ea5543 in glfs_resolve (fs=<value optimized out>, subvol=<value optimized out>, origpath=<value optimized out>, 
    loc=<value optimized out>, iatt=<value optimized out>, reval=<value optimized out>) at glfs-resolve.c:469
#8  0x00007ffff6ea2d7e in glfs_open (fs=0x555555f0b160, path=0x555555eeaf40 "qcow2.img", flags=0) at glfs-fops.c:96
#9  0x000055555562065c in qemu_gluster_open (bs=<value optimized out>, filename=0x555555eea3d0 "gluster://rhs1/vol/qcow2.img", 
    bdrv_flags=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/block/gluster.c:309
#10 0x00005555555fb3df in bdrv_open_common (bs=0x555555eea400, filename=0x555555eea3d0 "gluster://rhs1/vol/qcow2.img", 
    flags=<value optimized out>, drv=0x555555ac1800) at /usr/src/debug/qemu-kvm-0.12.1.2/block.c:602
#11 0x00005555555fb5cb in bdrv_file_open (pbs=0x7fffffffad98, filename=0x555555eea3d0 "gluster://rhs1/vol/qcow2.img", flags=0)
    at /usr/src/debug/qemu-kvm-0.12.1.2/block.c:653
#12 0x00005555555fba37 in find_image_format (bs=0x555555ee99f0, filename=0x555555eea3d0 "gluster://rhs1/vol/qcow2.img", flags=98, drv=
    0x0) at /usr/src/debug/qemu-kvm-0.12.1.2/block.c:467
#13 bdrv_open (bs=0x555555ee99f0, filename=0x555555eea3d0 "gluster://rhs1/vol/qcow2.img", flags=98, drv=0x0)
    at /usr/src/debug/qemu-kvm-0.12.1.2/block.c:731
#14 0x0000555555626b12 in drive_open (dinfo=0x555555ee9950) at /usr/src/debug/qemu-kvm-0.12.1.2/blockdev.c:282
#15 0x000055555562753b in drive_init (opts=<value optimized out>, default_to_scsi=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/blockdev.c:690
#16 0x00005555555ba63b in drive_init_func (opts=<value optimized out>, opaque=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:2217
#17 0x00005555555f22ba in qemu_opts_foreach (list=<value optimized out>, func=0x5555555ba630 <drive_init_func>, opaque=0x555555ad9400, 
    abort_on_failure=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-option.c:1035
#18 0x00005555555bfc72 in main (argc=27, argv=<value optimized out>, envp=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:6332


Version-Release number of selected component (if applicable):

rpm -qa|grep gluster
glusterfs-debuginfo-3.4.0.15rhs-6.el6.x86_64
glusterfs-api-devel-3.4.0.15rhs-6.el6.x86_64
glusterfs-libs-3.4.0.15rhs-6.el6.x86_64
glusterfs-3.4.0.15rhs-6.el6.x86_64
glusterfs-api-3.4.0.15rhs-6.el6.x86_64
glusterfs-devel-3.4.0.15rhs-6.el6.x86_64


How reproducible:

100%

Steps to Reproduce:

gdb --args /usr/libexec/qemu-kvm -L /usr/share/qemu-kvm/ \
-nographic -vnc :10 -enable-kvm -m 2048 -smp 4 -cpu qemu64,+x2apic -M pc \
-netdev tap,id=hn0,vhost=on -device virtio-net-pci,netdev=hn0 \
-drive file=$OS,if=none,id=os -device virtio-blk-pci,drive=os,bootindex=1 \
-drive file=gluster://rhs1/vol/qcow2.img,if=none,id=gfs0,cache=none -device virtio-blk-pci,drive=gfs0


Actual results:

Segmentation fault

Expected results:

Boot ok, guest sees the gluster volume.

Additional info:

Comment 4 Anand Avati 2013-08-08 04:17:45 UTC
This bug is a combination of bad code in qemu and a badly packaged qemu. The core of the issue is that qemu carries its own fallback version of uuid_is_null() in block/vdi.c which is buggy (fixed upstream in commit 4f3669ea5bd73ade0dce5f1155cb9ad9788fd54c). This definition of uuid_is_null() returns false positives because it only checks whether the first 8 bytes of the uuid are zero; it therefore wrongly decides that glusterfs's root gfid is NULL (only byte 15 of that gfid is a 1), eventually causing the root inode to be wrongly retired. Note that this fallback code is only compiled in if libuuid is not available on the system.
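
To illustrate, here is a minimal standalone sketch of the problem (a paraphrase for illustration, not the verbatim block/vdi.c source; uuid_t is the usual unsigned char[16]):

#include <stdio.h>
#include <string.h>

typedef unsigned char uuid_t[16];

/* As a function parameter, 'uu' decays to a pointer, so sizeof(uu)
 * is 8 on x86_64 and memcmp() only compares the first 8 bytes of
 * the uuid. */
static int uuid_is_null(const uuid_t uu)
{
    uuid_t null_uuid = { 0 };
    return memcmp(uu, null_uuid, sizeof(uu)) == 0;
}

int main(void)
{
    /* glusterfs root gfid 00000000-0000-0000-0000-000000000001:
     * only byte 15 is non-zero, so the first 8 bytes are all 0s. */
    uuid_t root_gfid = { [15] = 1 };
    printf("%d\n", uuid_is_null(root_gfid)); /* prints 1: false positive */
    return 0;
}

On x86_64 this prints 1: the root gfid is misclassified as null, which is what ultimately drives inode_table_prune() to retire a live inode (frames #0 and #1 in the backtrace above).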

So to fix the issue, we need to do at least one (preferably both) of the following:

- backport upstream commit 4f3669ea5bd73ade0dce5f1155cb9ad9788fd54c (sketched below)

- install libuuid-devel in the build environment and recompile qemu
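
In essence, the upstream commit makes the fallback compare all 16 bytes (again a paraphrased sketch, not the exact diff):

#include <string.h>

typedef unsigned char uuid_t[16];

/* With the fix, memcmp() compares sizeof(uuid_t) == 16 bytes, so a
 * uuid whose only non-zero byte is byte 15 is no longer treated as
 * null. */
static int uuid_is_null(const uuid_t uu)
{
    uuid_t null_uuid = { 0 };
    return memcmp(uu, null_uuid, sizeof(uuid_t)) == 0;
}

And with libuuid-devel present at build time, qemu's configure picks up the system libuuid, so this fallback is never compiled in at all; that is why the second option also avoids the crash.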

Comment 5 Asias He 2013-08-08 06:50:32 UTC
(In reply to Anand Avati from comment #4)
> This bug is a combination of bad code in qemu and a badly packaged qemu.
> The core of the issue is that qemu carries its own fallback version of
> uuid_is_null() in block/vdi.c which is buggy (fixed upstream in commit
> 4f3669ea5bd73ade0dce5f1155cb9ad9788fd54c). This definition of
> uuid_is_null() returns false positives because it only checks whether
> the first 8 bytes of the uuid are zero; it therefore wrongly decides
> that glusterfs's root gfid is NULL (only byte 15 of that gfid is a 1),
> eventually causing the root inode to be wrongly retired. Note that this
> fallback code is only compiled in if libuuid is not available on the
> system.
> 
> So to fix the issue, we need to do at least one (preferably both) of
> the following:
> 
> - backport upstream commit 4f3669ea5bd73ade0dce5f1155cb9ad9788fd54c

Initial testing shows that with upstream commit 4f3669ea5bd73ade0dce5f1155cb9ad9788fd54c applied, the segfault is no longer observed.

Thanks Anand!

> - install libuuid-devel in the build environment and recompile qemu

Comment 7 Ademar Reis 2013-08-14 01:33:32 UTC
This will be handled in the gluster support bug. Marking it as a dupe.

*** This bug has been marked as a duplicate of bug 848070 ***

