Bug 1204589 - qemu-kvm crashes when creating an image on GlusterFS
Summary: qemu-kvm crashes when creating an image on GlusterFS
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: glusterfs
Version: 6.7
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Poornima G
QA Contact: SATHEESARAN
URL:
Whiteboard:
Keywords: Regression, TestBlocker
Depends On:
Blocks: 1192402 1211656 1215137
 
Reported: 2015-03-23 05:58 UTC by ShupingCui
Modified: 2015-07-22 07:19 UTC
CC List: 23 users

Doc Text:
* Previously, the qemu-kvm utility could terminate unexpectedly with a segmentation fault after the user attempted to create an image on GlusterFS using the "qemu-img create" command. The glusterfs packages source code has been modified to fix this bug, and qemu-kvm no longer crashes in the described situation. (BZ#1204589)
Clone Of:
Cloned to: 1211656
Last Closed: 2015-07-22 07:19:00 UTC


Attachments (Terms of Use)
gdb info (5.91 KB, text/plain)
2015-03-23 05:58 UTC, ShupingCui
no flags Details


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:0683 normal SHIPPED_LIVE glusterfs bug fix update 2015-07-20 17:58:21 UTC

Description ShupingCui 2015-03-23 05:58:14 UTC
Created attachment 1005201 [details]
gdb info

Description of problem:
qemu-kvm crashes when creating an image on glusterfs

Version-Release number of selected component (if applicable):
Host:
# uname -r
2.6.32-545.el6.x86_64
# rpm -q qemu-kvm-rhev
qemu-kvm-rhev-0.12.1.2-2.458.el6.x86_64
# rpm -q qemu-img-rhev
qemu-img-rhev-0.12.1.2-2.458.el6.x86_64
# rpm -q glusterfs
glusterfs-3.6.0.53-1.el6.x86_64

Glusterfs server:
glusterfs-3.6.0.53-1.el6rhs.x86_64


How reproducible:
100%

Steps to Reproduce:
1. Create an image on glusterfs:
# qemu-img create -f qcow2 gluster://gluster-virt-qe-01.qe.lab.eng.nay.redhat.com:0/distdata01/rhel71-64-virtio.qcow2 20G

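The single reproduction step above can be wrapped in a small shell sketch. The server, volume, and image names are the ones quoted in this report; the script only prints the assembled command so it can be reviewed before running (drop the echo to execute):

```shell
#!/bin/sh
# Sketch of the reproducer above; the hostname, volume, and image
# name are taken from this report. Builds the gluster:// URL and
# prints the qemu-img command instead of executing it.
SERVER="gluster-virt-qe-01.qe.lab.eng.nay.redhat.com"
VOLUME="distdata01"
IMAGE="rhel71-64-virtio.qcow2"
SIZE="20G"
URL="gluster://${SERVER}:0/${VOLUME}/${IMAGE}"
echo qemu-img create -f qcow2 "$URL" "$SIZE"
```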

Actual results:
Formatting 'gluster://gluster-virt-qe-01.qe.lab.eng.nay.redhat.com:0/distdata01/rhel71-64-virtio.qcow2', fmt=qcow2 size=21474836480 encryption=off cluster_size=65536 
Segmentation fault (core dumped)


Expected results:
No segmentation fault; the image is created successfully.

Additional info:

Program: /usr/bin/qemu-img
PID: 18950
Signal: 11
Hostname: hp-z220-02.qe.lab.eng.nay.redhat.com
Time of the crash (according to kernel): Mon Mar 23 13:48:37 2015
Program backtrace:
[New Thread 18950]
[New Thread 18967]
[New Thread 18968]
[Thread debugging using libthread_db enabled]
Core was generated by `/usr/bin/qemu-img create -f qcow2 gluster://gluster-virt-qe-01.qe.lab.eng.nay.r'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007f7bde7c35c6 in list_add (sizeof_type=144, count=<value optimized out>, name=<value optimized out>) at list.h:33
33		new->next->prev = new;
#0  0x00007f7bde7c35c6 in list_add (sizeof_type=144, count=<value optimized out>, name=<value optimized out>) at list.h:33
No locals.
#1  mem_pool_new_fn (sizeof_type=144, count=<value optimized out>, name=<value optimized out>) at mem-pool.c:345
        mem_pool = 0x7f7be1f02e00
        padded_sizeof_type = 172
        pool = <value optimized out>
        i = <value optimized out>
        ret = <value optimized out>
        list = <value optimized out>
        ctx = <value optimized out>
        __FUNCTION__ = "mem_pool_new_fn"
#2  0x00007f7be06bece4 in glusterfs_ctx_defaults_init (volname=0x7f7be1f02a60 "distdata01") at glfs.c:105
        pool = 0x7f7be1ed3d20
        ret = -1
#3  glfs_new (volname=0x7f7be1f02a60 "distdata01") at glfs.c:535
        fs = 0x0
        ret = <value optimized out>
        ctx = 0x7f7be1ed3950
#4  0x00007f7be189352e in qemu_gluster_init (gconf=0x7f7be1ed3840, filename=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/block/gluster.c:199
        glfs = 0x0
        ret = <value optimized out>
        old_errno = <value optimized out>
#5  0x00007f7be18939c1 in qemu_gluster_open (bs=<value optimized out>, filename=0x7fff09f6d070 "gluster://gluster-virt-qe-01.qe.lab.eng.nay.redhat.com:0/distdata01/rhel71-64-virtio.qcow2", bdrv_flags=2) at /usr/src/debug/qemu-kvm-0.12.1.2/block/gluster.c:312
        s = 0x7f7be1f0db30
        open_flags = 0
        ret = 0
        gconf = 0x7f7be1ed3840
#6  0x00007f7be1867aa6 in bdrv_open_common (bs=0x7f7be1ef1040, filename=0x7fff09f6d070 "gluster://gluster-virt-qe-01.qe.lab.eng.nay.redhat.com:0/distdata01/rhel71-64-virtio.qcow2", flags=<value optimized out>, drv=0x7f7be1ac94c0) at /usr/src/debug/qemu-kvm-0.12.1.2/block.c:665
        ret = <value optimized out>
        open_flags = 2
        __PRETTY_FUNCTION__ = "bdrv_open_common"
#7  0x00007f7be1867c4b in bdrv_file_open (pbs=0x7fff09f6ad50, filename=0x7fff09f6d070 "gluster://gluster-virt-qe-01.qe.lab.eng.nay.redhat.com:0/distdata01/rhel71-64-virtio.qcow2", flags=2) at /usr/src/debug/qemu-kvm-0.12.1.2/block.c:716
        bs = 0x7f7be1ef1040
        drv = 0x7f7be1ac94c0
        ret = <value optimized out>
#8  0x00007f7be1882161 in qcow2_create2 (filename=0x7fff09f6d070 "gluster://gluster-virt-qe-01.qe.lab.eng.nay.redhat.com:0/distdata01/rhel71-64-virtio.qcow2", total_size=41943040, backing_file=0x0, backing_format=0x0, flags=0, cluster_size=65536, prealloc=PREALLOC_MODE_OFF) at /usr/src/debug/qemu-kvm-0.12.1.2/block/qcow2.c:1134
        cluster_bits = 16
        bs = <value optimized out>
        header = {magic = 0, version = 0, backing_file_offset = 0, backing_file_size = 0, cluster_bits = 0, size = 0, crypt_method = 272, l1_size = 0, l1_table_offset = 219043332111, refcount_table_offset = 532575944795, refcount_table_clusters = 119, nb_snapshots = 110, snapshots_offset = 140170041147392}
        refcount_table = <value optimized out>
        ret = 0
        options = 0x0
        file_drv = <value optimized out>
        drv = <value optimized out>
        __PRETTY_FUNCTION__ = "qcow2_create2"
#9  0x00007f7be1882842 in qcow2_create (filename=0x7fff09f6d070 "gluster://gluster-virt-qe-01.qe.lab.eng.nay.redhat.com:0/distdata01/rhel71-64-virtio.qcow2", options=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/block/qcow2.c:1275
        backing_file = <value optimized out>
        backing_fmt = <value optimized out>
        sectors = <value optimized out>
        flags = <value optimized out>
        cluster_size = <value optimized out>
        prealloc = <value optimized out>
        local_err = 0x0
        __PRETTY_FUNCTION__ = "qcow2_create"
#10 0x00007f7be186833d in bdrv_img_create (filename=0x7fff09f6d070 "gluster://gluster-virt-qe-01.qe.lab.eng.nay.redhat.com:0/distdata01/rhel71-64-virtio.qcow2", fmt=0x7fff09f6d06a "qcow2", base_filename=<value optimized out>, base_fmt=0x0, options=<value optimized out>, img_size=21474836480, flags=64, errp=0x7fff09f6b118) at /usr/src/debug/qemu-kvm-0.12.1.2/block.c:4608
        param = 0x7f7be1ed36a0
        create_options = 0x7f7be1ed3590
        backing_fmt = <value optimized out>
        backing_file = <value optimized out>
        bs = 0x0
        drv = 0x7f7be1ac7760
        proto_drv = <value optimized out>
        backing_drv = 0x0
        ret = <value optimized out>
#11 0x00007f7be18589bf in img_create (argc=<value optimized out>, argv=0x7fff09f6b240) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-img.c:395
        c = <value optimized out>
        img_size = <value optimized out>
        fmt = 0x7fff09f6d06a "qcow2"
        base_fmt = 0x0
        filename = 0x7fff09f6d070 "gluster://gluster-virt-qe-01.qe.lab.eng.nay.redhat.com:0/distdata01/rhel71-64-virtio.qcow2"
        base_filename = 0x0
        options = 0x0
        local_err = 0x0
#12 0x00007f7bdea32d5d in __libc_start_main (main=0x7f7be1858710 <main>, argc=6, ubp_av=0x7fff09f6b238, init=<value optimized out>, fini=<value optimized out>, rtld_fini=<value optimized out>, stack_end=0x7fff09f6b228) at libc-start.c:226
        result = <value optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, -7924007216547726701, 140170041326496, 140733360550448, 0, 0, 7923615728312941203, 7854269148090235539}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x1}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <value optimized out>
#13 0x00007f7be1857bc9 in _start ()
No symbol table info available.
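The faulting statement at list.h:33 is the back-pointer update of a doubly-linked list insert. The sketch below is an illustrative reimplementation of that pattern, not the actual glusterfs source: if the list head passed to list_add was never initialized, head->next is an arbitrary pointer, and the write `new->next->prev = new` dereferences it, matching the SIGSEGV in frame #0 under mem_pool_new_fn.

```c
#include <assert.h>

/* Illustrative doubly-linked list in the style of glusterfs's list.h;
 * the field and function names mirror the backtrace, but this is a
 * sketch for explanation, not the actual source. */
struct list_head {
    struct list_head *next;
    struct list_head *prev;
};

/* A list head must point at itself before first use. */
static void list_head_init(struct list_head *head)
{
    head->next = head;
    head->prev = head;
}

/* Insert 'item' right after 'head'. If 'head' was never initialized,
 * head->next is garbage, and the third statement (the equivalent of
 * "new->next->prev = new" at list.h:33 in the backtrace) writes
 * through that garbage pointer and segfaults. */
static void list_add(struct list_head *item, struct list_head *head)
{
    item->prev = head;
    item->next = head->next;
    item->next->prev = item;   /* faulting line in the crash */
    head->next = item;
}
```

With a properly initialized head the insert is safe, so the crash points at a memory-pool list being used before (or without) initialization in the glusterfs client library, which is consistent with the crash disappearing when the client packages are downgraded (comment #4).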

Comment 3 Qunfang Zhang 2015-03-23 08:02:52 UTC
Hi, Shuping

Could you check whether this is a regression? This should be a common step when testing glusterfs, and we did not hit it in RHEL 6.6.

Thanks,
Qunfang

Comment 4 mazhang 2015-03-23 09:16:06 UTC
I downgraded the gluster client packages and re-tested this bug; the core dump no longer occurs.

qemu-kvm:
qemu-kvm-0.12.1.2-2.458.el6.x86_64
qemu-img-0.12.1.2-2.458.el6.x86_64
qemu-kvm-tools-0.12.1.2-2.458.el6.x86_64
gpxe-roms-qemu-0.9.7-6.13.el6.noarch
qemu-kvm-debuginfo-0.12.1.2-2.458.el6.x86_64

gluster-client(RHEL6.6 GA):
glusterfs-3.6.0.28-2.el6.x86_64
glusterfs-api-3.6.0.28-2.el6.x86_64
glusterfs-libs-3.6.0.28-2.el6.x86_64

gluster-server:
glusterfs-3.6.0.53-1.el6rhs.x86_64
glusterfs-cli-3.6.0.53-1.el6rhs.x86_64
glusterfs-rdma-3.6.0.53-1.el6rhs.x86_64
glusterfs-libs-3.6.0.53-1.el6rhs.x86_64
glusterfs-api-3.6.0.53-1.el6rhs.x86_64
glusterfs-fuse-3.6.0.53-1.el6rhs.x86_64
glusterfs-server-3.6.0.53-1.el6rhs.x86_64
glusterfs-api-devel-3.6.0.53-1.el6rhs.x86_64
glusterfs-debuginfo-3.6.0.53-1.el6rhs.x86_64
glusterfs-devel-3.6.0.53-1.el6rhs.x86_64
glusterfs-geo-replication-3.6.0.53-1.el6rhs.x86_64

So this is likely a regression in the glusterfs packages.

Comment 5 Jeff Cody 2015-03-24 19:22:52 UTC
(In reply to mazhang from comment #4)
> Downgrade gluster client packages re-test this bug, not found core dumped.
> 
> qemu-kvm:
> qemu-kvm-0.12.1.2-2.458.el6.x86_64
> qemu-img-0.12.1.2-2.458.el6.x86_64
> qemu-kvm-tools-0.12.1.2-2.458.el6.x86_64
> gpxe-roms-qemu-0.9.7-6.13.el6.noarch
> qemu-kvm-debuginfo-0.12.1.2-2.458.el6.x86_64
> 
> gluster-client(RHEL6.6 GA):
> glusterfs-3.6.0.28-2.el6.x86_64
> glusterfs-api-3.6.0.28-2.el6.x86_64
> glusterfs-libs-3.6.0.28-2.el6.x86_64
> 
> gluster-server:
> glusterfs-3.6.0.53-1.el6rhs.x86_64
> glusterfs-cli-3.6.0.53-1.el6rhs.x86_64
> glusterfs-rdma-3.6.0.53-1.el6rhs.x86_64
> glusterfs-libs-3.6.0.53-1.el6rhs.x86_64
> glusterfs-api-3.6.0.53-1.el6rhs.x86_64
> glusterfs-fuse-3.6.0.53-1.el6rhs.x86_64
> glusterfs-server-3.6.0.53-1.el6rhs.x86_64
> glusterfs-api-devel-3.6.0.53-1.el6rhs.x86_64
> glusterfs-debuginfo-3.6.0.53-1.el6rhs.x86_64
> glusterfs-devel-3.6.0.53-1.el6rhs.x86_64
> glusterfs-geo-replication-3.6.0.53-1.el6rhs.x86_64
> 
> So it could be a regression of glusterfs packages.

I tested it here as well, and did not hit it.  Using:

qemu-kvm-0.12.1.2-2.459.el6
glusterfs-server 3.4.1-3.el6
glusterfs 3.5.3-1.fc20  (client-side)

Given that comment #4 shows this goes away once gluster is downgraded, and the backtrace shows the segfault is in libglusterfs, I am reassigning this to the gluster team.

Comment 7 Qunfang Zhang 2015-03-25 04:45:14 UTC
This bug blocks KVM QE from testing the glusterfs feature, so please fix it as soon as possible. Thanks a lot.

Comment 8 Qunfang Zhang 2015-04-02 05:12:40 UTC
Hi, Bala.FA

Could you share the current status of this bug? Could you also estimate when it will be fixed?

Thanks,
Qunfang

Comment 9 Stephen Gilson 2015-04-13 19:02:26 UTC
This issue needs to be described in the Release Notes for RHEL 6.7

Content Services needs your input to make that happen. 

Please complete the Doc Text text field for this bug by April 20 using the Cause, Consequence, Workaround, and Result model, as follows:

Cause — Actions or circumstances that cause this bug to occur on a customer's system

Consequence — What happens to the customer's system or application when the bug occurs?

Workaround (if any) — If a workaround for the issue exists, describe in detail. If more than one workaround is available, describe each one.

Result — Describe what happens when a workaround is applied. If the issue is completely circumvented by the workaround, state so. Any side effects caused by the workaround should also be noted here. If no reliable workaround exists, try to describe some preventive measures that help to avoid the bug scenario.

Comment 10 Vivek Agarwal 2015-04-15 06:09:16 UTC
Fix posted for review @http://review.gluster.org/#/c/10205/

Comment 13 Poornima G 2015-04-24 03:25:02 UTC
The next release (or update) of RHS will include this fix. I do not have exact dates for the next RHS release.

As for the upstream fix, we will target getting it merged by 2015-04-30.

Comment 14 Qunfang Zhang 2015-04-24 03:31:39 UTC
(In reply to Poornima G from comment #13)
> The next release(or update) of RHS will have the fix for the same.
> I do not have the exact dates when would be the next release of RHS.
> 
> As per the upstream fix, will target to get it merged by 30/4/2015.

Okay, thanks for the effort and feedback!

Comment 26 SATHEESARAN 2015-06-11 11:01:07 UTC
Verified with glusterfs-3.6.0.54-1.el6rhs

'qemu-img create' no longer hits a segmentation fault.
VM images (qcow2, raw) are created successfully.

Environment: tested in the following environments:
1. RHEL 6.7 nightly (RHEL-6.7-20150603.n.0)
2. RHEL 6.6 + RHS client channel (repo: rhel-6-server-rhs-client-1-rpms) + glusterfs-3.6.0.54-1.el6rhs


Marking this bug as VERIFIED

Comment 27 errata-xmlrpc 2015-07-22 07:19:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0683.html

