Bug 1215787 - [HC] qcow2 image creation using qemu-img hits segmentation fault
Summary: [HC] qcow2 image creation using qemu-img hits segmentation fault
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: libgfapi
Version: 3.7.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: ---
Assignee: Poornima G
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On: 1210137 1210934
Blocks: Hosted_Engine_HC glusterfs-3.7.0
 
Reported: 2015-04-27 17:36 UTC by Poornima G
Modified: 2015-12-01 16:45 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 1210137
Environment:
virt gluster integration
Last Closed: 2015-05-14 17:29:30 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Poornima G 2015-04-27 17:36:40 UTC
+++ This bug was initially created as a clone of Bug #1210137 +++

Description of problem:
-----------------------
qcow2 image creation using qemu-img hits segmentation fault

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
glusterfs-3.7 nightly build (glusterfs-api-3.7dev-0.929.git057d2be.el7.centos.x86_64)

RHEL 7.1 [ qemu-kvm-1.5.3-86.el7_1.1.x86_64, qemu-img-1.5.3-86.el7_1.1.x86_64 ]

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Create a qcow2 image file
e.g. qemu-img create -f qcow2 gluster://<gluster-server>/<vol-name>/<image> <size>

Actual results:
---------------
qemu-img command hits segmentation fault

Expected results:
----------------
Image file should be created successfully

Additional info:
-----------------
[root@rhs-client15 ~]# qemu-img create -f qcow2 gluster://root.37.113/repv/vm6.img 25G
Formatting 'gluster://root.37.113/repv/vm6.img', fmt=qcow2 size=26843545600 encryption=off cluster_size=65536 lazy_refcounts=off 
[2015-04-09 02:36:33.950249] E [glfs.c:1011:pub_glfs_fini] 0-glfs: call_pool_cnt - 0,pin_refcnt - 0
[2015-04-09 02:36:33.950423] E [MSGID: 108006] [afr-common.c:3789:afr_notify] 0-repv-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2015-04-09 02:36:33.951513] E [rpc-transport.c:512:rpc_transport_unref] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f6742109516] (--> /lib64/libgfrpc.so.0(rpc_transport_unref+0xa3)[0x7f6744bb1493] (--> /lib64/libgfrpc.so.0(rpc_clnt_unref+0x5c)[0x7f6744bb47dc] (--> /lib64/libglusterfs.so.0(+0x1edc1)[0x7f6742105dc1] (--> /lib64/libglusterfs.so.0(+0x1ed55)[0x7f6742105d55] ))))) 0-rpc_transport: invalid argument: this
[2015-04-09 02:36:33.951696] E [rpc-transport.c:512:rpc_transport_unref] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f6742109516] (--> /lib64/libgfrpc.so.0(rpc_transport_unref+0xa3)[0x7f6744bb1493] (--> /lib64/libgfrpc.so.0(rpc_clnt_unref+0x5c)[0x7f6744bb47dc] (--> /lib64/libglusterfs.so.0(+0x1edc1)[0x7f6742105dc1] (--> /lib64/libglusterfs.so.0(+0x1ed55)[0x7f6742105d55] ))))) 0-rpc_transport: invalid argument: this
[2015-04-09 02:36:33.951858] E [rpc-transport.c:512:rpc_transport_unref] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f6742109516] (--> /lib64/libgfrpc.so.0(rpc_transport_unref+0xa3)[0x7f6744bb1493] (--> /lib64/libgfrpc.so.0(rpc_clnt_unref+0x5c)[0x7f6744bb47dc] (--> /lib64/libglusterfs.so.0(+0x1edc1)[0x7f6742105dc1] (--> /lib64/libglusterfs.so.0(+0x1ed55)[0x7f6742105d55] ))))) 0-rpc_transport: invalid argument: this
Segmentation fault

--- Additional comment from SATHEESARAN on 2015-04-08 22:32:33 EDT ---

I ran strace while executing the qemu-img command and got the following:
open("gluster://root.37.113/repv/vm6.img", O_RDONLY|O_NONBLOCK|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("gluster://root.37.113/repv/vm6.img", O_RDONLY|O_NONBLOCK|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("gluster://root.37.113/repv/vm6.img", 0x7f07cb6c4d20) = -1 ENOENT (No such file or directory)
uname({sys="Linux", node="rhs-client15.lab.eng.blr.redhat.com", ...}) = 0
mmap(NULL, 131072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f07cb5a5000
mmap(NULL, 262144, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f07cb565000
mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f07c586f000
mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f07c576f000
mmap(NULL, 2097152, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f07c556f000
mmap(NULL, 4194304, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f07c516f000
mmap(NULL, 2097152, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f07c4f6f000
mmap(NULL, 2097152, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f07c4d6f000
epoll_create(16384)                     = 6
rt_sigprocmask(SIG_BLOCK, ~[ILL ABRT BUS FPE SEGV SYS RTMIN RT_1], [BUS USR1 ALRM IO], 8) = 0
clone(child_stack=0x7f07bcfb0f70, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7f07bcfb19d0, tls=0x7f07bcfb1700, child_tidptr=0x7f07bcfb19d0) = 11387
rt_sigprocmask(SIG_SETMASK, [BUS USR1 ALRM IO], Process 11387 attached
NULL, 8) = 0
[pid 11380] rt_sigprocmask(SIG_BLOCK, ~[ILL ABRT BUS FPE SEGV SYS RTMIN RT_1],  <unfinished ...>
[pid 11387] set_robust_list(0x7f07bcfb19e0, 24 <unfinished ...>
[pid 11380] <... rt_sigprocmask resumed> [BUS USR1 ALRM IO], 8) = 0
[pid 11380] clone( <unfinished ...>
[pid 11387] <... set_robust_list resumed> ) = 0
[pid 11380] <... clone resumed> child_stack=0x7f07bbdabf70, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7f07bbdac9d0, tls=0x7f07bbdac700, child_tidptr=0x7f07bbdac9d0) = 11388
Process 11388 attached
[pid 11380] rt_sigprocmask(SIG_SETMASK, [BUS USR1 ALRM IO],  <unfinished ...>
[pid 11387] futex(0x7f07cd32a91c, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 1, {1428548144, 0}, ffffffff <unfinished ...>
[pid 11380] <... rt_sigprocmask resumed> NULL, 8) = 0
[pid 11380] brk(0)                      = 0x7f07cd379000
[pid 11380] brk(0x7f07cd433000 <unfinished ...>
[pid 11388] set_robust_list(0x7f07bbdac9e0, 24 <unfinished ...>
[pid 11380] <... brk resumed> )         = 0x7f07cd433000
[pid 11388] <... set_robust_list resumed> ) = 0
[pid 11388] futex(0x7f07cd32a91c, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 2, {1428548144, 0}, ffffffff <unfinished ...>
[pid 11380] --- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0x8} ---
[pid 11388] +++ killed by SIGSEGV +++
[pid 11387] +++ killed by SIGSEGV +++
+++ killed by SIGSEGV +++
Segmentation fault

--- Additional comment from SATHEESARAN on 2015-04-08 22:37:53 EDT ---

[root@rhs-client15 rpms]# which qemu-img
/usr/bin/qemu-img

[root@rhs-client15 rpms]# gdb /usr/bin/qemu-img
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-64.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/bin/qemu-img...Reading symbols from /usr/lib/debug/usr/bin/qemu-img.debug...done.
done.
(gdb) 
(gdb) r create -f qcow2 gluster://10.70.37.113/repv/vm6.img 30G
Starting program: /usr/bin/qemu-img create -f qcow2 gluster://10.70.37.113/repv/vm6.img 30G
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Formatting 'gluster://10.70.37.113/repv/vm6.img', fmt=qcow2 size=32212254720 encryption=off cluster_size=65536 lazy_refcounts=off 
[New Thread 0x7ffff1684700 (LWP 11398)]
[New Thread 0x7ffff0e83700 (LWP 11399)]
[New Thread 0x7fffe98c7700 (LWP 11400)]
[New Thread 0x7fffe8ec3700 (LWP 11401)]
[New Thread 0x7fffe3fff700 (LWP 11402)]
[New Thread 0x7fffe0a8f700 (LWP 11403)]
[2015-04-09 02:50:34.040083] E [glfs.c:1011:pub_glfs_fini] 0-glfs: call_pool_cnt - 0,pin_refcnt - 0
[2015-04-09 02:50:34.040245] E [MSGID: 108006] [afr-common.c:3789:afr_notify] 0-repv-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2015-04-09 02:50:34.041084] E [rpc-transport.c:512:rpc_transport_unref] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7ffff48d3516] (--> /lib64/libgfrpc.so.0(rpc_transport_unref+0xa3)[0x7ffff737b493] (--> /lib64/libgfrpc.so.0(rpc_clnt_unref+0x5c)[0x7ffff737e7dc] (--> /lib64/libglusterfs.so.0(+0x1edc1)[0x7ffff48cfdc1] (--> /lib64/libglusterfs.so.0(+0x1ed55)[0x7ffff48cfd55] ))))) 0-rpc_transport: invalid argument: this
[2015-04-09 02:50:34.041260] E [rpc-transport.c:512:rpc_transport_unref] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7ffff48d3516] (--> /lib64/libgfrpc.so.0(rpc_transport_unref+0xa3)[0x7ffff737b493] (--> /lib64/libgfrpc.so.0(rpc_clnt_unref+0x5c)[0x7ffff737e7dc] (--> /lib64/libglusterfs.so.0(+0x1edc1)[0x7ffff48cfdc1] (--> /lib64/libglusterfs.so.0(+0x1ed55)[0x7ffff48cfd55] ))))) 0-rpc_transport: invalid argument: this
[2015-04-09 02:50:34.041435] E [rpc-transport.c:512:rpc_transport_unref] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7ffff48d3516] (--> /lib64/libgfrpc.so.0(rpc_transport_unref+0xa3)[0x7ffff737b493] (--> /lib64/libgfrpc.so.0(rpc_clnt_unref+0x5c)[0x7ffff737e7dc] (--> /lib64/libglusterfs.so.0(+0x1edc1)[0x7ffff48cfdc1] (--> /lib64/libglusterfs.so.0(+0x1ed55)[0x7ffff48cfd55] ))))) 0-rpc_transport: invalid argument: this
[Thread 0x7ffff0e83700 (LWP 11399) exited]
[Thread 0x7ffff1684700 (LWP 11398) exited]
[Thread 0x7fffe0a8f700 (LWP 11403) exited]
[Thread 0x7fffe3fff700 (LWP 11402) exited]
[Thread 0x7fffe8ec3700 (LWP 11401) exited]
[Thread 0x7fffe98c7700 (LWP 11400) exited]
[New Thread 0x7fffe98c7700 (LWP 11404)]
[New Thread 0x7fffe3fff700 (LWP 11405)]

Program received signal SIGSEGV, Segmentation fault.
0x00007ffff4900c35 in list_add (head=0x555555c0af10, new=0x555555c47c28) at list.h:33
33              new->next->prev = new;
Missing separate debuginfos, use: debuginfo-install boost-system-1.53.0-23.el7.x86_64 boost-thread-1.53.0-23.el7.x86_64 glib2-2.40.0-4.el7.x86_64 glibc-2.17-78.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.12.2-14.el7.x86_64 libacl-2.2.51-12.el7.x86_64 libaio-0.3.109-12.el7.x86_64 libattr-2.4.46-12.el7.x86_64 libcom_err-1.42.9-7.el7.x86_64 libgcc-4.8.3-9.el7.x86_64 libgcrypt-1.5.3-12.el7.x86_64 libgpg-error-1.12-3.el7.x86_64 libiscsi-1.9.0-6.el7.x86_64 librados2-0.80.7-2.el7.x86_64 librbd1-0.80.7-2.el7.x86_64 libselinux-2.2.2-6.el7.x86_64 libstdc++-4.8.3-9.el7.x86_64 libuuid-2.23.2-21.el7.x86_64 nspr-4.10.6-3.el7.x86_64 nss-3.16.2.3-5.el7.x86_64 nss-util-3.16.2.3-2.el7.x86_64 openssl-libs-1.0.1e-42.el7_1.4.x86_64 pcre-8.32-14.el7.x86_64 xz-libs-5.1.2-9alpha.el7.x86_64 zlib-1.2.7-13.el7.x86_64
(gdb) bt
#0  0x00007ffff4900c35 in list_add (head=0x555555c0af10, new=0x555555c47c28) at list.h:33
#1  mem_pool_new_fn (sizeof_type=sizeof_type@entry=144, count=count@entry=4096, name=name@entry=0x7ffff759f794 "call_frame_t") at mem-pool.c:385
#2  0x00007ffff7590c1d in glusterfs_ctx_defaults_init (ctx=0x555555c2cb90) at glfs.c:116
#3  pub_glfs_new (volname=0x555555c47c60 "repv") at glfs.c:606
#4  0x000055555556d9c0 in qemu_gluster_init (gconf=gconf@entry=0x555555c47f70, filename=<optimized out>) at block/gluster.c:199
#5  0x000055555556dc53 in qemu_gluster_open (bs=<optimized out>, options=0x555555c2bb70, bdrv_flags=66, errp=<optimized out>) at block/gluster.c:341
#6  0x0000555555564870 in bdrv_open_common (bs=bs@entry=0x555555c29960, file=file@entry=0x0, options=options@entry=0x555555c2bb70, flags=flags@entry=2, 
    drv=drv@entry=0x5555557fe3a0 <bdrv_gluster>, errp=0x7ffff7fd8eb0) at block.c:829
#7  0x0000555555569464 in bdrv_file_open (pbs=pbs@entry=0x7ffff7fd8f48, filename=filename@entry=0x555555c0a760 "gluster://10.70.37.113/repv/vm6.img", 
    options=0x555555c2bb70, options@entry=0x0, flags=flags@entry=2, errp=errp@entry=0x7ffff7fd8f50) at block.c:959
#8  0x000055555557ca90 in qcow2_create2 (errp=0x7ffff7fd8f40, version=3, prealloc=<optimized out>, cluster_size=65536, flags=0, backing_format=0x0, backing_file=0x0, 
    total_size=62914560, filename=0x555555c0a760 "gluster://10.70.37.113/repv/vm6.img") at block/qcow2.c:1660
#9  qcow2_create (filename=0x555555c0a760 "gluster://10.70.37.113/repv/vm6.img", options=<optimized out>, errp=0x7ffff7fd8fa0) at block/qcow2.c:1839
#10 0x0000555555563409 in bdrv_create_co_entry (opaque=0x7fffffffe1b0) at block.c:393
#11 0x000055555559af2a in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at coroutine-ucontext.c:118
#12 0x00007ffff502e0f0 in ?? () from /lib64/libc.so.6
#13 0x00007fffffffda20 in ?? ()
#14 0x0000000000000000 in ?? ()

--- Additional comment from Poornima G on 2015-04-13 02:50:03 EDT ---

Fix posted for review at http://review.gluster.org/#/c/10205/

--- Additional comment from Poornima G on 2015-04-13 04:12:05 EDT ---



--- Additional comment from Poornima G on 2015-04-13 04:21:32 EDT ---

Comment 1 Anand Avati 2015-04-27 17:46:28 UTC
REVIEW: http://review.gluster.org/10413 (libgfapi: Assign corect value to THIS) posted (#1) for review on release-3.7 by Poornima G (pgurusid)

Comment 2 Anand Avati 2015-04-27 18:01:24 UTC
REVIEW: http://review.gluster.org/10414 (libgfapi: Store and restore THIS in every API exposed by libgfapi) posted (#1) for review on release-3.7 by Poornima G (pgurusid)

Comment 3 Anand Avati 2015-05-05 10:19:14 UTC
REVIEW: http://review.gluster.org/10413 (libgfapi: Assign corect value to THIS) posted (#2) for review on release-3.7 by Poornima G (pgurusid)

Comment 4 Anand Avati 2015-05-05 10:19:57 UTC
REVIEW: http://review.gluster.org/10413 (libgfapi: Assign corect value to THIS) posted (#3) for review on release-3.7 by Poornima G (pgurusid)

Comment 5 Niels de Vos 2015-05-08 22:41:24 UTC
Needs a backport of http://review.gluster.org/9797

Comment 6 Niels de Vos 2015-05-09 08:54:12 UTC
(In reply to Niels de Vos from comment #5)
> Needs a backport of http://review.gluster.org/9797

That backport has already been posted as http://review.gluster.org/10414, but it is based on an older version of the patch.

Comment 7 Anand Avati 2015-05-09 09:08:08 UTC
REVIEW: http://review.gluster.org/10413 (libgfapi: Assign corect value to THIS) posted (#4) for review on release-3.7 by Niels de Vos (ndevos)

Comment 8 Anand Avati 2015-05-09 18:22:30 UTC
REVIEW: http://review.gluster.org/10730 (gfapi: fix compile warning in pub_glfs_h_access()) posted (#1) for review on release-3.7 by Niels de Vos (ndevos)

Comment 9 Anand Avati 2015-05-09 22:52:14 UTC
COMMIT: http://review.gluster.org/10730 committed in release-3.7 by Niels de Vos (ndevos) 
------
commit 34db0de2c12a1a802580fc308aa2f2b11a9d586f
Author: Niels de Vos <ndevos>
Date:   Sat May 9 19:56:07 2015 +0200

    gfapi: fix compile warning in pub_glfs_h_access()
    
    While compiling libgfapi, the following warning is reported:
    
        Making all in src
          CC       libgfapi_la-glfs-handleops.lo
        In file included from glfs-handleops.c:12:0:
        glfs-handleops.c: In function 'pub_glfs_h_access':
        glfs-internal.h:216:14: warning: 'old_THIS' may be used uninitialized in this function [-Wmaybe-uninitialized]
                 THIS = old_THIS;                                            \
                      ^
        glfs-internal.h:202:36: note: 'old_THIS' was declared here
         #define DECLARE_OLD_THIS xlator_t *old_THIS = NULL
                                            ^
        glfs-handleops.c:1159:2: note: in expansion of macro 'DECLARE_OLD_THIS'
          DECLARE_OLD_THIS;
          ^
          CCLD     libgfapi.la
          CCLD     api.la
    
    The DECLARE_OLD_THIS macro should be done with the declarations of all
    the other variables used in this function. Moving the macro further up
    in the function prevents this warning.
    
    Backport of:
    > Change-Id: I2bedc1aa074893ae3e2c933abc5a167ab5b55f41
    > BUG: 1210934
    > Reviewed-on: http://review.gluster.org/10728
    > Reported-by: Shyamsundar Ranganathan <srangana>
    > Signed-off-by: Niels de Vos <ndevos>
    
    Change-Id: I2bedc1aa074893ae3e2c933abc5a167ab5b55f41
    BUG: 1215787
    Signed-off-by: Niels de Vos <ndevos>
    Reviewed-on: http://review.gluster.org/10730
    Reviewed-by: Shyamsundar Ranganathan <srangana>
    Tested-by: Gluster Build System <jenkins.com>

Comment 10 Niels de Vos 2015-05-14 17:29:30 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user


