Bug 1230647 - Disperse volume : client crashed while running IO
Summary: Disperse volume : client crashed while running IO
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On: 1230522
Blocks: 1223636 1230653
Reported: 2015-06-11 09:24 UTC by Pranith Kumar K
Modified: 2016-06-16 13:10 UTC (History)
4 users

Fixed In Version: glusterfs-3.8rc2
Clone Of: 1230522
: 1230653 (view as bug list)
Environment:
Last Closed: 2016-06-16 13:10:41 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Anand Avati 2015-06-12 18:39:35 UTC
COMMIT: http://review.gluster.org/11178 committed in master by Vijay Bellur (vbellur) 
------
commit b9603a046116e7db29e16e7caed29018bff50f66
Author: Pranith Kumar K <pkarampu>
Date:   Thu Jun 11 14:44:48 2015 +0530

    cluster/ec: Prevent Null dereference in dht-rename
    
    Change-Id: I3059f3b577f550c92fb77c6b6b44defd0584cd2e
    BUG: 1230647
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/11178
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Vijay Bellur <vbellur>

Comment 2 Mike McCune 2016-03-28 22:22:31 UTC
This bug was accidentally moved from POST to MODIFIED by an error in automation; please contact mmccune with any questions.

Comment 3 Niels de Vos 2016-05-10 12:45:18 UTC
Please add a public description of this bug.

Comment 4 Pranith Kumar K 2016-05-11 05:52:05 UTC
Description of problem:
=======================

On a FUSE mount, file creation, a Linux kernel untar, and renames were being performed.

Backtrace:
==========
(gdb) bt
#0  0x00007f691b27f643 in ec_get_real_size ()
   from /usr/lib64/glusterfs/3.7.1/xlator/cluster/disperse.so
#1  0x00007f691b281ee9 in ec_locked ()
   from /usr/lib64/glusterfs/3.7.1/xlator/cluster/disperse.so
#2  0x00007f691b289519 in ec_manager_inodelk ()
   from /usr/lib64/glusterfs/3.7.1/xlator/cluster/disperse.so
#3  0x00007f691b27ee04 in __ec_manager ()
   from /usr/lib64/glusterfs/3.7.1/xlator/cluster/disperse.so
#4  0x00007f691b27ec61 in ec_resume ()
   from /usr/lib64/glusterfs/3.7.1/xlator/cluster/disperse.so
#5  0x00007f691b29b9a6 in ec_combine ()
   from /usr/lib64/glusterfs/3.7.1/xlator/cluster/disperse.so
#6  0x00007f691b287d18 in ec_inodelk_cbk ()
   from /usr/lib64/glusterfs/3.7.1/xlator/cluster/disperse.so
#7  0x00007f691b4f1455 in client3_3_inodelk_cbk ()
   from /usr/lib64/glusterfs/3.7.1/xlator/protocol/client.so
#8  0x0000003cbca0ed75 in rpc_clnt_handle_reply ()
   from /usr/lib64/libgfrpc.so.0
#9  0x0000003cbca10212 in rpc_clnt_notify () from /usr/lib64/libgfrpc.so.0
#10 0x0000003cbca0b8e8 in rpc_transport_notify ()
   from /usr/lib64/libgfrpc.so.0
#11 0x00007f691c75fbcd in ?? ()
   from /usr/lib64/glusterfs/3.7.1/rpc-transport/socket.so
#12 0x00007f691c7616fd in ?? ()
   from /usr/lib64/glusterfs/3.7.1/rpc-transport/socket.so
#13 0x0000003cbc280f70 in ?? () from /usr/lib64/libglusterfs.so.0
#14 0x0000003cbba07a51 in start_thread () from /lib64/libpthread.so.0
#15 0x0000003cbb6e896d in clone () from /lib64/libc.so.6
(gdb) q

How reproducible:
=================
seen once


Steps to Reproduce:
1. Create an 8+4 disperse volume, then create files and directories and run a Linux kernel untar.
2. Convert it to a distributed-disperse volume and start a rebalance.
3. While the rebalance is in progress, continue creating files and directories, renaming, and untarring.
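The steps above map onto the gluster CLI roughly as follows. This is an illustrative sketch only: the volume name, hostnames, brick paths, and mount point are placeholders, and the exact brick layout for the add-brick step depends on the cluster:

```shell
# 1. Create an 8+4 disperse volume (12 bricks, redundancy 4) and mount it
gluster volume create testvol disperse 12 redundancy 4 \
    server{1..12}:/bricks/brick1
gluster volume start testvol
mount -t glusterfs server1:/testvol /mnt/testvol

# 2. Convert to a distributed-disperse volume by adding a second
#    12-brick disperse subvolume, then start a rebalance
gluster volume add-brick testvol server{1..12}:/bricks/brick2
gluster volume rebalance testvol start

# 3. While the rebalance runs, keep creating, renaming, and untarring
tar -xf linux.tar -C /mnt/testvol
```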

Actual results:
===============
client crashed

Expected results:
=================
The client should not crash; IO should continue without errors.
Comment 5 Niels de Vos 2016-06-16 13:10:41 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

