Bug 1230523 - glusterd: glusterd crashes if the rebalance and volume status commands are run in parallel.
Summary: glusterd: glusterd crashes if the rebalance and volume status commands are run in parallel
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1229139 1230525
Blocks:
 
Reported: 2015-06-11 06:42 UTC by Anand Nekkunti
Modified: 2016-01-04 04:50 UTC
4 users

Fixed In Version: glusterfs-3.7.3
Doc Type: Bug Fix
Doc Text:
Clone Of: 1229139
Environment:
Last Closed: 2015-07-30 09:49:29 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Anand Nekkunti 2015-06-11 06:42:07 UTC
+++ This bug was initially created as a clone of Bug #1229139 +++

Description of problem:
glusterd: glusterd crashes if the rebalance and volume status commands are run in parallel (glusterd compiled in debug mode).


Version-Release number of selected component (if applicable):


How reproducible:
Most of the time


Steps to Reproduce:
1. Compile glusterfs in debug mode (./configure --enable-debug)
2. gluster peer probe 46.101.184.191

gluster volume create livebackup replica 2 transport tcp 46.101.160.245:/opt/gluster_brick1 46.101.184.191:/opt/gluster_brick2 force
gluster volume start livebackup
gluster volume add-brick livebackup 46.101.160.245:/opt/gluster_brick2 46.101.184.191:/opt/gluster_brick1 force

gluster volume info

Volume Name: livebackup
Type: Distributed-Replicate
Volume ID: 55cf62a0-099f-4a5e-ae4a-0ddec29239b4
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 46.101.160.245:/opt/gluster_brick1
Brick2: 46.101.184.191:/opt/gluster_brick2
Brick3: 46.101.160.245:/opt/gluster_brick2
Brick4: 46.101.184.191:/opt/gluster_brick1
Options Reconfigured:
performance.readdir-ahead: on

mount -t glusterfs localhost:/livebackup /mnt

cp /var/log/* /mnt

gluster volume rebalance livebackup start

On node 2:
gluster volume status

Actual results:
glusterd crashes.
Expected results:
glusterd should not crash.



(gdb) bt
#0  0x0000003c000348c7 in raise () from /lib64/libc.so.6
#1  0x0000003c0003652a in abort () from /lib64/libc.so.6
#2  0x0000003c0002d46d in __assert_fail_base () from /lib64/libc.so.6
#3  0x0000003c0002d522 in __assert_fail () from /lib64/libc.so.6
#4  0x00007fc09938d0d5 in glusterd_volume_rebalance_use_rsp_dict (aggr=0x0, rsp_dict=0x7fc08800b68c)
    at glusterd-utils.c:7776
#5  0x00007fc0993969b4 in __glusterd_commit_op_cbk (req=0x7fc08800f1cc, iov=0x7fc08800f20c, count=1, 
    myframe=0x7fc08800f0b4) at glusterd-rpc-ops.c:1333
#6  0x00007fc099393cee in glusterd_big_locked_cbk (req=0x7fc08800f1cc, iov=0x7fc08800f20c, count=1, 
    myframe=0x7fc08800f0b4, fn=0x7fc099396419 <__glusterd_commit_op_cbk>) at glusterd-rpc-ops.c:207
#7  0x00007fc099396a9a in glusterd_commit_op_cbk (req=0x7fc08800f1cc, iov=0x7fc08800f20c, count=1, 
    myframe=0x7fc08800f0b4) at glusterd-rpc-ops.c:1371
#8  0x00007fc0a2ebdc1b in rpc_clnt_handle_reply (clnt=0xaf58b0, pollin=0x7fc08800a7a0) at rpc-clnt.c:761
#9  0x00007fc0a2ebe010 in rpc_clnt_notify (trans=0xaf5d20, mydata=0xaf58e0, event=RPC_TRANSPORT_MSG_RECEIVED, 
    data=0x7fc08800a7a0) at rpc-clnt.c:889
#10 0x00007fc0a2eba69a in rpc_transport_notify (this=0xaf5d20, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7fc08800a7a0)
    at rpc-transport.c:538
#11 0x00007fc097df912c in socket_event_poll_in (this=0xaf5d20) at socket.c:2285
#12 0x00007fc097df95d8 in socket_event_handler (fd=12, idx=2, data=0xaf5d20, poll_in=1, poll_out=0, poll_err=0)
    at socket.c:2398
#13 0x00007fc0a3168146 in event_dispatch_epoll_handler (event_pool=0xa77ca0, event=0x7fc096dbcea0)
    at event-epoll.c:567
#14 0x00007fc0a3168499 in event_dispatch_epoll_worker (data=0xa82140) at event-epoll.c:669
#15 0x0000003c0040752a in start_thread () from /lib64/libpthread.so.0
#16 0x0000003c0010079d in clone () from /lib64/libc.so.6
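
For reference, the assertion in frame #4 fires because glusterd_volume_rebalance_use_rsp_dict() is handed aggr=0x0: the commit callback picks up the aggregate dict from transaction state that the parallel "volume status" transaction has already replaced or cleared. The following is a minimal, self-contained C sketch of that failure pattern, not the actual glusterd source; the structure and function names are hypothetical, and it assumes GF_ASSERT() reduces to a plain assert() in a --enable-debug build (which the __assert_fail/abort frames above suggest).

/* Hypothetical simplification of the failing path (not real glusterd code). */
#include <assert.h>
#include <stddef.h>

typedef struct dict dict_t;           /* opaque stand-in for glusterfs' dict_t */

struct txn_opinfo {
    dict_t *op_ctx;                   /* aggregate dict for the running transaction */
};

/* One global transaction state shared by all in-flight transactions: the
 * racing "volume status" commit can reset it while the rebalance commit
 * callback is still running, so op_ctx arrives here as NULL. */
static struct txn_opinfo global_opinfo;

static int
rebalance_use_rsp_dict(dict_t *aggr, dict_t *rsp_dict)
{
    (void)rsp_dict;
    assert(aggr != NULL);             /* aggr == 0x0 -> abort() in a debug build */
    /* ... aggregate per-node rebalance stats from rsp_dict into aggr ... */
    return 0;
}

static void
commit_op_cbk(dict_t *rsp_dict)
{
    /* The callback reads the *global* op_ctx instead of the context owned by
     * this particular transaction, which opens the NULL window above. */
    rebalance_use_rsp_dict(global_opinfo.op_ctx, rsp_dict);
}

int
main(void)
{
    /* Simulate the race: the parallel transaction has already torn down the
     * shared context, so op_ctx is still NULL when this reply arrives. */
    global_opinfo.op_ctx = NULL;
    commit_op_cbk(NULL);              /* aborts here, mirroring frames #0-#4 */
    return 0;
}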

--- Additional comment from Anand Avati on 2015-06-08 05:08:05 EDT ---

REVIEW: http://review.gluster.org/11120 (glusterd: Get the local txn_info based on trans_id in op_sm call backs.) posted (#1) for review on master by Anand Nekkunti (anekkunt)
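
For context, the patch title describes the direction of the fix: have the op_sm callbacks look up the transaction's own txn_info by its transaction id instead of relying on shared state. The sketch below illustrates that idea in isolation; the table, type, and function names are hypothetical and this is not the actual glusterd patch.

/* Hypothetical sketch of per-transaction lookup keyed by txn id. */
#include <stdio.h>
#include <string.h>

#define MAX_TXNS 16

typedef struct {
    char  id[64];        /* transaction id (e.g. a uuid string) */
    void *op_ctx;        /* aggregate dict owned by this transaction */
    int   in_use;
} txn_opinfo_t;

static txn_opinfo_t txn_table[MAX_TXNS];   /* per-transaction state */

/* Find the opinfo belonging to a given transaction id. */
static txn_opinfo_t *
get_txn_opinfo(const char *txn_id)
{
    for (int i = 0; i < MAX_TXNS; i++)
        if (txn_table[i].in_use && strcmp(txn_table[i].id, txn_id) == 0)
            return &txn_table[i];
    return NULL;
}

/* Commit-op callback: the reply carries the txn id it belongs to, so two
 * parallel transactions (rebalance and volume status) can no longer clobber
 * each other's aggregate dict; an unknown id is rejected instead of asserted. */
static void
commit_op_cbk(const char *txn_id, void *rsp_dict)
{
    (void)rsp_dict;
    txn_opinfo_t *txn = get_txn_opinfo(txn_id);
    if (!txn || !txn->op_ctx) {
        fprintf(stderr, "no txn state for %s, ignoring reply\n", txn_id);
        return;
    }
    /* ... aggregate rsp_dict into txn->op_ctx ... */
}

int
main(void)
{
    static int dummy_ctx;                     /* stand-in for a real dict */
    strcpy(txn_table[0].id, "rebalance-txn-1");
    txn_table[0].op_ctx = &dummy_ctx;
    txn_table[0].in_use = 1;

    commit_op_cbk("rebalance-txn-1", NULL);   /* finds its own state */
    commit_op_cbk("status-txn-2", NULL);      /* unknown txn: ignored, no crash */
    return 0;
}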

--- Additional comment from Anand Avati on 2015-06-08 05:09:58 EDT ---

REVIEW: http://review.gluster.org/11120 (glusterd: Get the local txn_info based on trans_id in op_sm call backs.) posted (#2) for review on master by Anand Nekkunti (anekkunt)

--- Additional comment from Anand Avati on 2015-06-08 05:11:28 EDT ---

REVIEW: http://review.gluster.org/11120 (glusterd: Get the local txn_info based on trans_id in op_sm call backs.) posted (#3) for review on master by Anand Nekkunti (anekkunt)

--- Additional comment from Anand Avati on 2015-06-08 09:41:32 EDT ---

REVIEW: http://review.gluster.org/11120 (glusterd: Get the local txn_info based on trans_id in op_sm call backs.) posted (#4) for review on master by Anand Nekkunti (anekkunt)

--- Additional comment from Anand Avati on 2015-06-09 08:46:15 EDT ---

REVIEW: http://review.gluster.org/11120 (glusterd: Get the local txn_info based on trans_id in op_sm call backs.) posted (#5) for review on master by Anand Nekkunti (anekkunt)

--- Additional comment from Anand Avati on 2015-06-10 15:05:39 EDT ---

REVIEW: http://review.gluster.org/11120 (glusterd: Get the local txn_info based on trans_id in op_sm call backs.) posted (#7) for review on master by Anand Nekkunti (anekkunt)

Comment 1 Anand Avati 2015-07-07 08:44:48 UTC
REVIEW: http://review.gluster.org/11557 (glusterd: Get the local txn_info based on trans_id in op_sm call backs.) posted (#2) for review on release-3.7 by Anand Nekkunti (anekkunt)

Comment 2 Anand Avati 2015-07-09 11:30:36 UTC
REVIEW: http://review.gluster.org/11557 (glusterd: Get the local txn_info based on trans_id in op_sm call backs.) posted (#3) for review on release-3.7 by Anand Nekkunti (anekkunt)

Comment 3 Kaushal 2015-07-30 09:49:29 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.3, please open a new bug report.

glusterfs-3.7.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12078
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

