Bug 810106

Summary: memory corruption led the rebalance process to crash
Product: [Community] GlusterFS
Component: distribute
Reporter: Shwetha Panduranga <shwetha.h.panduranga>
Assignee: shishir gowda <sgowda>
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: unspecified
Version: mainline
CC: gluster-bugs, nsathyan
Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Type: Bug
Last Closed: 2013-07-24 17:23:56 UTC
Bug Blocks: 817967

Description Shwetha Panduranga 2012-04-05 07:43:06 UTC
Description of problem:
Core was generated by `/usr/local/sbin/glusterfs -s localhost --volfile-id dstore --xlator-option *dht'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007f4c0bbb3eb1 in get_new_dict_full (size_hint=1) at dict.c:68
68	        dict_t *dict = mem_get0 (THIS->ctx->dict_pool);
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.47.el6.x86_64 libgcc-4.4.6-3.el6.x86_64

(gdb) info threads
  22 Thread 0x7f4bdb5fe700 (LWP 15050)  0x00000032f220dff4 in __lll_lock_wait () from /lib64/libpthread.so.0
  21 Thread 0x7f4be2bf5700 (LWP 15045)  0x00000032f220dff4 in __lll_lock_wait () from /lib64/libpthread.so.0
  20 Thread 0x7f4be49f8700 (LWP 15038)  0x00000032f1e82b54 in __memset_sse2 () from /lib64/libc.so.6
  19 Thread 0x7f4bed5fa700 (LWP 15036)  0x00000032f1e82b54 in __memset_sse2 () from /lib64/libc.so.6
  18 Thread 0x7f4c09b24700 (LWP 12755)  0x00000032f1ee56b5 in clone () from /lib64/libc.so.6
  17 Thread 0x7f4bdbfff700 (LWP 15049)  0x00000032f220dff4 in __lll_lock_wait () from /lib64/libpthread.so.0
  16 Thread 0x7f4be17f3700 (LWP 15047)  0x00000032f220dff4 in __lll_lock_wait () from /lib64/libpthread.so.0
  15 Thread 0x7f4be0df2700 (LWP 15048)  0x00000032f220dff4 in __lll_lock_wait () from /lib64/libpthread.so.0
* 14 Thread 0x7f4c0a525700 (LWP 12754)  0x00000032f1e36090 in __cxa_finalize () from /lib64/libc.so.6
  13 Thread 0x7f4be21f4700 (LWP 15046)  0x00000032f220dff4 in __lll_lock_wait () from /lib64/libpthread.so.0
  12 Thread 0x7f4be35f6700 (LWP 15044)  0x00000032f220dff4 in __lll_lock_wait () from /lib64/libpthread.so.0
  11 Thread 0x7f4be3ff7700 (LWP 15043)  0x00000032f220dff4 in __lll_lock_wait () from /lib64/libpthread.so.0
  10 Thread 0x7f4be61fc700 (LWP 15041)  0x00000032f220dff4 in __lll_lock_wait () from /lib64/libpthread.so.0
  9 Thread 0x7f4be7fff700 (LWP 15037)  0x00000032f220dff4 in __lll_lock_wait () from /lib64/libpthread.so.0
  8 Thread 0x7f4c0b75d700 (LWP 12753)  0x00000032f1ee5d03 in epoll_wait () from /lib64/libc.so.6
  7 Thread 0x7f4bedffb700 (LWP 15035)  0x00000032f220dff4 in __lll_lock_wait () from /lib64/libpthread.so.0
  6 Thread 0x7f4be6bfd700 (LWP 15040)  0x00000032f220dff4 in __lll_lock_wait () from /lib64/libpthread.so.0
  5 Thread 0x7f4be75fe700 (LWP 15039)  0x00000032f220dff4 in __lll_lock_wait () from /lib64/libpthread.so.0
  4 Thread 0x7f4c06544700 (LWP 12759)  0x00000032f220b3dc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
  3 Thread 0x7f4c05b43700 (LWP 12760)  0x00000032f220b3dc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
  2 Thread 0x7f4c082f5700 (LWP 12757)  0x00000032f220eccd in nanosleep () from /lib64/libpthread.so.0
  1 Thread 0x7f4c09123700 (LWP 12756)  0x00007f4c0bbb3eb1 in get_new_dict_full (size_hint=1) at dict.c:68
(gdb) fr 1
#1  0x00007f4c076b50e6 in __do_global_dtors_aux () from /usr/local/lib/glusterfs/3.3.0qa33/xlator/protocol/client.so
(gdb) t 1
[Switching to thread 1 (Thread 0x7f4c09123700 (LWP 12756))]#0  0x00007f4c0bbb3eb1 in get_new_dict_full (size_hint=1) at dict.c:68
68	        dict_t *dict = mem_get0 (THIS->ctx->dict_pool);
(gdb) bt
#0  0x00007f4c0bbb3eb1 in get_new_dict_full (size_hint=1) at dict.c:68
#1  0x00007f4c0bbb3f6f in dict_new () at dict.c:98
#2  0x00007f4c071ebf92 in dht_vgetxattr_cbk (frame=0x7f4c0a9ee04c, cookie=0x7f4c0a9ee0f8, this=0x2150b00, op_ret=-1, op_errno=107, xattr=0x0, xdata=0x0)
    at dht-common.c:1757
#3  0x00007f4c07440863 in afr_getxattr_node_uuid_cbk (frame=0x7f4c0a9ee0f8, cookie=0x2, this=0x214e660, op_ret=-1, op_errno=107, dict=0x0, xdata=0x0)
    at afr-inode-read.c:773
#4  0x00007f4c076c7c5d in client3_1_getxattr_cbk (req=0x7f4bfe3ae5c0, iov=0x0, count=0, myframe=0x7f4c0a9efbd8) at client3_1-fops.c:1057
#5  0x00007f4c076b5e4f in client_submit_request (this=0x2147510, req=0x7f4bfe3af030, frame=0x7f4c0a9efbd8, prog=0x7f4c078f3ce0, procnum=18, 
    cbkfn=0x7f4c076c756f <client3_1_getxattr_cbk>, iobref=0x0, rsphdr=0x7f4bfe3aef30, rsphdr_count=1, rsp_payload=0x0, rsp_payload_count=0, 
    rsp_iobref=0x7f4bf001be60, xdrproc=0x7f4c0b77614d <xdr_gfs3_getxattr_req>) at client.c:272
#6  0x00007f4c076db258 in client3_1_getxattr (frame=0x7f4c0a9efbd8, this=0x2147510, data=0x7f4bfe3af130) at client3_1-fops.c:4681
#7  0x00007f4c076bb8ae in client_getxattr (frame=0x7f4c0a9efbd8, this=0x2147510, loc=0x7f4c040cfc98, name=0x7f4bf001dc20 "trusted.glusterfs.node-uuid", xdata=0x0)
    at client.c:1461
#8  0x00007f4c074406c1 in afr_getxattr_node_uuid_cbk (frame=0x7f4c0a9ee0f8, cookie=0x1, this=0x214e660, op_ret=-1, op_errno=107, dict=0x0, xdata=0x0)
    at afr-inode-read.c:762
#9  0x00007f4c076c7c5d in client3_1_getxattr_cbk (req=0x7f4bff2151c4, iov=0x0, count=0, myframe=0x7f4c0a9efb2c) at client3_1-fops.c:1057
#10 0x00007f4c0b9995aa in rpc_clnt_submit (rpc=0x228fd30, prog=0x7f4c078f3ce0, procnum=18, cbkfn=0x7f4c076c756f <client3_1_getxattr_cbk>, proghdr=0x7f4bfe3afe30, 
    proghdrcount=1, progpayload=0x0, progpayloadcount=0, iobref=0x7f4bf001d7f0, frame=0x7f4c0a9efb2c, rsphdr=0x7f4bfe3aff00, rsphdr_count=1, rsp_payload=0x0, 
    rsp_payload_count=0, rsp_iobref=0x7f4bf001d9f0) at rpc-clnt.c:1542
#11 0x00007f4c076b5d2d in client_submit_request (this=0x2146770, req=0x7f4bfe3b0000, frame=0x7f4c0a9efb2c, prog=0x7f4c078f3ce0, procnum=18, 
    cbkfn=0x7f4c076c756f <client3_1_getxattr_cbk>, iobref=0x0, rsphdr=0x7f4bfe3aff00, rsphdr_count=1, rsp_payload=0x0, rsp_payload_count=0, 
    rsp_iobref=0x7f4bf001d9f0, xdrproc=0x7f4c0b77614d <xdr_gfs3_getxattr_req>) at client.c:238
#12 0x00007f4c076db258 in client3_1_getxattr (frame=0x7f4c0a9efb2c, this=0x2146770, data=0x7f4bfe3b0100) at client3_1-fops.c:4681
#13 0x00007f4c076bb8ae in client_getxattr (frame=0x7f4c0a9efb2c, this=0x2146770, loc=0x7f4c040cfc98, name=0x7f4bf001dc20 "trusted.glusterfs.node-uuid", xdata=0x0)
    at client.c:1461
#14 0x00007f4c074406c1 in afr_getxattr_node_uuid_cbk (frame=0x7f4c0a9ee0f8, cookie=0x0, this=0x214e660, op_ret=-1, op_errno=107, dict=0x0, xdata=0x0)
    at afr-inode-read.c:762
#15 0x00007f4c076c7c5d in client3_1_getxattr_cbk (req=0x7f4bfe3b0430, iov=0x0, count=0, myframe=0x7f4c0a9efa80) at client3_1-fops.c:1057
#16 0x00007f4c076b5e4f in client_submit_request (this=0x21459d0, req=0x7f4bfe3b0ea0, frame=0x7f4c0a9efa80, prog=0x7f4c078f3ce0, procnum=18, 
    cbkfn=0x7f4c076c756f <client3_1_getxattr_cbk>, iobref=0x0, rsphdr=0x7f4bfe3b0da0, rsphdr_count=1, rsp_payload=0x0, rsp_payload_count=0, 
    rsp_iobref=0x7f4bf001bca0, xdrproc=0x7f4c0b77614d <xdr_gfs3_getxattr_req>) at client.c:272
#17 0x00007f4c076db258 in client3_1_getxattr (frame=0x7f4c0a9efa80, this=0x21459d0, data=0x7f4bfe3b0fa0) at client3_1-fops.c:4681
#18 0x00007f4c076bb8ae in client_getxattr (frame=0x7f4c0a9efa80, this=0x21459d0, loc=0x7f4bfe3b1790, name=0x7f4c07214f39 "trusted.glusterfs.node-uuid", xdata=0x0)
    at client.c:1461
#19 0x00007f4c07441cd2 in afr_getxattr (frame=0x7f4c0a9ee0f8, this=0x214e660, loc=0x7f4bfe3b1790, name=0x7f4c07214f39 "trusted.glusterfs.node-uuid", xdata=0x0)
    at afr-inode-read.c:1027
#20 0x00007f4c071ed80b in dht_getxattr (frame=0x7f4c0a9ee04c, this=0x2150b00, loc=0x7f4bfe3b1790, key=0x7f4c07214f39 "trusted.glusterfs.node-uuid", xdata=0x0)
    at dht-common.c:1940
#21 0x00007f4c0bc04dfa in syncop_getxattr (subvol=0x2150b00, loc=0x7f4bfe3b1790, dict=0x7f4bfe3b16d8, key=0x7f4c07214f39 "trusted.glusterfs.node-uuid")
    at syncop.c:793
#22 0x00007f4c071d4082 in gf_defrag_migrate_data (this=0x2150b00, defrag=0x216e400, loc=0x7f4bfe3b1b90, migrate_data=0x210e174) at dht-rebalance.c:1147
#23 0x00007f4c071d4791 in gf_defrag_fix_layout (this=0x2150b00, defrag=0x216e400, loc=0x7f4bfe3b1b90, fix_layout=0x210e2c4, migrate_data=0x210e174)
    at dht-rebalance.c:1275
#24 0x00007f4c071d4d80 in gf_defrag_fix_layout (this=0x2150b00, defrag=0x216e400, loc=0x7f4bfe3b1d90, fix_layout=0x210e2c4, migrate_data=0x210e174)
    at dht-rebalance.c:1364
#25 0x00007f4c071d4d80 in gf_defrag_fix_layout (this=0x2150b00, defrag=0x216e400, loc=0x7f4bfe3b1f30, fix_layout=0x210e2c4, migrate_data=0x210e174)
    at dht-rebalance.c:1364
#26 0x00007f4c071d525f in gf_defrag_start_crawl (data=0x2150b00) at dht-rebalance.c:1471
#27 0x00007f4c0bc01286 in synctask_wrap (old_task=0x7f4bf80008e0) at syncop.c:128
#28 0x00000032f1e43610 in ?? () from /lib64/libc.so.6
#29 0x0000000000000000 in ?? ()

(gdb) fr 3
#3  0x00007f4c07440863 in afr_getxattr_node_uuid_cbk (frame=0x7f4c0a9ee0f8, cookie=0x2, this=0x214e660, op_ret=-1, op_errno=107, dict=0x0, xdata=0x0)
    at afr-inode-read.c:773
773	                AFR_STACK_UNWIND (getxattr, frame, op_ret, op_errno, dict, NULL);

(gdb) p *this
$11 = {name = 0x2143b20 "dstore-replicate-1", type = 0x2143b70 "cluster/replicate", next = 0x214d470, prev = 0x214f260, parents = 0x2151cd0, children = 0x214f120, 
  options = 0x210ddd8, dlhandle = 0x214de60, fops = 0x7f4c076ab8c0, cbks = 0x7f4c076abbc0, dumpops = 0x7f4c076abb60, volume_options = {next = 0x214f090, 
    prev = 0x214f090}, fini = 0x7f4c07494ca7 <fini>, init = 0x7f4c07493c6f <init>, reconfigure = 0x7f4c0749378c <reconfigure>, 
  mem_acct_init = 0x7f4c07493587 <mem_acct_init>, notify = 0x7f4c0749343d <notify>, loglevel = GF_LOG_NONE, latencies = {{min = 0, max = 0, total = 0, std = 0, 
      mean = 0, count = 0} <repeats 46 times>}, history = 0x0, ctx = 0x20f5010, graph = 0x213e5b0, itable = 0x0, init_succeeded = 1 '\001', private = 0x2175df0, 
  mem_acct = {num_types = 117, rec = 0x2174f20}, winds = 0, switched = 0 '\000', local_pool = 0x2176190}

(gdb) f 20
#20 0x00007f4c071ed80b in dht_getxattr (frame=0x7f4c0a9ee04c, this=0x2150b00, loc=0x7f4bfe3b1790, key=0x7f4c07214f39 "trusted.glusterfs.node-uuid", xdata=0x0)
    at dht-common.c:1940
1940	                STACK_WIND (frame, dht_vgetxattr_cbk, cached_subvol,
(gdb) p this
$15 = (xlator_t *) 0x2150b00
(gdb) p this->ctx
$16 = (glusterfs_ctx_t *) 0x0
(gdb) p this->init_suceeded
There is no member named init_suceeded.
(gdb) p this->init_succeeded
$17 = 0 '\000'
(gdb) p this->itable
$18 = (inode_table_t *) 0x0
(gdb) f 19
#19 0x00007f4c07441cd2 in afr_getxattr (frame=0x7f4c0a9ee0f8, this=0x214e660, loc=0x7f4bfe3b1790, name=0x7f4c07214f39 "trusted.glusterfs.node-uuid", xdata=0x0)
    at afr-inode-read.c:1027
1027	                STACK_WIND_COOKIE (frame, afr_getxattr_node_uuid_cbk,
(gdb) p this->ctx
$19 = (glusterfs_ctx_t *) 0x20f5010
(gdb) f 19
#19 0x00007f4c07441cd2 in afr_getxattr (frame=0x7f4c0a9ee0f8, this=0x214e660, loc=0x7f4bfe3b1790, name=0x7f4c07214f39 "trusted.glusterfs.node-uuid", xdata=0x0)
    at afr-inode-read.c:1027
1027	                STACK_WIND_COOKIE (frame, afr_getxattr_node_uuid_cbk,
(gdb) f 18
#18 0x00007f4c076bb8ae in client_getxattr (frame=0x7f4c0a9efa80, this=0x21459d0, loc=0x7f4bfe3b1790, name=0x7f4c07214f39 "trusted.glusterfs.node-uuid", xdata=0x0)
    at client.c:1461
1461	                ret = proc->fn (frame, this, &args);
(gdb) p this->ctx
$20 = (glusterfs_ctx_t *) 0x20f5010
(gdb) up
#19 0x00007f4c07441cd2 in afr_getxattr (frame=0x7f4c0a9ee0f8, this=0x214e660, loc=0x7f4bfe3b1790, name=0x7f4c07214f39 "trusted.glusterfs.node-uuid", xdata=0x0)
    at afr-inode-read.c:1027
1027	                STACK_WIND_COOKIE (frame, afr_getxattr_node_uuid_cbk,
(gdb) 
#20 0x00007f4c071ed80b in dht_getxattr (frame=0x7f4c0a9ee04c, this=0x2150b00, loc=0x7f4bfe3b1790, key=0x7f4c07214f39 "trusted.glusterfs.node-uuid", xdata=0x0)
    at dht-common.c:1940
1940	                STACK_WIND (frame, dht_vgetxattr_cbk, cached_subvol,
(gdb) p this->parent
There is no member named parent.
(gdb) p *this
$21 = {name = 0x214d1a0 "dstore-dht", type = 0x21514c0 "cluster/distribute", next = 0x214feb0, prev = 0x2151e50, parents = 0x2153040, children = 0x2151c90, 
  options = 0x210ded4, dlhandle = 0x21515b0, fops = 0x7f4c0741f660, cbks = 0x7f4c0741f960, dumpops = 0x7f4c0741f900, volume_options = {next = 0x2151c40, 
    prev = 0x2151c40}, fini = 0x7f4c07212aea <fini>, init = 0x7f4c072132b8 <init>, reconfigure = 0x7f4c07212f06 <reconfigure>, 
  mem_acct_init = 0x7f4c07212c3a <mem_acct_init>, notify = 0x7f4c0721291f <notify>, loglevel = GF_LOG_NONE, latencies = {{min = 0, max = 0, total = 0, std = 0, 
      mean = 0, count = 0} <repeats 41 times>, {min = 0, max = 0, total = 0, std = 0, mean = 1.506900219815802e-321, count = 15}, {min = 0, max = 1, total = 0, 
      std = 6.9151379166971263e-310, mean = 4.9406564584124654e-324, count = 0}, {min = 0, max = 0, total = 0, std = 0, mean = 0, count = 0}, {min = 0, max = 0, 
      total = 0, std = 0, mean = 0, count = 0}, {min = 0, max = 0, total = 0, std = 0, mean = 0, count = 0}}, history = 0x0, ctx = 0x0, graph = 0x0, itable = 0x0, 
  init_succeeded = 0 '\000', private = 0x0, mem_acct = {num_types = 0, rec = 0x0}, winds = 0, switched = 0 '\000', local_pool = 0x0}
(gdb) p this->prev
$22 = (xlator_t *) 0x2151e50
(gdb) p this->prev->name
$23 = 0x2151510 "dstore-write-behind"
(gdb) p this->prev->ctx
$24 = (glusterfs_ctx_t *) 0x20f5010
(gdb) p this->prev->init_succeeded
$25 = 1 '\001'
(gdb) p this->prev->pre
There is no member named pre.
(gdb) p this->prev->prev->init_succeded
There is no member named init_succeded.
(gdb) p this->prev->prev->init_succeeded
$26 = 1 '\001'
(gdb) p this->prev->prev->prev->init_succeeded
$27 = 1 '\001'
(gdb) p this->prev->prev->prev->prev->init_succeeded
$28 = 1 '\001'
(gdb) p this->prev->prev->prev->prev->name
$29 = 0x2155340 "dstore-quick-read"
(gdb) p this->prev->prev->prev->prev->prev->name
$30 = 0x2154160 "dstore-md-cache"
(gdb) p this->prev->prev->prev->prev->prev->prev->name
$31 = 0x2156540 "dstore"
(gdb) p this->prev->prev->prev->prev->prev->prev->init_succeeded
$32 = 1 '\001'
(gdb) p this->prev->prev->prev->prev->prev->prev->type
$33 = 0x2157760 "debug/io-stats"
(gdb) p this->prev->prev->prev->prev->prev->prev->prev->type
Cannot access memory at address 0x8
(gdb) p this->prev->prev->prev->prev->prev->prev->prev
$34 = (xlator_t *) 0x0
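
What the session shows: in frame 20 the dht xlator ("dstore-dht") still has valid name, type and fops pointers, but ctx, graph, itable, private and local_pool are all NULL and init_succeeded is 0, while every xlator below it in the graph (write-behind, quick-read, md-cache, io-stats) is intact. Together with thread 14 sitting in __cxa_finalize and frame #1 resolving to __do_global_dtors_aux in client.so, this suggests process teardown (step 15 of Steps to Reproduce, the killall) was dismantling the graph while the getxattr callback chain from the rebalance crawl was still unwinding, so get_new_dict_full dereferenced THIS->ctx after it had been cleared.

A session like the one above can be replayed non-interactively from the core file; a minimal sketch (the core file path is hypothetical):

# Replay the key parts of the session above against the core dump.
gdb -batch \
    -ex 'info threads' \
    -ex 'thread 1' \
    -ex 'bt' \
    /usr/local/sbin/glusterfs /path/to/core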


Version-Release number of selected component (if applicable):
3.3.0qa33

Steps to Reproduce:
1. Create a distribute-replicate volume (2x3) and start the volume.
2. Create FUSE and NFS mounts.
3. Run gfsc1.sh (attached in comment 1) from the FUSE mount.
4. Run nfsc1.sh from the NFS mount.
5. Add bricks to the volume (add-brick).
6. Start rebalance.
7. Query rebalance status.
8. Stop rebalance.
9. Bring down 2 bricks from each replica set, so that one brick stays online in each replica set.
10. Bring the downed bricks back online.
11. Start rebalance with force.
12. Query rebalance status.
13. Stop rebalance.

Repeat steps 9 to 13 three or four times.

14. Stop the volume (the volume could not be stopped).
15. killall glusterfs; killall glusterfsd; killall glusterd (this caused the crash).

A command-level sketch of steps 5-13 follows.
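
The sketch below is illustrative only: hostnames and brick paths are assumptions (the real brick layout is visible in the ps listing under Additional info), and it assumes the dstore volume already exists and is mounted.

# Steps 5-8: expand the volume, then run a rebalance cycle.
gluster volume add-brick dstore \
    server1:/export3/dstore1 server1:/export3/dstore2 server2:/export3/dstore1
gluster volume rebalance dstore start
gluster volume rebalance dstore status
gluster volume rebalance dstore stop

# Step 9: take bricks down by killing their glusterfsd processes
# (pid-file paths as in the ps listing under Additional info).
kill "$(cat /etc/glusterd/vols/dstore/run/192.168.2.35-export1-dstore1.pid)"

# Step 10: restart the downed bricks.
gluster volume start dstore force

# Steps 11-13: forced rebalance cycle.
gluster volume rebalance dstore start force
gluster volume rebalance dstore status
gluster volume rebalance dstore stop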

Actual results:
The rebalance process crashed.

Additional info:

[04/05/12 - 14:09:59 root@APP-SERVER1 ~]# gluster volume stop dstore
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
rebalance session is in progress for the volume 'dstore'

Note:- No rebalance process running. 
------------------------------------
[04/05/12 - 14:10:50 root@APP-SERVER1 ~]# ps -ef | grep gluster
root       860     1  1 Apr04 ?        00:09:53 /usr/local/sbin/glusterfsd -s localhost --volfile-id dstore.192.168.2.35.export1-dstore1 -p /etc/glusterd/vols/dstore/run/192.168.2.35-export1-dstore1.pid -S /tmp/51384641571d6ca8d57c5298df75ce3d.socket --brick-name /export1/dstore1 -l /usr/local/var/log/glusterfs/bricks/export1-dstore1.log --xlator-option *-posix.glusterd-uuid=b5db1d60-817b-44b7-93d5-5c371e02613e --brick-port 24009 --xlator-option dstore-server.listen-port=24009
root       866     1  1 Apr04 ?        00:12:17 /usr/local/sbin/glusterfsd -s localhost --volfile-id dstore.192.168.2.35.export2-dstore1 -p /etc/glusterd/vols/dstore/run/192.168.2.35-export2-dstore1.pid -S /tmp/a6ebe4aad4f6f0674f836131b7dcfa82.socket --brick-name /export2/dstore1 -l /usr/local/var/log/glusterfs/bricks/export2-dstore1.log --xlator-option *-posix.glusterd-uuid=b5db1d60-817b-44b7-93d5-5c371e02613e --brick-port 24010 --xlator-option dstore-server.listen-port=24010
root       872     1  1 Apr04 ?        00:10:19 /usr/local/sbin/glusterfsd -s localhost --volfile-id dstore.192.168.2.35.export1-dstore2 -p /etc/glusterd/vols/dstore/run/192.168.2.35-export1-dstore2.pid -S /tmp/f0a7ce65bc8dcddd005f83b02eb3189f.socket --brick-name /export1/dstore2 -l /usr/local/var/log/glusterfs/bricks/export1-dstore2.log --xlator-option *-posix.glusterd-uuid=b5db1d60-817b-44b7-93d5-5c371e02613e --brick-port 24011 --xlator-option dstore-server.listen-port=24011
root       878     1  1 Apr04 ?        00:11:29 /usr/local/sbin/glusterfsd -s localhost --volfile-id dstore.192.168.2.35.export2-dstore2 -p /etc/glusterd/vols/dstore/run/192.168.2.35-export2-dstore2.pid -S /tmp/20df53803885692fb059928e1acb93ed.socket --brick-name /export2/dstore2 -l /usr/local/var/log/glusterfs/bricks/export2-dstore2.log --xlator-option *-posix.glusterd-uuid=b5db1d60-817b-44b7-93d5-5c371e02613e --brick-port 24012 --xlator-option dstore-server.listen-port=24012
root       885     1  2 Apr04 ?        00:25:46 /usr/local/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /etc/glusterd/nfs/run/nfs.pid -l /usr/local/var/log/glusterfs/nfs.log -S /tmp/931cbfa07a099b2cd706de783da6a5ff.socket
root       891     1  0 Apr04 ?        00:00:04 /usr/local/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /etc/glusterd/glustershd/run/glustershd.pid -l /usr/local/var/log/glusterfs/glustershd.log -S /tmp/24ba6b3a4aeca8cc932c568c0638caba.socket
root      3311  3282  0 14:10 pts/0    00:00:00 grep gluster
root     32619     1  0 Apr04 ?        00:00:00 glusterd

Comment 1 Shwetha Panduranga 2012-04-05 07:51:59 UTC
Attaching scripts to run on fuse, nfs mounts:-

gfsc1.sh:-
-----------
#!/bin/bash
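# Build 10 top-level dirs (fuse2.*), each with 20 subdirs, each holding
# 100 files where file.k is k MB (dd from /dev/zero).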

mountpoint=`pwd`
for i in {1..10}
do
 level1_dir=$mountpoint/fuse2.$i
 mkdir $level1_dir
 cd $level1_dir
 for j in {1..20}
 do 
  level2_dir=dir.$j
  mkdir $level2_dir
  cd $level2_dir
  for k in {1..100}
  do 
   echo "Creating File: $leve1_dir/$level2_dir/file.$k"
   dd if=/dev/zero of=file.$k bs=1M count=$k 
  done
  cd $level1_dir
 done
 cd $mountpoint
done


nfsc1.sh:-
----------
#!/bin/bash
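# Same tree builder as gfsc1.sh, but with 5 top-level dirs named nfs2.*.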

mountpoint=`pwd`
for i in {1..5}
do 
 level1_dir=$mountpoint/nfs2.$i
 mkdir $level1_dir
 cd $level1_dir
 for j in {1..20}
 do 
  level2_dir=dir.$j
  mkdir $level2_dir
  cd $level2_dir

  for k in {1..100}
  do 
   echo "Creating File: $leve1_dir/$level2_dir/file.$k"
   dd if=/dev/zero of=file.$k bs=1M count=$k

  done
  cd $level1_dir
 done
 cd $mountpoint
done
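
The two scripts differ only in the top-level directory count and prefix; a consolidated variant is sketched below (the script name and parameters are mine, not part of the attachments):

mktree.sh:-
-----------
#!/bin/bash
# Usage: ./mktree.sh <prefix> <top-level-count>, e.g. ./mktree.sh fuse2 10
prefix=${1:-fuse2}
top=${2:-10}
mountpoint=$(pwd)
for i in $(seq 1 "$top")
do
 level1_dir=$mountpoint/$prefix.$i
 mkdir -p "$level1_dir"
 for j in {1..20}
 do
  level2_dir=$level1_dir/dir.$j
  mkdir -p "$level2_dir"
  for k in {1..100}
  do
   echo "Creating File: $level2_dir/file.$k"
   dd if=/dev/zero of="$level2_dir/file.$k" bs=1M count=$k
  done
 done
done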

Comment 2 Anand Avati 2012-04-17 13:26:49 UTC
CHANGE: http://review.gluster.com/3156 (cluster/dht: Handle failures in getxattr_cbk) merged in master by Vijay Bellur (vijay)

Comment 3 Shwetha Panduranga 2012-05-12 14:00:09 UTC
Bug is fixed. Verified on 3.3.0qa41.