Bug 770554

Summary: [glusterfs-3.3.0qa18]: client hung in stat call
Product: [Community] GlusterFS
Component: replicate
Version: mainline
Reporter: Raghavendra Bhat <rabhat>
Assignee: Pranith Kumar K <pkarampu>
CC: gluster-bugs
Status: CLOSED DUPLICATE
Severity: unspecified
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Last Closed: 2011-12-29 05:27:43 UTC

Description Raghavendra Bhat 2011-12-27 11:52:51 UTC
Description of problem:
In a 2x2 distributed-replicate volume, killed all the glusterfs, glusterfsd, and glusterd processes on all of the peers. Then started glusterd on one of the servers (which brought that server's glusterfsd brick processes back up) and mounted the client against that server. On the client, a stat call hung.
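Each frame in the dump below records one hop of the stat fop through the client-side translator graph. As a reading aid, here is a minimal sketch assuming a stripped-down version of the per-hop call frame libglusterfs keeps; the field names mirror the statedump keys, but the layout, helper, and main() are illustrative only, not the real type:

/* Hypothetical, simplified call frame: field names mirror the statedump
 * keys; everything else is illustrative, not the real libglusterfs type. */
#include <stdio.h>

struct sketch_frame {
        struct sketch_frame *parent;      /* frame of the translator above  */
        const char          *translator;  /* xlator that owns this frame    */
        int                  ref_count;
        int                  complete;    /* 1 once the callback has run    */
        const char          *wind_from;   /* function that wound the call   */
        const char          *wind_to;     /* fop it was wound to            */
        const char          *unwind_from; /* set when the callback fires    */
        const char          *unwind_to;   /* callback expected to fire      */
};

/* A frame left with complete=0 and unwind_from unset is still waiting
 * for an answer from the layer below it. */
static void print_frame(const struct sketch_frame *f)
{
        printf("translator=%s complete=%d unwind_from=%s\n",
               f->translator, f->complete,
               f->unwind_from ? f->unwind_from : "(never unwound)");
}

int main(void)
{
        /* The stuck replicate frame from the dump below. */
        struct sketch_frame f = {
                .translator = "mirror-replicate-1",
                .complete   = 0,
                .wind_from  = "dht_stat",
                .wind_to    = "subvol->fops->stat",
                .unwind_to  = "dht_attr_cbk",
        };
        print_frame(&f);
        return 0;
}

This is what the statedump of the hung client says: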

[global.callpool]
callpool_address=0x111afa0
callpool.cnt=2

[global.callpool.stack.1]
uid=0
gid=0
pid=20987
unique=22
op=STAT
type=1
cnt=12

[global.callpool.stack.1.frame.1]
ref_count=1
translator=fuse
complete=0

[global.callpool.stack.1.frame.2]
ref_count=0
translator=mirror-replicate-1
complete=0
parent=mirror-dht
wind_from=dht_stat
wind_to=subvol->fops->stat
unwind_to=dht_attr_cbk

[global.callpool.stack.1.frame.3]
ref_count=0
translator=mirror-client-0
complete=1
parent=mirror-replicate-0
wind_from=afr_stat
wind_to=children[call_child]->fops->stat
unwind_from=client3_1_stat_cbk
unwind_to=afr_stat_cbk

[global.callpool.stack.1.frame.4]
ref_count=0
translator=mirror-replicate-0
complete=1
parent=mirror-dht
wind_from=dht_stat
wind_to=subvol->fops->stat
unwind_from=afr_stat_cbk
unwind_to=dht_attr_cbk

[global.callpool.stack.1.frame.5]
ref_count=1
translator=mirror-dht
complete=0
parent=mirror-quota
wind_from=quota_stat
wind_to=FIRST_CHILD(this)->fops->stat
unwind_to=quota_stat_cbk

[global.callpool.stack.1.frame.6]
ref_count=1
translator=mirror-quota
complete=0
parent=mirror-write-behind
wind_from=wb_stat
wind_to=FIRST_CHILD(this)->fops->stat
unwind_to=wb_stat_cbk

[global.callpool.stack.1.frame.7]
ref_count=1
translator=mirror-write-behind
complete=0
parent=mirror-read-ahead
wind_from=default_stat
wind_to=FIRST_CHILD(this)->fops->stat
unwind_to=default_stat_cbk

[global.callpool.stack.1.frame.8]
ref_count=1
translator=mirror-read-ahead
complete=0
parent=mirror-io-cache
wind_from=default_stat
wind_to=FIRST_CHILD(this)->fops->stat
unwind_to=default_stat_cbk

[global.callpool.stack.1.frame.9]
ref_count=1
translator=mirror-io-cache
complete=0
parent=mirror-quick-read
wind_from=default_stat
wind_to=FIRST_CHILD(this)->fops->stat
unwind_to=default_stat_cbk

[global.callpool.stack.1.frame.10]
ref_count=1
translator=mirror-quick-read
complete=0
parent=mirror-stat-prefetch
wind_from=sp_stat
wind_to=FIRST_CHILD(this)->fops->stat
unwind_to=sp_stbuf_cbk

[global.callpool.stack.1.frame.11]
ref_count=1
translator=mirror-stat-prefetch
complete=0
parent=mirror
wind_from=io_stats_stat
wind_to=FIRST_CHILD(this)->fops->stat
unwind_to=io_stats_stat_cbk

[global.callpool.stack.1.frame.12]
ref_count=1
translator=mirror
complete=0
parent=fuse
wind_from=fuse_getattr_resume
wind_to=xl->fops->stat
unwind_to=fuse_attr_cbk

[global.callpool.stack.2]
uid=0
gid=0
pid=0
unique=0
type=0
cnt=1

[global.callpool.stack.2.frame.1]
ref_count=0
translator=glusterfs
complete=0
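
Reading stack.1 bottom-up (frame.12 is the wind from fuse into the graph, frame.1 the fuse request itself): the stat travelled through io-stats, stat-prefetch, quick-read, io-cache, read-ahead, write-behind, quota and dht, and dht fanned it out to both replicate subvolumes. mirror-replicate-0 wound to mirror-client-0 and unwound cleanly (frames 3 and 4, complete=1), but the mirror-replicate-1 frame (frame.2) shows complete=0, no unwind_from, and no client frame beneath it: afr never wound the stat to any of its children, and never unwound an error either, so dht_attr_cbk is still waiting on that subvolume and the fuse request hangs. A self-contained, hypothetical sketch of the fan-out pattern involved (child names are assumed for the second replica pair of a 2x2 volume; this is not the actual afr_stat code):

/* Hypothetical fan-out: wind only to children marked up.  If no child is
 * up and the translator does not unwind an error itself, nothing below
 * will ever call back and the parent frame waits forever. */
#include <stdio.h>

struct child {
        const char *name;
        int         up;   /* maintained from connect/disconnect events */
};

/* Returns how many children the stat was wound to. */
static int fan_out_stat(const struct child *children, int child_count)
{
        int wound = 0;
        for (int i = 0; i < child_count; i++) {
                if (!children[i].up)
                        continue;
                printf("winding stat to %s\n", children[i].name);
                wound++;
        }
        return wound;
}

int main(void)
{
        /* Assumed state of replicate-1 after the partial restart:
         * neither brick connection came back up. */
        const struct child replica1[] = {
                { "mirror-client-2", 0 },
                { "mirror-client-3", 0 },
        };

        if (fan_out_stat(replica1, 2) == 0)
                printf("no child up: the translator must unwind an error "
                       "(e.g. ENOTCONN) or the caller hangs\n");
        return 0;
}

The point of the sketch: a fan-out translator that finds zero usable children has to answer the caller itself; the statedump above is what it looks like when it does not.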



Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.
  
Actual results:


Expected results:


Additional info:

Comment 1 Pranith Kumar K 2011-12-29 05:27:43 UTC
The stat call did not wind to any client in mirror-replicate-1. This is the same issue as bug 770513.
Marking as duplicate.

*** This bug has been marked as a duplicate of bug 770513 ***