Bug 1210193 - Commands hanging on the client post recovery of failed bricks
Keywords:
Status: CLOSED DUPLICATE of bug 1205709
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-04-09 07:22 UTC by Anoop
Modified: 2023-09-14 02:57 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-05-09 17:33:40 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Anoop 2015-04-09 07:22:43 UTC
Description of problem:

Have following volume configuration:

Volume Name: vol1
Type: Distributed-Disperse
Volume ID: 44c0b7fa-62b6-4704-9819-57f1aac3c168
Status: Started
Number of Bricks: 2 x (4 + 2) = 12
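For reference, a 2 x (4 + 2) distributed-disperse layout like the one above could be created with the standard gluster CLI. This is a sketch only: the server names and brick paths below are placeholders, not taken from this report.

```shell
# Hypothetical brick layout: 12 bricks, 6 servers with 2 bricks each.
# disperse 6 + redundancy 2 means each subvolume has 4 data + 2 redundancy
# bricks; with 12 bricks gluster forms 2 such subvolumes: 2 x (4 + 2) = 12.
gluster volume create vol1 disperse 6 redundancy 2 \
    server{1..6}:/bricks/b1 \
    server{1..6}:/bricks/b2
gluster volume start vol1
```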

Now, if I fail more than the supported number of bricks and then recover them, operations (like ls) from the clients hang. There is no way to get out of this state.

Version-Release number of selected component (if applicable):

glusterfs-3.7dev-0.885.git0d36d4f.el6.x86_64

How reproducible:

Steps to Reproduce:
1. Create a distributed-disperse volume.
2. Mount it on a client and start I/O.
3. Fail more bricks than the redundancy allows, then bring them back online.
4. Run "ls" on the mount.
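The steps above might look like the following on the command line. This is a hedged sketch: the mount point, server name, and brick PIDs are hypothetical, and the exact bricks to kill depend on the subvolume layout.

```shell
# 1. Mount the volume on a client and start I/O (server name is a placeholder)
mount -t glusterfs server1:/vol1 /mnt/vol1

# 2. Fail more bricks than the redundancy allows. With redundancy 2, killing
#    three brick processes of one disperse subvolume exceeds the tolerance.
#    Brick PIDs appear in the output of:
gluster volume status vol1
#    then, for each chosen brick:  kill -KILL <brick-pid>

# 3. Bring the killed bricks back online
gluster volume start vol1 force

# 4. Per this report, the following then hangs on the client
ls /mnt/vol1
```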


Actual results:

Mount hangs on the client

Expected results:

No hang

Additional info:

Comment 1 Soumya Koduri 2015-04-14 12:27:29 UTC
Is it always reproducible? 
Can you please post the brick/client logs. Thanks.

Comment 2 Pranith Kumar K 2015-05-09 17:33:40 UTC

*** This bug has been marked as a duplicate of bug 1205709 ***

Comment 3 Red Hat Bugzilla 2023-09-14 02:57:47 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

