Bug 765330 (GLUSTER-3598)

Summary: Replace-brick fails if you start it simultaneously for two different volumes
Product: [Community] GlusterFS
Reporter: Vijaykumar <vijaykumar>
Component: core
Assignee: Amar Tumballi <amarts>
Status: CLOSED WONTFIX
Severity: medium
Priority: medium
Version: 3.3-beta
CC: gluster-bugs, kparthas, vraman
Hardware: x86_64
OS: Linux
Doc Type: Bug Fix

Description krishnan parthasarathi 2011-09-21 04:13:01 UTC
When concurrent volume operations are issued to glusterd, at most one of them succeeds and the rest fail. This is deliberate, to avoid inconsistencies in the cluster configuration, and it is not specific to replace-brick operations.

Comment 1 Vijaykumar 2011-09-21 07:02:15 UTC
I have a distributed-replicate volume and a distributed-stripe volume. If I start replace-brick on both volumes simultaneously, the one started first succeeds, but the other one fails.
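
For reference, a minimal reproduction sketch of the scenario described above. The volume names, hosts, and brick paths are hypothetical, and the exact error text printed by the losing command may vary between releases:

    # Issue the two replace-brick commands at roughly the same time
    # (e.g. from two terminals) against the same glusterd:
    gluster volume replace-brick dist-rep    server1:/bricks/old server3:/bricks/new start
    gluster volume replace-brick dist-stripe server2:/bricks/old server4:/bricks/new start

    # With the behaviour reported here, whichever command reaches glusterd
    # first succeeds; the second is rejected because glusterd serializes
    # volume operations cluster-wide, typically with an "operation failed" /
    # "another transaction is in progress" style message.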