Bug 765330 (GLUSTER-3598) - Replace-brick fails if you start it simultaneously for two different volumes
Summary: Replace-brick fails if you start it simultaneously for two different volumes
Keywords:
Status: CLOSED WONTFIX
Alias: GLUSTER-3598
Product: GlusterFS
Classification: Community
Component: core
Version: 3.3-beta
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Amar Tumballi
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-09-21 07:02 UTC by Vijaykumar
Modified: 2013-12-19 00:06 UTC
CC: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: ---
Mount Type: ---
Documentation: DP
CRM:
Verified Versions:



Description krishnan parthasarathi 2011-09-21 04:13:01 UTC
When concurrent volume operations are issued to glusterd, at most one of the operations succeeds and the rest fail. This is by design, to avoid inconsistencies in the cluster configuration, and it is not specific to replace-brick operations.

Comment 1 Vijaykumar 2011-09-21 07:02:15 UTC
I have a distributed-replicate volume and a distributed-stripe volume. If I start replace-brick for both volumes simultaneously, the one started first succeeds, but the other one fails.
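A minimal reproduction sketch of the scenario above; the volume names (dist-rep, dist-stripe), server hostnames, and brick paths below are placeholders, not taken from this report:

    # Start replace-brick for both volumes at nearly the same time, e.g. backgrounded in a shell.
    gluster volume replace-brick dist-rep server1:/exports/brick1 server3:/exports/brick1 start &
    gluster volume replace-brick dist-stripe server2:/exports/brick2 server4:/exports/brick2 start &
    wait
    # Per the behaviour described in this report, whichever command reaches glusterd first
    # succeeds; the second is rejected because another volume operation is already in progress.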

