Bug 822338 - Replace-brick status fails after the start operation.
Product: GlusterFS
Classification: Community
Component: glusterd
Hardware: x86_64 Linux
Priority: unspecified
Severity: urgent
Assigned To: krishnan parthasarathi
Vijaykumar Koppad
Depends On:
Blocks: 817967
Reported: 2012-05-17 02:17 EDT by Vijaykumar Koppad
Modified: 2015-11-03 18:04 EST
CC: 3 users

See Also:
Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2013-07-24 13:46:33 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions: 3.3.0qa42
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Vijaykumar Koppad 2012-05-17 02:17:41 EDT
Description of problem: On a distribute volume, any replace-brick operation issued after replace-brick start fails with the error "brick or a prefix of it is already part of a volume".

Version-Release number of selected component (if applicable):

How reproducible: Always

Steps to Reproduce:
1. Create a distribute volume.
2. Start a replace-brick operation on one of its bricks.
3. Then run replace-brick status (a command sketch follows this list).
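
A minimal reproduction sketch, assuming the 3.3.x replace-brick CLI syntax; the host name and brick paths below are illustrative, not taken from the original report:

# create and start a 4-brick distribute volume (hypothetical host/paths)
gluster volume create doa host1:/root/bricks/doa/d1 host1:/root/bricks/doa/d2 \
        host1:/root/bricks/doa/d3 host1:/root/bricks/doa/d4
gluster volume start doa
# begin migrating one brick to a new destination path
gluster volume replace-brick doa host1:/root/bricks/doa/d1 host1:/root/bricks/doa/d5 start
# querying status immediately afterwards triggers the failure
gluster volume replace-brick doa host1:/root/bricks/doa/d1 host1:/root/bricks/doa/d5 status

The status query fails because glusterd creates and validates the destination brick path again on every replace-brick sub-command instead of only at start, which is what the fix referenced in comment 1 ("replace-brick should create dst brick path only once") addresses.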
Actual results: The status query fails with the "already part of a volume" error shown under Additional info.

Expected results: The status query should succeed and report the progress of the replace-brick operation.

Additional info:

Volume Name: doa
Type: Distribute
Volume ID: c6da8c1c-bc21-4ac5-bf9a-59b7cdb2ba1c
Status: Started
Number of Bricks: 4
Transport-type: tcp

gluster --mode=script volume replace-brick doa status 
/root/bricks/doa/d5 or a prefix of it is already part of a volume
Comment 1 Anand Avati 2012-05-19 05:35:55 EDT
CHANGE: http://review.gluster.com/3354 (glusterd: replace-brick should create dst brick path only once.) merged in master by Vijay Bellur (vijay@gluster.com)
