Bug 764038 (GLUSTER-2306)

Summary: basic replace-brick functionality unavailable
Product: [Community] GlusterFS
Reporter: tcp
Component: distribute
Assignee: Anand Avati <aavati>
Status: CLOSED NOTABUG
Severity: medium
Priority: high
Version: 3.1.2
CC: amarts, chrisw, gluster-bugs, pkarampu, sgowda, vijay
Hardware: All
OS: Linux

Description Amar Tumballi 2011-01-21 07:20:07 UTC
Once 'status' reports that the migration is complete, please run:

root@comrade:~/work/glusterfs-latest/glusterfs-3.1.2# gluster volume replace-brick testvol comrade:/export/dir4 comrade:/export/dir5 commit

NOTE: the 'commit' at the end is the important part.
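
For reference, the full replace-brick sequence being described here is roughly the following (a minimal sketch, assuming the same testvol volume and brick paths used in the transcript below; 'status' may need to be re-run until it reports that the migration is complete):

gluster volume replace-brick testvol comrade:/export/dir4 comrade:/export/dir5 start
gluster volume replace-brick testvol comrade:/export/dir4 comrade:/export/dir5 status
gluster volume replace-brick testvol comrade:/export/dir4 comrade:/export/dir5 commit
gluster volume info testvol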

Comment 1 tcp 2011-01-21 09:10:48 UTC
I hit this bug while I was trying to recreate Bug 2107.
I did a fresh install of 3.1.2 and created a DHT volume with 4 bricks.
I then started the volume and, soon after, ran a replace-brick via the gluster CLI. It completed with a success message, but a subsequent volume info did not reflect the change. I then tried to swap the earlier brick back in, and that command errored out.

Please see the command outputs below:

root@comrade:~# gluster volume info
No volumes present

root@comrade:~# gluster volume create testvol comrade:/export/dir1 comrade:/export/dir2 comrade:/export/dir3 comrade:/export/dir4
Creation of volume testvol has been successful. Please start the volume to access data.
root@comrade:~# gluster volume start testvol
Starting volume testvol has been successful

root@comrade:~/work/glusterfs-latest/glusterfs-3.1.2# gluster volume replace-brick testvol comrade:/export/dir4 comrade:/export/dir5 start
replace-brick started successfully

root@comrade:~/work/glusterfs-latest/glusterfs-3.1.2# gluster volume info

Volume Name: testvol
Type: Distribute
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: comrade:/export/dir1
Brick2: comrade:/export/dir2
Brick3: comrade:/export/dir3
Brick4: comrade:/export/dir4

root@comrade:~/work/glusterfs-latest/glusterfs-3.1.2# gluster volume replace-brick testvol comrade:/export/dir4 comrade:/export/dir5 status
Number of files migrated = 0       Current file=

root@comrade:~/work/glusterfs-latest/glusterfs-3.1.2# gluster volume replace-brick testvol comrade:/export/dir5 comrade:/export/dir4 start
replace-brick failed to start

Relevant errors from the glusterd log file (thanks to Shishir for pointing me to these) -

...
...

[2011-01-21 12:56:37.235472] E [glusterd-op-sm.c:1102:glusterd_op_stage_replace_brick] : replace brick: incorrect source or  destination bricks specified
[2011-01-21 12:56:37.235491] E [glusterd3_1-mops.c:1202:glusterd3_1_stage_op] : Staging failed
[2011-01-21 12:56:37.235501] E [glusterd-op-sm.c:5410:glusterd_op_sm] glusterd: handler returned: -1
...
...


There are actually two bugs here -

1. The replace-brick command should not report success when the operation has actually failed.

2. The validation of source/destination brick seems to be incorrect.

Comment 2 tcp 2011-01-21 11:01:06 UTC
My bad.
I was misled by the man page, which does not document the commit option, and I completely missed the fix for bug 764005.

Can be closed as "Resolved/Dumb user".