Bug 817501 - replace-brick status operation reports wrong message
Status: CLOSED WORKSFORME
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Assigned To: krishnan parthasarathi
Reported: 2012-04-30 05:56 EDT by Shwetha Panduranga
Modified: 2015-11-03 18:04 EST (History)
CC: 3 users

Doc Type: Bug Fix
Last Closed: 2012-06-19 04:32:26 EDT
Type: Bug


Attachments
glusterd logs (470.69 KB, text/x-log)
2012-04-30 05:56 EDT, Shwetha Panduranga

Description Shwetha Panduranga 2012-04-30 05:56:54 EDT
Created attachment 581164 [details]
glusterd logs

Description of problem:
---------------------------
The replace-brick start operation reports success, but every subsequent status operation reports a "replace-brick status unknown" message.

glusterd log:-
----------------
[2012-04-30 14:57:49.325640] D [glusterd-replace-brick.c:708:rb_spawn_glusterfs_client] 0-: stat on mountpoint succeeded

[2012-04-30 14:57:49.325704] D [glusterd-replace-brick.c:1070:rb_get_xattr_command] 0-management: getxattr on key: glusterfs.pump.status failed

[2012-04-30 14:57:49.325739] D [glusterd-replace-brick.c:1194:rb_do_operation] 0-management: Sending replace-brick sub-command status failed.

[2012-04-30 14:57:49.356754] D [glusterd-op-sm.c:3039:glusterd_op_commit_perform] 0-: Returning -1

[2012-04-30 14:57:49.356813] E [glusterd-op-sm.c:2324:glusterd_op_ac_send_commit_op] 0-management: Commit failed
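The failing step in the log above is the getxattr probe on the virtual key "glusterfs.pump.status", which glusterd issues against the replace-brick client mountpoint (rb_get_xattr_command). A minimal sketch of that probe from userspace, assuming a Linux host; the path and error handling here are illustrative, not glusterd's actual code:

```python
import os

def probe_pump_status(mountpoint):
    """Attempt to read the glusterfs.pump.status virtual xattr.

    On a working replace-brick client mount, the glusterfs client
    answers this getxattr with the migration status string. When the
    call fails, as in the log above, glusterd logs "getxattr on key:
    glusterfs.pump.status failed" and the CLI falls back to printing
    "replace-brick status unknown".
    """
    try:
        value = os.getxattr(mountpoint, "glusterfs.pump.status")
        return value.decode()
    except OSError:
        return None

# Probing an ordinary (non-gluster) directory fails the same way:
print(probe_pump_status("/tmp"))
```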

command execution output:-
--------------------------------
[04/30/12 - 14:33:21 root@APP-SERVER1 ~]# gluster volume replace-brick dstore 192.168.2.35:/export1/dstore1 192.168.2.35:/export2/dstore1 start
replace-brick started successfully
[04/30/12 - 14:33:40 root@APP-SERVER1 ~]# gluster volume replace-brick dstore 192.168.2.35:/export1/dstore1 192.168.2.35:/export2/dstore1 status
replace-brick status unknown
[04/30/12 - 14:33:43 root@APP-SERVER1 ~]# gluster volume replace-brick dstore 192.168.2.35:/export1/dstore1 192.168.2.35:/export2/dstore1 status
replace-brick status unknown


Version-Release number of selected component (if applicable):
mainline

How reproducible:
often

create_dirs.sh:-
----------------------
#!/bin/bash
# Creates a 25x12 tree of directories under ./deep_dirs and populates
# each leaf directory with five files of 1 MB to 5 MB.

mountpoint=$(pwd)
main_dir="$mountpoint/deep_dirs"

mkdir -p "$main_dir"
cd "$main_dir" || exit 1

for i in {1..25}; do
	level1_dir="$main_dir/l1_dir.$i"
	echo "creating directory: $level1_dir"
	mkdir "$level1_dir"
	cd "$level1_dir" || exit 1
	for j in {1..12}; do
		level2_dir="$level1_dir/l2.dir.$j"
		echo "creating directory: $level2_dir"
		mkdir "$level2_dir"
		cd "$level2_dir" || exit 1
		for k in {1..5}; do
			echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
			echo "creating file: file.$k"
			dd if=/dev/zero of="file.$k" bs=1M count="$k"
		done
		cd "$level1_dir" || exit 1
	done
	cd "$main_dir" || exit 1
done
cd "$mountpoint"
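For reference, a quick sanity check of what the script should leave behind (counts derived from the loop bounds above, not from the source bug report):

```python
# create_dirs.sh loops: 25 level-1 dirs x 12 level-2 dirs x 5 files,
# with file.k sized k MB, so each leaf directory holds 1+2+3+4+5 = 15 MB.
leaf_dirs = 25 * 12
files = leaf_dirs * 5
total_mb = leaf_dirs * sum(range(1, 6))

print(leaf_dirs, files, total_mb)  # 300 leaf dirs, 1500 files, 4500 MB
```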

Steps to Reproduce:
-------------------
1. Create a distribute volume with one brick (brick1).

2. Create a FUSE mount and run "create_dirs.sh" from it.

3. After the script completes successfully, execute:
"gluster volume add-brick <volume_name> replica 2 <brick2>" to convert it to a replicate volume.

4. Execute "ls -lR" from the FUSE mount to self-heal the files.

5. Once the self-heal is complete, bring down 'brick2' (the newly added brick).

6. Execute: "gluster volume replace-brick dstore <brick1> <brick3> start"

7. Execute: "gluster volume replace-brick dstore <brick1> <brick3> status"
  
Actual results:
-----------------
replace-brick status unknown

Expected results:
--------------------
Number of files migrated = <number_migrated>        Migration <status>

Additional info:
-------------------
[04/30/12 - 15:08:53 root@APP-SERVER1 ~]# gluster volume info
 
Volume Name: dstore
Type: Replicate
Volume ID: 16574147-566b-4644-8ac5-3ad27f1baaf7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.2.35:/export1/dstore1
Brick2: 192.168.2.36:/export1/dstore1
Options Reconfigured:
cluster.self-heal-daemon: off
performance.stat-prefetch: off
