Description of problem:
------------------------------
After a fix-layout is done on the volume, the rebalance status message just shows 'completed'. The same message is displayed when a full rebalance task completes on the volume, so the user cannot tell whether fix-layout or rebalance finished. The status message should clearly state 'fix-layout completed'.

Version-Release number of selected component (if applicable):
---------------------------------------------------------------
3.4.0.1rhs-1.el6rhs.x86_64

How reproducible:
------------------
Always

Steps to Reproduce:
-----------------------
1. Create a distribute/distribute-replicate volume and start it.
2. Mount the volume and create some files.
3. Add a brick and start fix-layout:

   gluster v rebalance dis_rep2 fix-layout start
   volume rebalance: dis_rep2: success: Starting rebalance on volume dis_rep2 has been successful.
   ID: 69796569-1f8e-4753-a03c-c3ecfeacbbd0

4. Check the rebalance status:

   gluster v rebalance dis_rep2 status
   Node          Rebalanced-files    size    scanned   failures   status      run time in secs
   ------        -----------------   -----   -------   --------   -------     ----------------
   localhost            0            0Bytes     0         0       completed         0.00
   10.70.34.86          0            0Bytes     0         0       completed         0.00
   10.70.34.85          0            0Bytes     0         0       completed         0.00
   volume rebalance: dis_rep2: success:

Actual results:
----------------------
Status message shows only 'completed'.

Expected results:
---------------------
The status message should clearly state 'fix-layout completed'.

Additional info:
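For illustration, a minimal sketch of the requested behaviour (this is NOT the actual glusterd code; the state table and function names are hypothetical): the status string shown in the table could combine the raw task state with the task type, so a fix-layout task reports 'fix-layout completed' instead of a bare 'completed'.

```python
# Hypothetical sketch, not glusterd source: map a raw defrag state plus a
# fix-layout flag to the user-facing status string.
STATES = {0: "not started", 1: "in progress", 2: "completed", 3: "failed"}

def status_string(state, is_fix_layout):
    """Return the status string for the rebalance status table."""
    base = STATES.get(state, "unknown")
    # Prefix fix-layout tasks so they are distinguishable from a full rebalance.
    if is_fix_layout and state in (1, 2):
        return "fix-layout " + base
    return base

print(status_string(2, False))  # -> completed
print(status_string(2, True))   # -> fix-layout completed
```

With such a mapping, the ambiguity described above disappears: a full rebalance still prints 'completed', while a fix-layout-only task prints 'fix-layout completed'.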
Downstream review URL: https://code.engineering.redhat.com/gerrit/#/c/13120/
Version :

After doing a fix-layout and checking the rebalance status, it shows 'fix-layout completed'. But after restarting glusterd and checking the rebalance status again, it now shows 'fix-layout in progress' even though there has been no change in the layout. Moving the bug to 'Assigned'.

Steps followed :
--------------
1) Created a distribute volume with 3 bricks
2) Fuse mounted the volume and created some files
3) Added 2 more bricks and performed fix-layout:

[root@boost brick1]# gluster v rebalance vol1 fix-layout start
volume rebalance: vol1: success: Starting rebalance on volume vol1 has been successful.
ID: efe4626f-80db-4a73-9ae8-732848b9fa75

Check status:

gluster v rebalance vol1 status
Node          Rebalanced-files   size    scanned   failures   skipped   status                   run time in secs
----          ----------------   ----    -------   --------   -------   ------                   ----------------
localhost            0           0Bytes     0         0          0      fix-layout completed           0.00
10.70.34.88          0           0Bytes     0         0          0      fix-layout completed           0.00
10.70.34.86          0           0Bytes     0         0          0      fix-layout completed           0.00
volume rebalance: vol1: success:

4) Restarted glusterd:

[root@boost brick1]# service glusterd stop
Stopping glusterd:                                         [  OK  ]
[root@boost brick1]# service glusterd start
Starting glusterd:                                         [  OK  ]

5) Checked the rebalance status now:

gluster v rebalance vol1 status
Node          Rebalanced-files   size    scanned   failures   skipped   status                   run time in secs
----          ----------------   ----    -------   --------   -------   ------                   ----------------
localhost            0           0Bytes     0         0          0      fix-layout in progress         0.00
10.70.34.88          0           0Bytes     0         0          0      fix-layout in progress         0.00
10.70.34.86          0           0Bytes     0         0          0      fix-layout in progress         0.00
volume rebalance: vol1: success:

Status shows that fix-layout is in progress even though there is no change in the layout.
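One plausible explanation for the stale status, sketched below as an assumption rather than a description of the actual glusterd implementation: if the state persisted across a glusterd restart only records that a fix-layout task was started, the freshly restarted daemon has no final outcome on hand and defaults to 'in progress' until the rebalance process is queried and the real result overwrites it.

```python
# Hypothetical sketch: how a restarted daemon could report a stale
# 'in progress' status. PERSISTED stands in for on-disk volume state;
# defrag_reply stands in for a later answer from the rebalance process.
PERSISTED = {"task": "fix-layout", "started": True}

def status_after_restart(persisted, defrag_reply=None):
    """Status shown before/after hearing back from the defrag process."""
    if defrag_reply is not None:
        # Real outcome received; show it.
        return persisted["task"] + " " + defrag_reply
    if persisted["started"]:
        # Task was started at some point; outcome unknown, so default
        # to 'in progress' even if it actually finished before the restart.
        return persisted["task"] + " in progress"
    return "not started"

print(status_after_restart(PERSISTED))               # stale status
print(status_after_restart(PERSISTED, "completed"))  # settled status
```

Under this assumption, the 'fix-layout in progress' lines in step 5 are the default shown before the daemon re-learns the true outcome, which matches the behaviour observed later in comment 6.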
[root@boost brick1]# gluster v i vol1
Volume Name: vol1
Type: Distribute
Volume ID: f925dc41-f28c-4619-a917-9c41351642b4
Status: Started
Number of Bricks: 5
Transport-type: tcp
Bricks:
Brick1: 10.70.34.86:/rhs/brick1/b1
Brick2: 10.70.34.88:/rhs/brick1/b2
Brick3: 10.70.34.85:/rhs/brick1/b3
Brick4: 10.70.34.86:/rhs/brick1/b4
Brick5: 10.70.34.88:/rhs/brick1/b5
Missed specifying the version in Comment 4: glusterfs 3.4.0.44.1u2rhs
Some observations:

1) Able to reproduce the issue in comment #4. However, the status field seems to be transient and "settles down" to 'fix-layout completed' when the command is run again.

=====================================================
[root@gatekeeper testvol]# service glusterd stop
[root@gatekeeper testvol]#                                 [  OK  ]
[root@gatekeeper testvol]# service glusterd start
Starting glusterd:                                         [  OK  ]
[root@gatekeeper testvol]#
[root@gatekeeper testvol]# gluster v rebalance testvol status
Node           Rebalanced-files   size    scanned   failures   skipped   status                   run time in secs
---------      ----------------   ----    -------   --------   -------   ------                   ----------------
localhost             0           0Bytes     0         0          0      fix-layout in progress         0.00
10.70.42.194          0           0Bytes     0         0          0      fix-layout in progress         0.00
10.70.42.203          0           0Bytes     0         0          0      fix-layout completed           0.00
10.70.42.251          0           0Bytes     0         0          0      fix-layout in progress         0.00
volume rebalance: testvol: success:

[root@gatekeeper testvol]# gluster v rebalance testvol status
Node           Rebalanced-files   size    scanned   failures   skipped   status                   run time in secs
---------      ----------------   ----    -------   --------   -------   ------                   ----------------
localhost             0           0Bytes     0         0          0      fix-layout completed           1.00
10.70.42.194          0           0Bytes     0         0          0      fix-layout completed           1.00
10.70.42.203          0           0Bytes     0         0          0      fix-layout completed           0.00
10.70.42.251          0           0Bytes     0         0          0      fix-layout completed           1.00
volume rebalance: testvol: success:
==============================================

2) Also able to observe the same behaviour with a full rebalance, where the status changes from 'in progress' to 'completed':

============================================
[root@tuxvm2 ~]# service glusterd stop
[root@tuxvm2 ~]# service glusterd start
Starting glusterd:                                         [  OK  ]
[root@tuxvm2 ~]# gluster v rebalance testvol status
Node           Rebalanced-files   size    scanned   failures   skipped   status        run time in secs
---------      ----------------   ----    -------   --------   -------   -----------   ----------------
localhost             0           0Bytes     0         0          0      in progress         0.00
10.70.42.194          0           0Bytes     0         0          0      in progress         0.00
10.70.40.111          0           0Bytes     0         0          0      in progress         0.00
10.70.42.203          0           0Bytes     0         0          0      completed           0.00
volume rebalance: testvol: success:

[root@tuxvm2 ~]# gluster v rebalance testvol status
Node           Rebalanced-files   size    scanned   failures   skipped   status        run time in secs
---------      ----------------   ----    -------   --------   -------   -----------   ----------------
localhost             0           0Bytes     0         0          0      completed           0.00
10.70.42.194          0           0Bytes     0         0          0      completed           0.00
10.70.40.111          0           0Bytes     0         0          0      completed           0.00
10.70.42.203          0           0Bytes     0         0          0      completed           0.00
volume rebalance: testvol: success:
===================================

A separate bug probably needs to be filed for this.
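The "settles down" behaviour above can be sketched as a simple poll loop (an illustrative assumption: re-querying eventually refreshes the per-node state, as the transcripts show). The simulated replies below mirror the two status runs on gatekeeper; the loop repeats until no node is still in progress.

```python
# Illustrative sketch of polling until the status settles. POLLS simulates
# successive replies from 'gluster v rebalance <vol> status': the first poll
# is stale, the second is settled, matching the transcripts above.
import itertools

POLLS = iter([
    {"localhost": "in progress", "10.70.42.194": "in progress",
     "10.70.42.203": "completed", "10.70.42.251": "in progress"},
    {"localhost": "completed", "10.70.42.194": "completed",
     "10.70.42.203": "completed", "10.70.42.251": "completed"},
])

def wait_until_settled(polls, limit=10):
    """Poll until every node reports 'completed' (or the limit is hit).
    Returns (polls_used, final_snapshot)."""
    for attempt in itertools.count(1):
        snapshot = next(polls)
        if all(s == "completed" for s in snapshot.values()) or attempt >= limit:
            return attempt, snapshot

attempts, final = wait_until_settled(POLLS)
print(attempts)  # 2 polls needed before the status settled
```

This matches observation 1: the first query after the glusterd restart shows a mix of 'in progress' and 'completed', and a second query returns 'completed' on every node.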
Hi Ravishankar, the bug has been raised: https://bugzilla.redhat.com/show_bug.cgi?id=1040345
As per comment 4, after restarting glusterd and checking the rebalance status, it shows that fix-layout is still in progress even though no layout change is happening. Moving the bug back to 'Assigned'.
adding 3.0 flag and removing 2.1.z