Description of problem:
=======================
After executing a rebalance operation on the volume, add some more bricks and check the rebalance status. It shows 'not started' but still shows the files that were rebalanced in the previous operation.

Version-Release number of selected component (if applicable):
=============================================================
3.4.0.12rhs.beta3-1.el6rhs.x86_64

How reproducible:

Steps to Reproduce:
===================
1. Create a distributed volume
2. Add 2 bricks and start rebalance
3. Check rebalance status

gluster v rebalance vol_11 status
Node          Rebalanced-files  size     scanned  failures  status     run time in secs
localhost     28                280.0MB  305      0         completed  9.00
10.70.34.85   26                260.0MB  278      0         completed  9.00
10.70.34.86   40                400.0MB  344      0         completed  10.00

4. Add 2 more bricks
5. Check rebalance status (without starting another rebalance operation)

gluster v rebalance vol_11 status
Node          Rebalanced-files  size     scanned  failures  status       run time in secs
localhost     28                280.0MB  305      0         not started  9.00
10.70.34.85   26                260.0MB  278      0         not started  9.00
10.70.34.86   40                400.0MB  344      0         not started  10.00

Actual results:
===============
Status shows 'not started', whereas Rebalanced-files shows the number of files rebalanced in the previous operation.

Expected results:
=================
If status shows 'not started', then the other columns (Rebalanced-files, size, scanned, and run time) should show '0'. Ideally, if a new rebalance operation has not been started, the status should still show the status of the previous rebalance operation.

Additional info:
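To illustrate the inconsistency in step 5, here is a minimal sketch (not gluster code) of the check a monitoring script could apply to the CLI output: a row whose status is 'not started' should not carry non-zero counters from a previous run. The column layout is assumed from the output shown above.

```python
# Sketch: flag rebalance-status rows that claim 'not started' while still
# showing non-zero counters (the stale data reported in this bug).
# The whitespace-separated column layout is assumed from the CLI output above.

def find_inconsistent_rows(status_output):
    """Return node names whose status is 'not started' but whose counters are non-zero."""
    bad = []
    for line in status_output.strip().splitlines()[1:]:  # skip the header row
        parts = line.split()
        node, files, _size, scanned, _failures = parts[:5]
        status = " ".join(parts[5:-1])  # status may be two words; last field is run time
        if status == "not started" and (int(files) or int(scanned)):
            bad.append(node)
    return bad

# Sample taken from step 5 of this report:
output = """\
Node Rebalanced-files size scanned failures status run-time-in-secs
localhost 28 280.0MB 305 0 not started 9.00
10.70.34.85 26 260.0MB 278 0 not started 9.00
10.70.34.86 40 400.0MB 344 0 not started 10.00"""

print(find_inconsistent_rows(output))  # -> ['localhost', '10.70.34.85', '10.70.34.86']
```

All three nodes trip the check, matching the behaviour described above.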
sosreports at : http://rhsqe-repo.lab.eng.blr.redhat.com/bugs_necessary_info/982104/
Another related issue: when a gluster rebalance has completed and a brick is added to the volume after completion, 'gluster volume status all' gives incorrect output:

[root@localhost ~]# gluster volume status all
Status of volume: dv1
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.70.42.152:/brcks/dvb1                  49156   Y       11341
Brick 10.70.42.152:/brcks/dvb2                  49157   Y       11351
Brick 10.70.42.152:/brcks/dvb3                  49158   Y       27642
NFS Server on localhost                         2049    Y       27652

     Task          ID                                      Status
     ----          --                                      ------
Rebalance          4a96c34d-fe5e-48b3-b349-80621abb85f3    0

Here, that task ID was for the previous rebalance operation. The status is incorrectly shown as "Not Started" (the raw value 0). This affects task monitoring in RHSC. Can this be fixed, please?
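For context on why RHSC sees "Not Started": the raw '0' in the Status column is a numeric task-state code that a client has to decode. The mapping below is an assumption based on glusterfs's defrag status enum of that era, not taken from this report; verify it against the installed glusterfs source before relying on it.

```python
# Hypothetical decoder for the numeric Status column in 'gluster volume status all'.
# The code-to-label mapping is an ASSUMPTION (modelled on glusterfs's
# gf_defrag_status_t enum); confirm against the actual source tree in use.
DEFRAG_STATUS = {
    0: "not started",
    1: "in progress",
    2: "stopped",
    3: "completed",
    4: "failed",
}

def decode_task_status(code):
    """Map a raw status code (int or string) to a human-readable label."""
    return DEFRAG_STATUS.get(int(code), "unknown")

print(decode_task_status("0"))  # -> 'not started' (the stale value shown above)
```

Under this mapping, a stale code of 0 for a finished task is exactly what would make RHSC report "Not Started" for a rebalance that actually completed.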
https://code.engineering.redhat.com/gerrit/#/c/13661/
RHSC has a dependency on this bug; hence, moving the priority to High.
Version : glusterfs 3.4.0.52rhs
=======
Repeated the steps as mentioned in 'Steps to Reproduce'. After adding a brick once rebalance is completed, checking the rebalance status shows the output below:

[root@jay tmp]# gluster v add-brick vol3 10.70.34.89:/rhs/brick1/c7 10.70.34.87:/rhs/brick1/c8
volume add-brick: success

[root@jay tmp]# gluster v rebalance vol3 status
Node          Rebalanced-files  size     scanned  failures  skipped  status     run time in secs
----          ----------------  ----     -------  --------  -------  ------     ----------------
localhost     0                 0Bytes   53       0         2        completed  0.00
10.70.34.88   0                 0Bytes   54       0         2        completed  0.00
10.70.34.87   15                15.0MB   60       0         0        completed  1.00
10.70.34.89   6                 6.0MB    61       0         0        completed  0.00
volume rebalance: vol3: success:

Checking 'gluster volume status all' shows the rebalance task as "completed", with the same Task ID as when the rebalance operation was started:

[root@jay tmp]# gluster v rebalance vol4 start
volume rebalance: vol4: success: Starting rebalance on volume vol4 has been successful.
ID: ffc8ce22-6dc4-43a3-b9df-b30d54e18646

Status of volume: vol4
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.70.34.86:/rhs/brick1/d1                49159   Y       14375
Brick 10.70.34.87:/rhs/brick1/d2                49166   Y       7937
Brick 10.70.34.88:/rhs/brick1/d3                49155   Y       8617
Brick 10.70.34.89:/rhs/brick1/d4                49160   Y       22961
Brick 10.70.34.87:/rhs/brick1/d5                49167   Y       7973
Brick 10.70.34.89:/rhs/brick1/d6                49161   Y       23005
Brick 10.70.34.87:/rhs/brick1/d7                49168   Y       8021
NFS Server on localhost                         2049    Y       14535
NFS Server on 10.70.34.88                       2049    Y       8693
NFS Server on 10.70.34.87                       2049    Y       8039
NFS Server on 10.70.34.89                       2049    Y       23017

Task Status of Volume vol4
------------------------------------------------------------------------------
Task   : Rebalance
ID     : ffc8ce22-6dc4-43a3-b9df-b30d54e18646
Status : completed

Marking the bug as 'Verified'.
Can you please verify if the doc text is technically correct?
The doc text looks fine.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2014-0208.html