+++ This bug was initially created as a clone of Bug #1034116 +++

Description of problem:
The rebalance status output shows the run time in seconds, which is hard to read when the number is large, e.g.:

[root@7-VM1 core]# gluster volume rebalance flat status
Node          Rebalanced-files  size    scanned  failures  skipped  status  run time in secs
---------     ----------------  ------  -------  --------  -------  ------  ----------------
localhost     832000            13.7GB  5344344  1         228      failed  159836.00
10.70.36.133  1009405           15.7GB  5362837  2         206      failed  159836.00
10.70.36.132  823206            12.9GB  5416604  1         233      failed  159836.00
10.70.36.131  0                 0Bytes  5227829  0         0        failed  159836.00
volume rebalance: flat: success:

It would be better to show the run time as '1 day, XX:XX:XX' or "Hour:min:sec" rather than '159836.00'.

Version-Release number of selected component (if applicable):
============================================
3.4.0.44rhs-1.el6rhs.x86_64

Actual results:
The run time is always displayed in seconds.

Expected results:
The Hour:min:Sec format is better than seconds only.

Additional info:
REVIEW: http://review.gluster.org/10544 (dht: Output of rebalance to show run time in proper format) posted (#3) for review on master by Sakshi Bansal (sabansal)
REVIEW: http://review.gluster.org/10544 (cli : output of rebalance to show run time in proper format) posted (#6) for review on master by Sakshi Bansal
REVIEW: http://review.gluster.org/10544 (cli: output of rebalance to show run time in proper format) posted (#7) for review on master by Sakshi Bansal
COMMIT: http://review.gluster.org/10544 committed in master by Raghavendra G (rgowdapp)
------
commit 77245bcbf02754dec832ca34a9138bade2c9cfa3
Author: Sakshi <sabansal>
Date:   Tue May 5 10:55:56 2015 +0530

    cli: output of rebalance to show run time in proper format

    Change-Id: I775f13c8046dd2aeb9d4b86a737dcebb396778b4
    BUG: 1223625
    Signed-off-by: Sakshi Bansal <sabansal>
    Reviewed-on: http://review.gluster.org/10544
    Smoke: Gluster Build System <jenkins.com>
    CentOS-regression: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: N Balachandran <nbalacha>
    Reviewed-by: Raghavendra G <rgowdapp>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user