REVIEW: http://review.gluster.org/5997 (cli: runtime in xml output of rebalance/remove-brick status) posted (#2) for review on master by Aravinda VK (avishwan)
REVIEW: http://review.gluster.org/5997 (cli: runtime in xml output of rebalance/remove-brick status) posted (#3) for review on master by Aravinda VK (avishwan)
REVIEW: http://review.gluster.org/5997 (cli: runtime in xml output of rebalance/remove-brick status) posted (#4) for review on master by Aravinda VK (avishwan)
REVIEW: http://review.gluster.org/5997 (cli: runtime in xml output of rebalance/remove-brick status) posted (#5) for review on master by Aravinda VK (avishwan)
COMMIT: http://review.gluster.org/5997 committed in master by Anand Avati (avati)
------
commit 7dba6f9b556288a95d6ca7e9c3222d14cae3def5
Author: Aravinda VK <avishwan>
Date:   Tue Sep 24 12:41:30 2013 +0530

    cli: runtime in xml output of rebalance/remove-brick status

    "runtime in secs" is available in the CLI output of rebalance status
    and remove-brick status, but not available in the xml output when
    --xml is passed. The runtime in the aggregate section will be the
    max of all nodes' runtimes.

    Example output:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <opRet>0</opRet>
      <opErrno>0</opErrno>
      <opErrstr/>
      <volRebalance>
        <op>3</op>
        <nodeCount>1</nodeCount>
        <node>
          <nodeName>localhost</nodeName>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <skipped>0</skipped>
          <runtime>1.00</runtime>
          <status>3</status>
          <statusStr>completed</statusStr>
        </node>
        <aggregate>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <skipped>0</skipped>
          <runtime>1.00</runtime>
          <status>3</status>
          <statusStr>completed</statusStr>
        </aggregate>
      </volRebalance>
    </cliOutput>

    BUG: 1012773
    Change-Id: I8deaba08922a53cd2d3b411e097a7b3cf591b127
    Signed-off-by: Aravinda VK <avishwan>
    Reviewed-on: http://review.gluster.org/5997
    Reviewed-by: Kaushal M <kaushal>
    Tested-by: Gluster Build System <jenkins.com>
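With the patch applied, the new runtime field can be consumed programmatically. Below is a minimal sketch in Python, assuming the XML above is captured from `gluster volume rebalance <volname> status --xml`; the helper name parse_rebalance_runtimes and the use of the standard-library ElementTree parser are illustrative, not part of the patch.

    import xml.etree.ElementTree as ET

    def parse_rebalance_runtimes(xml_text):
        # Parse the <cliOutput> document shown in the commit message above.
        root = ET.fromstring(xml_text)
        vol = root.find("volRebalance")
        # Per-node runtimes in seconds, keyed by <nodeName>.
        node_runtimes = {
            node.findtext("nodeName"): float(node.findtext("runtime"))
            for node in vol.findall("node")
        }
        # Aggregate runtime; per the commit message this is the max of
        # all node runtimes.
        aggregate = float(vol.findtext("aggregate/runtime"))
        return node_runtimes, aggregate

For the single-node example above this returns ({'localhost': 1.0}, 1.0); in general the aggregate value should equal max(node_runtimes.values()).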
REVIEW: http://review.gluster.org/6118 (cli: runtime in xml output of rebalance/remove-brick status) posted (#1) for review on release-3.4 by Aravinda VK (avishwan)
COMMIT: http://review.gluster.org/6118 committed in release-3.4 by Vijay Bellur (vbellur)
------
commit 45d6c6ba540beaaab2fd9d2703ef8b2ce0da0454
Author: Aravinda VK <avishwan>
Date:   Tue Oct 22 14:00:20 2013 +0530

    cli: runtime in xml output of rebalance/remove-brick status

    "runtime in secs" is available in the CLI output of rebalance status
    and remove-brick status, but not available in the xml output when
    --xml is passed. The runtime in the aggregate section will be the
    max of all nodes' runtimes.

    Example output: identical to the master commit above (review 5997).

    BUG: 1012773
    Change-Id: I6de59d4ed03983b6ffc014d6a331251cf635a690
    Signed-off-by: Aravinda VK <avishwan>
    Reviewed-on: http://review.gluster.org/5997
    Reviewed-by: Kaushal M <kaushal>
    Reviewed-on: http://review.gluster.org/6118
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.4.3, please reopen this bug report.

glusterfs-3.4.3 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should already be available or will become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

The fix for this bug is likely to be included in all future GlusterFS releases, i.e. releases > 3.4.3. Along the same lines, the recent glusterfs-3.5.0 release [3] is likely to include the fix as well. You can verify this by reading the comments in this bug report and checking for comments mentioning "committed in release-3.5".

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/5978
[2] http://news.gmane.org/gmane.comp.file-systems.gluster.user
[3] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137