Bug 1218942 - Rebalance Status output lists an extra colon " : " after volume rebalance: <vol_name>: success:
Summary: Rebalance Status output lists an extra colon " : " after volume rebalance: <vol_name>: success:
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: distribute
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Sakshi
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard: dht-try-latest-build
Duplicates: 1286080 1286082
Depends On:
Blocks: 1308782
 
Reported: 2015-05-06 09:02 UTC by Sakshi
Modified: 2016-08-01 01:22 UTC
CC List: 6 users

Fixed In Version: 3.7.5-19
Doc Type: Bug Fix
Doc Text:
Clone Of: 962420
Clones: 1308782
Environment:
Last Closed: 2016-06-03 09:51:17 UTC
Embargoed:



Description Sakshi 2015-05-06 09:02:09 UTC
+++ This bug was initially created as a clone of Bug #962420 +++

Description of problem:
========================== 
Rebalance Status output lists an extra colon " : " after volume rebalance: <vol_name>: success:

Version-Release number of selected component (if applicable):
============================================================== 
3.4.0.6rhs-1.el6rhs.x86_64

How reproducible:
=================== 
always

Steps to Reproduce:
=========================
1. Perform a rebalance operation and check the rebalance status:
gluster volume rebalance vol-dis-rep status

Node          Rebalanced-files   size      scanned   failures   status        run time in secs
-----------   ----------------   -------   -------   --------   -----------   ----------------
localhost     199                1.1MB     569       0          in progress   406.00
10.70.36.37   201                7.6MB     666       0          in progress   406.00
10.70.36.36   0                  0Bytes    60948     0          in progress   406.00
10.70.36.38   0                  0Bytes    60948     0          in progress   406.00
volume rebalance: vol-dis-rep: success:

The extra colon (:) at the end of the command output after 'success' should not be present.

Example of other gluster command output:
----------------------------------------
[root@rhs-client11 ~]# gluster volume add-brick vol-dis-rep 10.70.36.35:/rhs/brick1/n1 10.70.36.36:/rhs/brick1/n2
volume add-brick: success
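
The defect is consistent with a formatting pattern in which a ": " separator is emitted before an optional status message even when that message is empty. The C sketch below is purely illustrative and is not the actual GlusterFS CLI source; the function and variable names are hypothetical, as is the guarded fix shown.

#include <stdio.h>

/* Hypothetical buggy variant: the ": " separator is printed
 * unconditionally, so an empty status message leaves a dangling colon. */
static void print_status_buggy(const char *volname, const char *msg)
{
    /* With msg == "", this prints "volume rebalance: vol-dis-rep: success: " */
    printf("volume rebalance: %s: success: %s\n", volname, msg);
}

/* Hypothetical fixed variant: emit the separator and message only
 * when a non-empty message is actually present. */
static void print_status_fixed(const char *volname, const char *msg)
{
    if (msg && *msg)
        printf("volume rebalance: %s: success: %s\n", volname, msg);
    else
        printf("volume rebalance: %s: success\n", volname);
}

int main(void)
{
    print_status_buggy("vol-dis-rep", "");  /* shows the trailing colon */
    print_status_fixed("vol-dis-rep", "");  /* clean output, as expected */
    return 0;
}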


Actual results:
volume rebalance: vol-dis-rep: success:

Expected results:
volume rebalance: vol-dis-rep: success

Additional info:

Comment 2 Susant Kumar Palai 2015-11-27 12:35:54 UTC
*** Bug 1286080 has been marked as a duplicate of this bug. ***

Comment 3 Susant Kumar Palai 2015-11-30 09:52:51 UTC
*** Bug 1286082 has been marked as a duplicate of this bug. ***

Comment 5 Mike McCune 2016-03-28 23:31:34 UTC
This bug was accidentally moved from POST to MODIFIED due to an error in automation; please contact mmccune with any questions.

Comment 6 Nithya Balachandran 2016-06-03 09:51:17 UTC
This has been fixed in RHGS 3.1.2.


[root@localhost ~]# rpm -qa |grep gluster
glusterfs-3.7.5-19.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-19.el7rhgs.x86_64
gluster-nagios-common-0.2.3-1.el7rhgs.noarch
glusterfs-api-3.7.5-19.el7rhgs.x86_64
glusterfs-cli-3.7.5-19.el7rhgs.x86_64
glusterfs-rdma-3.7.5-19.el7rhgs.x86_64
python-gluster-3.7.5-19.el7rhgs.noarch
gluster-nagios-addons-0.2.5-1.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-19.el7rhgs.x86_64
vdsm-gluster-4.16.30-1.3.el7rhgs.noarch
glusterfs-fuse-3.7.5-19.el7rhgs.x86_64
glusterfs-debuginfo-3.7.5-19.el7rhgs.x86_64
glusterfs-libs-3.7.5-19.el7rhgs.x86_64
glusterfs-server-3.7.5-19.el7rhgs.x86_64




[root@localhost ~]# gluster v rebalance vol1 status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes             1             0             0          in progress              13.00
volume rebalance: vol1: success

