Bug 1029237 - Extra erroneous row seen in status output
Summary: Extra erroneous row seen in status output
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: 3.4.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-11-11 23:44 UTC by purpleidea
Modified: 2015-10-07 13:19 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-10-07 13:19:00 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description purpleidea 2013-11-11 23:44:43 UTC
Description of problem:

Sorry if this is assigned to the wrong component; I wasn't 100% sure which one is right.

All tests were done on gluster 3.4.1, using CentOS 6.4 on VMs.
The firewall has been disabled for testing purposes.

To reproduce this bug, I first follow the exact steps outlined in:
https://bugzilla.redhat.com/show_bug.cgi?id=1029235

I have pasted them here for reference:
#>>>

[root@vmx1 ~]# gluster volume add-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9
volume add-brick: success
[root@vmx1 ~]# gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9 status
                Node  Rebalanced-files    size  scanned  failures  skipped       status  run-time in secs
           ---------  ----------------  ------  -------  --------  -------  -----------  ----------------
           localhost                 0  0Bytes        0         0           not started              0.00
    vmx2.example.com                 0  0Bytes        0         0           not started              0.00
[root@vmx1 ~]# gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9 start
volume remove-brick start: success
ID: ecbcc2b6-4351-468a-8f53-3a09159e4059
[root@vmx1 ~]# gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9 status
                Node  Rebalanced-files    size  scanned  failures  skipped       status  run-time in secs
           ---------  ----------------  ------  -------  --------  -------  -----------  ----------------
           localhost                 0  0Bytes        8         0             completed              0.00
    vmx2.example.com                 0  0Bytes        0         1                failed              0.00

[root@vmx1 ~]# gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
[root@vmx1 ~]#

#<<<


On the other node, the output shows an *extra row* (also including the failure):

[root@vmx2 ~]# gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9 status
                Node  Rebalanced-files    size  scanned  failures  skipped       status  run-time in secs
           ---------  ----------------  ------  -------  --------  -------  -----------  ----------------
           localhost                 0  0Bytes        0         0             completed              0.00
           localhost                 0  0Bytes        0         0             completed              0.00
    vmx1.example.com                 0  0Bytes        0         1                failed              0.00


There shouldn't be a failure in the first place (rhbz#1029235), but in addition, the erroneous (extra) row shouldn't be shown either. I'm not sure if this is related to the other bug, but the steps above should give you an easy way to reproduce it.
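
A quick way to spot the discrepancy is to query the status from both peers and count the data rows. The snippet below is only a sketch: it assumes passwordless root ssh to both hostnames, and that matching on the status keywords is enough to skip the header lines.

# Sketch: compare the remove-brick status row counts reported by each peer.
# Assumes passwordless ssh as root to both hostnames used above.
VOL=examplevol
BRICKS="vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9"

for host in vmx1.example.com vmx2.example.com; do
    rows=$(ssh root@"$host" "gluster volume remove-brick $VOL $BRICKS status" \
           | grep -cE 'completed|failed|not started|in progress')
    echo "$host reports $rows data row(s)"
done
# Both peers should report the same number of rows (one per participating
# node); with this bug, one peer shows a duplicate 'localhost' row.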


Version-Release number of selected component (if applicable):
gluster --version
glusterfs 3.4.1 built on Sep 27 2013 13:13:58

How reproducible:
100%
Kaushal M has also mentioned that he has seen this bug a number of times, and he asked me to open this ticket.

Steps to Reproduce:
1. Follow the steps above (a consolidated sketch follows).
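
For convenience, here is the same sequence as a minimal sketch. It assumes the 'examplevol' volume from bug 1029235 already exists, that both peers are in the trusted storage pool, and that root ssh access to vmx2.example.com is available; the VOL and BRICKS variables are just shorthand for the names used above.

# Sketch of the reproduction sequence (run as root on vmx1.example.com).
# Assumes 'examplevol' already exists as set up in bug 1029235.
VOL=examplevol
BRICKS="vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9"

gluster volume add-brick $VOL $BRICKS
gluster volume remove-brick $VOL $BRICKS start
gluster volume remove-brick $VOL $BRICKS status

# The extra row shows up when the status is queried from the other peer:
ssh root@vmx2.example.com "gluster volume remove-brick $VOL $BRICKS status"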

Actual results:
See erroneous row.

Expected results:
No erroneous row seen.

Additional info:
I didn't check whether the --xml output is consistent; however, that would be something to verify before committing the fix.
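
One possible check is sketched below; it assumes the XML status output carries one <node> element per table row (this should be verified against the actual schema) and that xmllint from libxml2 is installed.

# Sketch: check whether the --xml status output also contains the extra row.
# Assumes one <node> element per table row in the XML output (verify against
# the actual schema) and that xmllint is available.
VOL=examplevol
BRICKS="vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9"

gluster volume remove-brick $VOL $BRICKS status --xml \
    | xmllint --xpath 'count(//node)' -
# The count should equal the number of peers participating in the
# remove-brick, regardless of which node the command is run from.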

Comment 1 Niels de Vos 2015-05-17 22:00:38 UTC
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained; at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release, and will not be fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs".

If there is no response by the end of the month, this bug will get automatically closed.

Comment 2 Kaleb KEITHLEY 2015-10-07 13:19:00 UTC
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release, please reopen this bug and change the version, or open a new bug.

