Bug 1029235

Summary: Simple brick operations shouldn't fail
Product: GlusterFS (Community)
Component: core
Version: 3.4.1
Status: CLOSED EOL
Severity: unspecified
Priority: unspecified
Reporter: purpleidea
Assignee: bugs <bugs>
CC: bugs, gluster-bugs, purpleidea
Hardware: Unspecified
OS: Unspecified
Type: Bug
Doc Type: Bug Fix
Last Closed: 2015-10-07 13:17:26 UTC

Description purpleidea 2013-11-11 23:36:00 UTC
Description of problem:

Sorry if this is assigned to the wrong component; I wasn't 100% sure which one is right.

All tests were done on Gluster 3.4.1, using CentOS 6.4 on VMs.
The firewall was disabled for testing purposes.
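
The report doesn't show how examplevol was created. For a self-contained reproduction, a minimal setup might look like the sketch below; the plain distribute layout and the /tmp/foo1 brick paths are assumptions, chosen only to match the host and path naming in the transcript:

# Assumed setup (not from the original report): both peers probed,
# then a two-brick distribute volume created and started.
gluster peer probe vmx2.example.com
gluster volume create examplevol vmx1.example.com:/tmp/foo1 vmx2.example.com:/tmp/foo1
gluster volume start examplevol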



Simple operations shouldn't fail. Running the following commands in succession, with no files on the volume:

# gluster volume add-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9
# gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9 start ... status

shows a failure:

[root@vmx1 ~]# gluster volume add-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9
volume add-brick: success
[root@vmx1 ~]# gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9 status
                Node  Rebalanced-files     size   scanned   failures   skipped        status   run-time in secs
           ---------   ---------------   ------   -------   --------   -------   -----------   ----------------
           localhost                 0   0Bytes         0          0             not started               0.00
    vmx2.example.com                 0   0Bytes         0          0             not started               0.00
[root@vmx1 ~]# gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9 start
volume remove-brick start: success
ID: ecbcc2b6-4351-468a-8f53-3a09159e4059
[root@vmx1 ~]# gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9 status
                Node  Rebalanced-files     size   scanned   failures   skipped        status   run-time in secs
           ---------   ---------------   ------   -------   --------   -------   -----------   ----------------
           localhost                 0   0Bytes         8          0               completed               0.00
    vmx2.example.com                 0   0Bytes         0          1                  failed               0.00

# ^^^^ the failure is seen right here: vmx2.example.com reports status "failed"

[root@vmx1 ~]# gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
[root@vmx1 ~]#
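
Whether this is a real rebalance failure or only a status-reporting bug can usually be narrowed down from the rebalance log on the failing node. The sketch below checks the default log location for the volume's rebalance process; treat the exact filename as an assumption:

# On vmx2, look for errors from the remove-brick rebalance
# (log path assumed from the default /var/log/glusterfs layout):
grep -iE 'error|failed' /var/log/glusterfs/examplevol-rebalance.log | tail -n 20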


Version-Release number of selected component (if applicable):
gluster --version
glusterfs 3.4.1 built on Sep 27 2013 13:13:58

How reproducible:
100%

Steps to Reproduce:
1. Run the add-brick / remove-brick sequence shown in the description on a volume with no files.

Actual results:
remove-brick status reports "failed" for vmx2.example.com (1 failure).

Expected results:
Both nodes report "completed" with 0 failures.

Additional info:
It might be that the actual operation worked as expected, and this is just a UI bug, but it's impossible for me to tell. In any case, this is very easy to reproduce. Cheers.
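
Since this reproduces 100% of the time, a short loop can confirm that the failure is deterministic rather than a race. This is a sketch against the examplevol volume above; the per-iteration brick paths (foo9-$i) are hypothetical, used because re-adding a previously removed brick directory can be rejected:

#!/bin/sh
# Sketch: repeat the add/remove cycle and watch the status output for "failed".
for i in 1 2 3 4 5; do
    b1="vmx1.example.com:/tmp/foo9-$i"
    b2="vmx2.example.com:/tmp/foo9-$i"
    gluster volume add-brick examplevol "$b1" "$b2"
    gluster volume remove-brick examplevol "$b1" "$b2" start
    sleep 5   # give the (empty) rebalance time to finish
    gluster volume remove-brick examplevol "$b1" "$b2" status
    echo y | gluster volume remove-brick examplevol "$b1" "$b2" commit
done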

Comment 1 Niels de Vos 2015-05-17 22:00:51 UTC
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained; at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release, and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. If updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs".

If there is no response by the end of the month, this bug will get automatically closed.

Comment 2 Kaleb KEITHLEY 2015-10-07 13:17:26 UTC
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release, please reopen it and change the version, or open a new bug.