Bug 1016494 - Volume status operation after remove-brick is started on a volume fails, until remove-brick commit or remove-brick stop is done.
Product: GlusterFS
Classification: Community
Component: core
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: bugs@gluster.org
Depends On:
Reported: 2013-10-08 05:17 EDT by Shruti Sampat
Modified: 2015-10-07 08:15 EDT (History)
CC: 4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2015-10-07 08:15:59 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
glusterd logs (9.63 MB, text/x-log)
2013-10-08 05:17 EDT, Shruti Sampat

Description Shruti Sampat 2013-10-08 05:17:50 EDT
Created attachment 809187 [details]
glusterd logs

Description of problem:
After starting remove-brick on a volume, the gluster volume status command always fails with the following message:

[root@rhs ~]# gluster v status dis_rep_vol
Commit failed on localhost. Please check the log file for more details.

Once remove-brick commit or remove-brick stop is performed, volume status works fine again.

Version-Release number of selected component (if applicable):
glusterfs 3.4.1rc1

How reproducible:

Steps to Reproduce:
1. Start remove-brick on a volume.
2. Check the volume status for that volume.
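
The steps above can be sketched as a CLI session. This is a minimal reproduction sketch, assuming a running GlusterFS 3.4.1 distributed-replicate (replica 2) cluster; the volume name matches the transcript in this report, while the server names and brick paths are hypothetical.

```shell
# Start a remove-brick operation (for a replica-2 volume, bricks are
# removed one full replica set at a time -- hypothetical brick paths):
gluster volume remove-brick dis_rep_vol \
    server1:/bricks/brick1 server2:/bricks/brick1 start

# While the remove-brick operation is in progress, status fails:
gluster volume status dis_rep_vol
# Reported output: "Commit failed on localhost. Please check the log
# file for more details."

# After committing (or stopping) the remove-brick, status works again:
gluster volume remove-brick dis_rep_vol \
    server1:/bricks/brick1 server2:/bricks/brick1 commit
gluster volume status dis_rep_vol
```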

Actual results:
The volume status command fails with the message described above. After the remove-brick operation is committed or stopped, volume status works again.

Expected results:
Volume status command should display the status of the volume.

Additional info:
Comment 1 Niels de Vos 2015-05-17 18:00:53 EDT
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained, at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release, and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. If updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs@gluster.org".

If there is no response by the end of the month, this bug will get automatically closed.
Comment 2 Kaleb KEITHLEY 2015-10-07 08:15:59 EDT
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release please reopen this and change the version or open a new bug.
