Bug 1027699 - 'gluster volume status' command fails on a server after glusterd is brought down and back up, while remove-brick is in progress
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 2.1.2
Assigned To: Kaushal
QA Contact: Shruti Sampat
Keywords: ZStream
Depends On: 1040809
Blocks:
Reported: 2013-11-07 05:05 EST by Shruti Sampat
Modified: 2015-05-15 14:19 EDT (History)
11 users

See Also:
Fixed In Version: glusterfs-3.4.0.50rhs-1
Doc Type: Bug Fix
Doc Text:
Previously, the gluster volume status command would fail on a node when glusterd was restarted while a remove-brick operation was in progress. With this fix, the command works as expected.
Story Points: ---
Clone Of:
: 1040809 (view as bug list)
Environment:
Last Closed: 2014-02-25 03:01:45 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
sosreport (15.56 MB, application/x-xz)
2013-11-07 05:18 EST, Shruti Sampat

Description Shruti Sampat 2013-11-07 05:05:47 EST
Description of problem:
-----------------------

In a single-node cluster, when remove-brick is in progress, glusterd is killed and then brought back up. Following this, 'gluster volume status' command fails on the node - 

[root@rhs ~]# gluster v status test_dis 
Commit failed on localhost. Please check the log file for more details.

The following errors are seen in the glusterd logs - 

[2013-11-07 03:02:59.984190] I [glusterd-handler.c:3498:__glusterd_handle_status_volume] 0-management: Received status volume req for volume test_dis
[2013-11-07 03:02:59.984708] E [glusterd-op-sm.c:1973:_add_remove_bricks_to_dict] 0-management: Failed to get brick count
[2013-11-07 03:02:59.984737] E [glusterd-op-sm.c:2037:_add_task_to_dict] 0-management: Failed to add remove bricks to dict
[2013-11-07 03:02:59.984753] E [glusterd-op-sm.c:2122:glusterd_aggregate_task_status] 0-management: Failed to add task details to dict
[2013-11-07 03:02:59.984768] E [glusterd-syncop.c:993:gd_commit_op_phase] 0-management: Commit of operation 'Volume Status' failed on localhost    

Version-Release number of selected component (if applicable):
glusterfs 3.4.0.35.1u2rhs

How reproducible:
Always

Steps to Reproduce:
1. Create a distribute volume with two bricks, start it, fuse mount it and create some data on the mount point.
2. Start remove-brick of one of the bricks.
3. While remove-brick is in progress, kill glusterd and start it again.
4. Check volume status - 
# gluster volume status
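
The steps above can be condensed into a shell sketch (a sketch only, assuming a single-node gluster installation; the hostname, brick paths, volume name, and mount point below are illustrative, not from the report):

```shell
#!/bin/sh
# Reproduction sketch for the reported failure. All names and paths
# are placeholders; adjust for your environment.

VOL=test_dis
MNT=/mnt/test_dis

# 1. Create and start a two-brick distribute volume, fuse mount it,
#    and create some data on the mount point.
gluster volume create $VOL rhs:/bricks/b1 rhs:/bricks/b2
gluster volume start $VOL
mount -t glusterfs rhs:/$VOL $MNT
dd if=/dev/urandom of=$MNT/file1 bs=1M count=100

# 2. Start removing one of the bricks (kicks off data migration).
gluster volume remove-brick $VOL rhs:/bricks/b2 start

# 3. While remove-brick is in progress, kill glusterd and restart it.
pkill glusterd
glusterd

# 4. Check volume status; on affected builds this fails with
#    "Commit failed on localhost. Please check the log file for more details."
gluster volume status $VOL
```

This is a live-cluster CLI session, not a standalone script; it requires a running Red Hat Gluster Storage node of the affected version.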

Actual results:
The command fails with the following message - 

Commit failed on localhost. Please check the log file for more details.

Expected results:
The command should not fail.

Additional info:
Comment 1 Shruti Sampat 2013-11-07 05:18:43 EST
Created attachment 820981 [details]
sosreport
Comment 2 Dusmant 2013-11-07 05:20:13 EST
Because of this problem, RHSC does not update the icon, and the task status does not get updated.
Comment 3 Shruti Sampat 2013-12-19 04:43:09 EST
Verified as fixed in glusterfs 3.4.0.50rhs.

Volume status command is successful after restarting glusterd while remove-brick is in progress.
Comment 4 Pavithra 2014-01-03 01:18:47 EST
Can you please verify the doc text for technical accuracy?
Comment 5 Kaushal 2014-01-03 02:15:56 EST
Doc text looks okay.
Comment 7 errata-xmlrpc 2014-02-25 03:01:45 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
