Description of problem:
-----------------------
In a single-node cluster, glusterd is killed while a remove-brick operation is in progress and is then brought back up. Following this, the 'gluster volume status' command fails on the node -

[root@rhs ~]# gluster v status test_dis
Commit failed on localhost. Please check the log file for more details.

The following errors are seen in the glusterd logs -

[2013-11-07 03:02:59.984190] I [glusterd-handler.c:3498:__glusterd_handle_status_volume] 0-management: Received status volume req for volume test_dis
[2013-11-07 03:02:59.984708] E [glusterd-op-sm.c:1973:_add_remove_bricks_to_dict] 0-management: Failed to get brick count
[2013-11-07 03:02:59.984737] E [glusterd-op-sm.c:2037:_add_task_to_dict] 0-management: Failed to add remove bricks to dict
[2013-11-07 03:02:59.984753] E [glusterd-op-sm.c:2122:glusterd_aggregate_task_status] 0-management: Failed to add task details to dict
[2013-11-07 03:02:59.984768] E [glusterd-syncop.c:993:gd_commit_op_phase] 0-management: Commit of operation 'Volume Status' failed on localhost

Version-Release number of selected component (if applicable):
glusterfs 3.4.0.35.1u2rhs

How reproducible:
Always

Steps to Reproduce:
1. Create a distribute volume with two bricks, start it, fuse mount it and create some data on the mount point.
2. Start remove-brick of one of the bricks.
3. While remove-brick is in progress, kill glusterd and start it again.
4. Check volume status - # gluster volume status
(A command-line sketch of these steps is given under Additional info below.)

Actual results:
The command fails with the following message -
Commit failed on localhost. Please check the log file for more details.

Expected results:
The command should not fail.

Additional info:
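For reference, a minimal command-line sketch of the reproduction steps above. The hostname (rhs), brick paths, mount point and file sizes are illustrative assumptions; only the volume name test_dis comes from the report.

# gluster volume create test_dis rhs:/bricks/brick1 rhs:/bricks/brick2
# gluster volume start test_dis
# mount -t glusterfs rhs:/test_dis /mnt/test_dis
# dd if=/dev/urandom of=/mnt/test_dis/file1 bs=1M count=100

(start removing one brick; data migration runs in the background)
# gluster volume remove-brick test_dis rhs:/bricks/brick2 start

(while the remove-brick task is still in progress, restart glusterd)
# kill -9 $(pidof glusterd)
# service glusterd start

(on affected builds, this now fails with "Commit failed on localhost")
# gluster volume status test_dis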
Created attachment 820981 [details] sosreport
Because of this problem, RHSC does not update the task icon, and the task status does not get updated.
Verified as fixed in glusterfs 3.4.0.50rhs. The 'gluster volume status' command succeeds after glusterd is restarted while remove-brick is in progress.
Can you please verify the doc text for technical accuracy?
Doc text looks okay.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2014-0208.html