+++ This bug was initially created as a clone of Bug #1223338 +++
Description of problem:
The glusterd process could crash while executing the remove-brick status command around the time when the local remove-brick process (i.e., the rebalance process) has completed migrating data.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Create and start a volume.
2. Add files/directories as required.
3. Remove one or more bricks using the remove-brick start command.
4. Issue the remove-brick status command around the time when the local remove-brick process completes.
Actual results:
The glusterd process crashes.
Expected results:
glusterd shouldn't crash. It would be helpful if the remove-brick status command instead failed with a message saying that the rebalance process may have just completed migrating data from the bricks being removed.
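The steps above can be sketched as gluster CLI commands. This is a hypothetical reproduction script, not from the original report: the volume name, host, brick paths, and mount point are all placeholders, and the polling loop simply tries to hit the window around rebalance completion.

```shell
#!/bin/sh
# Hypothetical reproduction sketch for the remove-brick status crash.
# "testvol", "host1", and the /bricks/* paths are placeholders.

# 1. Create and start a distribute volume with a few bricks.
gluster volume create testvol host1:/bricks/b1 host1:/bricks/b2 host1:/bricks/b3
gluster volume start testvol

# 2. Mount the volume and add some files so rebalance has data to migrate.
mount -t glusterfs host1:/testvol /mnt/testvol
i=1
while [ "$i" -le 100 ]; do
    dd if=/dev/zero of="/mnt/testvol/file$i" bs=1K count=10 2>/dev/null
    i=$((i + 1))
done

# 3. Start removing a brick; this launches the rebalance (data migration) process.
gluster volume remove-brick testvol host1:/bricks/b3 start

# 4. Poll status repeatedly; the crash window is around the moment the local
#    rebalance process finishes migrating data.
while gluster volume remove-brick testvol host1:/bricks/b3 status; do
    sleep 1
done
```

Because the window is narrow, a single manual status query rarely triggers the crash; repeated polling, as in the loop above, raises the odds of landing inside it.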
The above steps are representative of when the issue can be seen, but they are not very helpful if you wish to automate this. The following link leads to the regression test, part of the GlusterFS regression test suite, that has hit this problem more often. This could help those interested in automation.
The required changes to fix this bug have not made it into glusterfs-3.7.1. This bug is now getting tracked for glusterfs-3.7.2.
http://review.gluster.org/#/c/10932/ has been merged.
Unfortunately, glusterfs-3.7.2 did not contain a code change associated with this bug report. This bug is now proposed as a blocker for glusterfs-3.7.3.
I think this bug has already been fixed in 3.7.1. Somehow the release notes didn't capture it; not sure why?
(In reply to Atin Mukherjee from comment #4)
> I think this bug has already been fixed in 3.7.1. Somehow the release notes
> didn't capture it; not sure why?
Because the BUG: tag in the patch refers to bug 1225318 and not this one.
*** This bug has been marked as a duplicate of bug 1225318 ***