Description of problem:
If the source brick is killed while a replace-brick operation is in progress, a subsequent replace-brick abort results in a hang of glusterd. Although glusterd appears to be in interruptible sleep ('S' state in ps output), one cannot attach gdb or strace to the glusterd process, and other gluster CLI commands fail as well. However, an strace attached to the glusterd process before the abort was attempted showed glusterd hung in the lsetxattr syscall. A statedump of the client (a maintenance mount) and the source brick revealed the setxattr call to be stuck in the pump translator. Code analysis with KP pointed the cause to the crawl operation not being restarted after the restart of the brick.

Version-Release number of selected component (if applicable):
8b6534031ab9b60da293e9c2ffb95141d714f973

How reproducible:
Consistently

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
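A minimal sketch of the reproduction and diagnosis flow described above, assuming a hypothetical volume test-vol with source brick host1:/bricks/b1 and destination brick host2:/bricks/b1 (names and paths are illustrative, not from the original report):

    # Start migrating data from the source brick to the destination brick
    gluster volume replace-brick test-vol host1:/bricks/b1 host2:/bricks/b1 start

    # Kill the source brick's glusterfsd process while the migration is
    # still in progress (find the PID via ps on host1)
    kill -KILL <pid-of-src-brick-glusterfsd>

    # Attempting to abort now hangs glusterd; further CLI commands also fail
    gluster volume replace-brick test-vol host1:/bricks/b1 host2:/bricks/b1 abort

    # Diagnosis: attach strace to glusterd *before* issuing the abort;
    # it shows glusterd blocked in lsetxattr
    strace -f -p <pid-of-glusterd>

    # Trigger a statedump of the maintenance mount (and likewise the source
    # brick process) to see the setxattr call stuck in the pump translator
    kill -USR1 <pid-of-glusterfs-client>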
*** Bug 787123 has been marked as a duplicate of this bug. ***
patch sent @ http://review.gluster.com/3264
*** Bug 818519 has been marked as a duplicate of this bug. ***
*** Bug 797729 has been marked as a duplicate of this bug. ***
CHANGE: http://review.gluster.org/3275 (glusterd: Made dst brick's port info available to all peers) merged in master by Vijay Bellur (vbellur)
A beta release for GlusterFS 3.6.0 has been released [1]. Please verify if the release solves this bug report for you. In case the glusterfs-3.6.0beta1 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update (possibly an "updates-testing" repository) infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailinglists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user