When both bricks of a 1x2 replica are on the same node, two shd threads attempt a full heal on the same file by taking non-blocking locks in the shd domain. One thread may acquire the lock on one brick while the second thread acquires it on the other brick. Consequently, neither thread obtains both locks and the heal is skipped for the file.
REVIEW: http://review.gluster.org/10530 (tests: Fix failures in basic/afr/data-self-heal.t) posted (#1) for review on master by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/10530 (tests: data-self-heal.t-create files from the mount point.) posted (#2) for review on master by Ravishankar N (ravishankar)
Not sure why BZ did not pick up the fact that the patch was merged. For some reason, the topic for the bug seems to be rfc. Closing the bug.
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/ [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user