The absence of self-heal is noticed only on unfs3-booster exports, not on FUSE-based exports. The problem was first seen on the storage platform with a fully loaded configuration, but it is reproducible even with a simple setup: two backends replicated over protocol/client, exported through unfs3-booster.
Here is a probable explanation for self-heal not happening. When the touch, i.e. the create operation, returns, a file handle is returned to the NFS client. At the moment this file handle is returned, the second node is down. Once the second node comes back up, an ls -lR is done on the NFS mount point. On this ls -lR, since the NFS client already has the file handle for the newly created file, it issues a GETATTR on that file handle. At unfsd, the file handle is translated directly into the path because it is already present in the fh-cache; this is followed by a stat on the file, not necessarily on the directory in which the file was created. Since a stat can be served even with one node down, the ls -lR succeeds. In the absence of a stat on the directory, self-heal does not get triggered.
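The mechanism above can be illustrated with a minimal sketch (this is not unfsd code; the fh-cache dictionary, the handle encoding, and the stat log are all stand-ins invented for illustration). It shows why a GETATTR served from the fh-cache touches only the file itself, so the parent directory is never stat()ed and replicate self-heal is never given a chance to fire:

```python
# Conceptual sketch, NOT unfsd source: models the fh-cache lookup path
# described above with a plain dict and a log of stat()ed paths.
fh_cache = {}

def nfs_create(path):
    """CREATE returns a file handle; unfsd caches the fh -> path mapping."""
    fh = len(fh_cache) + 1          # stand-in for a real opaque NFS handle
    fh_cache[fh] = path
    return fh

def nfs_getattr(fh, stat_log):
    """GETATTR on a cached handle resolves straight to the file path,
    so only the file itself is stat()ed, never its parent directory."""
    path = fh_cache[fh]             # fh-cache hit: no directory traversal
    stat_log.append(path)           # stat() issued on the file only
    return path

stat_log = []
fh = nfs_create("/mnt/nfs/dir/newfile")   # second node is down here
nfs_getattr(fh, stat_log)                 # ls -lR after the node returns

# The parent directory was never stat()ed, which is exactly the
# condition under which self-heal fails to trigger:
assert stat_log == ["/mnt/nfs/dir/newfile"]
assert "/mnt/nfs/dir" not in stat_log
```

Had the client resolved the name through a directory LOOKUP/READDIR instead of a cached handle, the directory would have been stat()ed and self-heal would have had its trigger.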
Reported by davide.damico: Hi, I have been following gluster development for a long time and I think it's a great project. The gluster storage is amazing, and today I was trying it out to understand whether it fills my needs. I created a mirrored volume and mounted the share over the NFS protocol on a FreeBSD machine. Everything is fine (except an initial "NFS stale file handle" message), but if I simulate a node going down by detaching the network cable, write a file, and then reattach the second node, I don't see the file I wrote during its down period. Am I missing anything? Thanks in advance, d.

==================================================

I can confirm that self-heal does not get triggered on the glusterfsd backend that was down when the file was touched by the user on the NFS mount point.
Closing this bug as there is not much reason to continue using booster with unfsd.