On a fresh lookup request, replicate performs the lookup across multiple subvolumes, so the inode number returned may differ from the ino returned by a previous lookup of the same file. This behaviour can break tools like unfs3, which depend on a stable inode number to map file handles to files. Right now, libglusterfsclient works around this by never pruning inodes, but that has its limits: memory consumption by the inode table grows without bound.

This is not a bug so much as a feature required by a narrow set of tools. I am fine with adding an option to replicate that, when set, forces it to always return the inode from one particular subvolume. Otherwise, it seems better to keep the current behaviour of round-robining or load-balancing fresh lookups among the subvolumes.
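To illustrate the breakage, here is a minimal sketch with hypothetical names (not unfs3's actual code): an NFS-style file handle derived from the inode number stops matching as soon as a second lookup for the same file is answered by a different subvolume.

    /* Sketch only: a handle keyed on (dev, ino), as unfs3-style servers do.
     * If two subvolumes report different backend inode numbers for the
     * same file, a handle built from the first lookup goes stale. */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/stat.h>

    struct nfs_fh {
        uint64_t dev;
        uint64_t ino;
    };

    static struct nfs_fh make_handle(const struct stat *st) {
        struct nfs_fh fh = { .dev = st->st_dev, .ino = st->st_ino };
        return fh;
    }

    static int handle_matches(const struct nfs_fh *fh, const struct stat *st) {
        return fh->dev == (uint64_t)st->st_dev &&
               fh->ino == (uint64_t)st->st_ino;
    }

    int main(void) {
        /* First lookup answered by subvolume A, second by subvolume B:
         * same file, but each subvolume has its own inode number. */
        struct stat lookup_a = { .st_dev = 1, .st_ino = 4242 };
        struct stat lookup_b = { .st_dev = 1, .st_ino = 9999 };

        struct nfs_fh fh = make_handle(&lookup_a);
        printf("second lookup %s\n",
               handle_matches(&fh, &lookup_b) ? "matches" : "is STALE");
        return 0;
    }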
Fix committed to the release-2.0 branch: http://git.savannah.gnu.org/cgit/gluster.git/commit/?h=release-2.0&id=b23c9fcc8a16b8c4a4b1814ff5035a18f03da0f4 Still pending on mainline. I'll defer changing the status until I've done some tests with unfs3 and libglusterfsclient.
PATCH: http://patches.gluster.com/patch/722 in master (Return inode number always from the first up subvolume in AFR.)
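For reference, a hedged sketch of the idea behind that patch; the names and structure here are assumptions for illustration, not the actual AFR code. On a fresh lookup, the inode number is taken from the first subvolume that is up, rather than from whichever child happened to answer first.

    /* Sketch only: pick the inode number reported by the first up child,
     * so repeated lookups return a stable ino regardless of which
     * subvolume serviced the request. Falls back to the answering
     * child's ino if no child is marked up. */
    #include <stdint.h>
    #include <stddef.h>

    struct subvolume {
        int      up;        /* nonzero if this child is reachable */
        uint64_t last_ino;  /* ino this child reported for the file */
    };

    static uint64_t afr_pick_ino(const struct subvolume *children, size_t n,
                                 uint64_t answering_ino) {
        for (size_t i = 0; i < n; i++) {
            if (children[i].up)
                return children[i].last_ino;
        }
        return answering_ino;
    }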
Also committed to master: http://git.savannah.gnu.org/cgit/gluster.git/commit/?id=161188e919968f1d782e857151f2f4dca1fdfc22
In recent tests using unfs3, booster, replicate, and distribute, I do not see any failures due to stale file handles. This is with version 0.5 of unfs3booster and the patches that will be part of the 2.0.5 release.