Description of problem:
======================
I hit this bug while working on different scenarios to verify bug 1437773 - "Undo pending xattrs only on the up bricks". (Note that this has NOTHING to do with the above fix, i.e. it is NOT a regression introduced by that fix.)

When we rename an existing file while one brick is down, and then rename the same old file again after the previously offline brick comes up and the online one goes down, a conservative merge during self-heal can leave both renamed entries on the storage. This means two directory entries for the same data. However, I don't see any storage-space impact, as both files have the same inode, meaning they are mimicking hard links. The problem is the inconvenience and confusion for end users, and the unnecessary hard links that get created.

This should be a day-1 bug.

Version-Release number of selected component (if applicable):
=========
3.8.4-28

How reproducible:
========
Always

Steps to Reproduce:
1. Create a 1x2 volume; b1 and b2 are the replicas.
2. Create file f1.
3. Bring down b1 and rename f1 -> fx1.
4. Bring down b2 and bring up b1.
5. Rename f1 -> fy1.
6. Bring up b2.
7. Wait for heal to complete.
(A CLI sketch of these steps follows after this report.)

Actual results:
On the mount we can see both fx1 and fy1, and both obviously have the same gfid.

Expected results:
========
Need to handle such rename cases.
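
For reference, the numbered steps above map onto gluster CLI commands roughly as follows. This is a minimal, hedged sketch, not a verified test: the single-node setup, volume name "testvol", brick paths /bricks/b1 and /bricks/b2, and the mount point /mnt/testvol are all hypothetical, and killing brick processes via pkill / reviving them via 'volume start force' is just one convenient way to take replicas offline, not necessarily what the original reporter did.

#!/bin/bash
set -x
HOST=$(hostname)
VOL=testvol
MNT=/mnt/$VOL

# Step 1: create and start a 1x2 replica volume, then FUSE-mount it.
# The self-heal daemon is turned off so heal runs only when we ask for it.
gluster volume create $VOL replica 2 $HOST:/bricks/b1 $HOST:/bricks/b2 force
gluster volume set $VOL cluster.self-heal-daemon off
gluster volume start $VOL
mkdir -p $MNT
mount -t glusterfs $HOST:/$VOL $MNT

# Step 2: create file f1.
touch $MNT/f1

# Step 3: bring down b1 (kill its glusterfsd process), then rename
# f1 -> fx1; the rename is recorded only on b2.
pkill -f "glusterfsd.*bricks/b1"
mv $MNT/f1 $MNT/fx1

# Step 4: bring b1 back up, then bring down b2. 'volume start force'
# restarts any brick that is down, which is why b1 must be revived
# before b2 is killed; there is a small window where both bricks are
# up, but with the self-heal daemon off and no lookups in between, no
# heal should happen.
gluster volume start $VOL force
pkill -f "glusterfsd.*bricks/b2"
sleep 5   # give the client time to reconnect to the revived b1

# Step 5: rename f1 -> fy1; b1 never saw the first rename, so f1 is
# still visible on the mount and this rename is recorded only on b1.
mv $MNT/f1 $MNT/fy1

# Steps 6-7: bring b2 back, re-enable the self-heal daemon, trigger
# heal (the conservative merge) and wait for it to finish.
gluster volume start $VOL force
gluster volume set $VOL cluster.self-heal-daemon on
gluster volume heal $VOL
sleep 10   # crude wait; 'gluster volume heal $VOL info' should show no entries

# Actual result: both names exist on the mount and share one gfid.
ls -li $MNT/fx1 $MNT/fy1                           # same inode number
getfattr -d -m . -e hex /bricks/b1/fx1 | grep trusted.gfid
getfattr -d -m . -e hex /bricks/b1/fy1 | grep trusted.gfid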
The same behaviour can be seen even with a 1x3 volume, but obviously only with client quorum set to none.
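For the 1x3 case, a sketch of the quorum setting referred to above (the volume name is hypothetical):

# cluster.quorum-type 'none' disables client quorum, so writes are
# still allowed when only one of the three replicas is up.
gluster volume set testvol cluster.quorum-type none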
Closing based on comment #4, since the impact is just 'data gain' (as opposed to data loss) due to the extra hard links being created as part of the self-heal process.