Description of problem:
=======================
When files/directories are deleted while one of the bricks in a replicate volume is offline, their entries in the indices/xattrop directory are not removed after the self-heal completes.

Version-Release number of selected component (if applicable):
=============================================================
root@rhsauto015 [10:17:21]> rpm -qa | grep gluster
glusterfs-3.3.0.5rhs-40.el6rhs.x86_64

root@rhsauto015 [10:17:27]> gluster --version
glusterfs 3.3.0.5rhs built on Dec 19 2012 02:10:50
Repository revision: v3.3.0-141-g6e3efac

How reproducible:
=================
Often

Steps to Reproduce:
===================
1. Create a replicate volume (1 x 2).
2. Set self-heal-daemon to off and background-self-heal-count to 0.
3. Start the volume.
4. Create a FUSE mount.
5. From the mount, create files and directories.
6. Bring down brick "brick1".
7. Delete all the directories and files created in step 5.
8. Ensure the "indices/xattrop" directory of "brick2" lists the entries to be removed from brick1 once it is back online.
9. Bring back brick "brick1".
10. From the mount point, execute: "find . | xargs stat".
11. After the self-heal is complete, check the "indices/xattrop" directory of "brick2".

(See the command sketch at the end of this report for one way to run these steps.)

Actual results:
===============
Even though the self-heal is complete, the "indices/xattrop" directory on "brick2" still contains stale entries for the files/directories that were deleted.

Expected results:
=================
The stale entries should be removed from the "indices/xattrop" directory.
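For reference, here is a minimal command-line sketch of the steps above. The hostnames (server1, server2), volume name (repvol), brick paths (/bricks/brick1, /bricks/brick2), and mount point (/mnt/repvol) are placeholders, not values from the original report:

gluster volume create repvol replica 2 server1:/bricks/brick1 server2:/bricks/brick2   # step 1
gluster volume set repvol cluster.self-heal-daemon off                                 # step 2
gluster volume set repvol cluster.background-self-heal-count 0
gluster volume start repvol                                                            # step 3
mount -t glusterfs server1:/repvol /mnt/repvol                                         # step 4
mkdir /mnt/repvol/dir1 && touch /mnt/repvol/dir1/file{1..10}                           # step 5
kill <pid-of-brick1-glusterfsd>                 # step 6: PID from "gluster volume status repvol"
rm -rf /mnt/repvol/dir1                                                                # step 7
ls /bricks/brick2/.glusterfs/indices/xattrop    # step 8: run on server2; entries are listed
gluster volume start repvol force                                                      # step 9
cd /mnt/repvol && find . | xargs stat                                                  # step 10
ls /bricks/brick2/.glusterfs/indices/xattrop    # step 11: run on server2 after heal completes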
Found this bug in afr-testday by Rachana and Shruti. Thank you, Rachana and Shruti.
Per Feb-13 bug triage meeting, targeting for 2.1.0.
This is going to be addressed by the new changelog translator.
This issue happens only when the self-heal daemon is set to off. One way to fix it would be to keep the crawl threads running in the self-heal daemon even when self-heal-daemon is set to off, but have them only attempt removal of stale entries in that case (see the workaround sketch below).
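Until such a fix lands, one possible workaround, inferred from the comment above and not verified in this report, is to temporarily re-enable the daemon and let its index crawl prune the stale entries (volume name "repvol" and brick path are placeholders):

gluster volume set repvol cluster.self-heal-daemon on    # re-enable the daemon temporarily
gluster volume heal repvol                               # trigger a heal so the index crawl runs
ls /bricks/brick2/.glusterfs/indices/xattrop             # verify the stale entries are gone
gluster volume set repvol cluster.self-heal-daemon off   # restore the original setting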
Adding the 3.0 flag and removing 2.1.z.
The product version of Red Hat Storage on which this issue was reported has reached End Of Life (EOL) [1], hence this bug report is being closed. If the issue is still observed on a current version of Red Hat Storage, please file a new bug report on the current version. [1] https://rhn.redhat.com/errata/RHSA-2014-0821.html