+++ This bug was initially created as a clone of Bug #1163561 +++

Description of problem:
I created a disperse volume (disperse 3, redundancy 1), copied some files and directories to the mountpoint, then killed one child glusterfsd process. I deleted all the files and directories from the mountpoint, then executed "gluster volume start test force" to restart the killed child. The restarted child cannot clean up the remaining files and directories that were already deleted from the mountpoint. I can still create/read/write/delete files and directories as usual in the mountpoint, even with the same names as the leftover entries, but the stale data still remains on the brick.

Version-Release number of selected component (if applicable):
3.6.1

How reproducible:

Steps to Reproduce:
1. Create a disperse volume:
Volume Name: test
Type: Distributed-Disperse
Volume ID: 1841beb3-001d-45b8-9d6c-6c34cfbfd6d0
Status: Started
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: 10.10.21.50:/sda
Brick2: 10.10.21.52:/sda
Brick3: 10.10.21.50:/sdb
Brick4: 10.10.21.52:/sdb
Brick5: 10.10.21.50:/sdc
Brick6: 10.10.21.52:/sdc
2. Copy many files and directories to the mountpoint.
3. Kill the Brick1 glusterfsd process.
4. Execute "rm -rvf /mountpoint/*".
5. Execute "gluster volume start test force" to restart Brick1.
6. Create/read/write/delete files and directories as usual in the mountpoint, even with the same names as the leftover entries; Brick1's dirty data still remains.

Actual results:

Expected results:

Additional info:
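The steps above can be sketched as a shell session. Hostnames, brick paths, and the volume name "test" are taken from the report; the mountpoint path /mnt/test and the way the Brick1 PID is located are assumptions, and the whole sketch requires a live GlusterFS 3.6.1 cluster to run:

```shell
# Reproduction sketch (assumed mountpoint: /mnt/test).
MNT=/mnt/test

# Step 2: populate the mounted disperse volume with files and directories.
mkdir -p "$MNT/dir1/dir2"
for i in 1 2 3; do
    dd if=/dev/urandom of="$MNT/dir1/file$i" bs=1M count=1
done

# Step 3: kill the glusterfsd serving Brick1 (10.10.21.50:/sda).
# Assumes the PID is the last column of "gluster volume status" output.
kill "$(gluster volume status test | awk '/10.10.21.50:\/sda/ {print $NF}')"

# Step 4: delete everything through the mountpoint while Brick1 is down.
rm -rvf "$MNT"/*

# Step 5: bring Brick1 back up.
gluster volume start test force

# Step 6: the deleted entries should eventually be healed away, but on
# 3.6.x they remain in Brick1's backend directory (run on 10.10.21.50):
ls -lR /sda
```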
Cloned for 3.6.2 (from master)
To fix this bug we had to bring in some version-incompatible changes and a directory self-heal implementation, so it can't be backported to 3.6.x. Please feel free to upgrade to 3.7.x, where this bug is fixed: https://bugzilla.redhat.com/show_bug.cgi?id=1163561

Pranith