I have a 2x2 distribute-replicate setup, mounted via FUSE. I started a replace-brick operation, then paused it, and then aborted it. Later I started the replace-brick again on the same bricks. When it succeeded, I checked the status and it reported that migration was complete. However, looking at the back end, not all files had been self-healed yet. After some time, all files did get self-healed.
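For reference, the sequence of operations described above would look roughly like the sketch below. The volume and brick names are placeholders, not the ones from the original setup:

    # start the migration, then pause and abort it
    gluster volume replace-brick testvol server1:/bricks/b1 server2:/bricks/b2 start
    gluster volume replace-brick testvol server1:/bricks/b1 server2:/bricks/b2 pause
    gluster volume replace-brick testvol server1:/bricks/b1 server2:/bricks/b2 abort

    # start it again on the same brick pair and poll until it reports completion
    gluster volume replace-brick testvol server1:/bricks/b1 server2:/bricks/b2 start
    gluster volume replace-brick testvol server1:/bricks/b1 server2:/bricks/b2 status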
I followed the same steps with 3.2.4 on RHEL 6.1 and am observing the same behavior.
Need to check if it's happening on 3.3.0.
As per comment #2, need to check the behavior in the glusterfs-3.4.0qa releases.
Not 3.4.0qa, but this reproduces on 3.3.2qa3:

    stor1:~/ gluster volume replace-brick vol01 stor1:/brick/e stor3:/brick/b status
    Number of files migrated = 3385012        Migration complete

    stor1:~/ df -h /brick/e
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sde1             1.8T  1.5T  372G  80% /brick/e

    stor3:~/ df -h /brick/b
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sdb1             1.8T  122G  1.7T   7% /brick/b

Clearly over a TiB of data is missing.
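One way to confirm whether the missing data is merely pending self-heal (rather than lost) would be to query the heal backlog on the volume. This is a sketch assuming the 3.3.x CLI, where the "heal info" subcommand is available; the volume name is taken from the output above:

    # list entries that still need to be healed onto the replacement brick
    gluster volume heal vol01 info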