
Bug 765327 (GLUSTER-3595)

Summary: Replace-brick status says migration complete even though data self-heal hasn't been completed.
Product: [Community] GlusterFS
Component: glusterd
Reporter: Vijaykumar <vijaykumar>
Assignee: krishnan parthasarathi <kparthas>
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: medium
Version: 3.3-beta
CC: amarts, gluster-bugs, hans, nsathyan
Keywords: Triaged
Hardware: x86_64
OS: Linux
Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Mount Type: fuse
Last Closed: 2013-07-24 13:42:40 EDT
Bug Blocks: 854626

Description Vijaykumar 2011-09-20 08:15:24 EDT
I have a 2x2 distributed-replicate setup, mounted via FUSE.
I started replacing a brick, then paused and aborted the operation. Later I started replace-brick again on the same bricks. When it succeeded, I checked the status and it reported the migration as complete, but looking at the back end, not all files had been self-healed. After some time, all files did get self-healed.
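For reference, a minimal sketch of the command sequence described above, using the replace-brick CLI of that GlusterFS generation; the volume and brick names are borrowed from the output in comment 4 and stand in for the reporter's actual setup:

# assumed names: volume vol01, source brick stor1:/brick/e, destination brick stor3:/brick/b
gluster volume replace-brick vol01 stor1:/brick/e stor3:/brick/b start
gluster volume replace-brick vol01 stor1:/brick/e stor3:/brick/b pause
gluster volume replace-brick vol01 stor1:/brick/e stor3:/brick/b abort
# restart migration on the same brick pair, then poll until it reports completion
gluster volume replace-brick vol01 stor1:/brick/e stor3:/brick/b start
gluster volume replace-brick vol01 stor1:/brick/e stor3:/brick/b status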
Comment 1 Vijaykumar 2011-10-17 05:42:34 EDT
I tried the same steps with 3.2.4 on RHEL 6.1 and am observing the same behavior.
Comment 2 Amar Tumballi 2012-07-11 02:43:18 EDT
Need to check if it's happening on 3.3.0.
Comment 3 Amar Tumballi 2013-01-11 02:57:03 EST
As per comment #2, need to check the behavior in the glusterfs-3.4.0qa releases.
Comment 4 hans 2013-06-07 04:46:45 EDT
Not 3.4.0qa, but this reproduces on 3.3.2qa3:

stor1:~/ gluster volume replace-brick vol01 stor1:/brick/e stor3:/brick/b status
Number of files migrated = 3385012        Migration complete

stor1:~/ df -h /brick/e
Filesystem            Size  Used Avail Use% Mounted on
/dev/sde1             1.8T  1.5T  372G  80% /brick/e

stor3:~/ df -h /brick/b
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb1             1.8T  122G  1.7T   7% /brick/b

Clearly over a TiB of data is missing.
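
For completeness, a couple of ways to cross-check the back-end heal state instead of trusting the replace-brick status output (a sketch only; the file path below is hypothetical, and "gluster volume heal ... info" applies to replicated volumes and is available from 3.3 onward):

# list entries glusterd still considers in need of heal
gluster volume heal vol01 info
# inspect AFR changelog xattrs directly on a brick file; non-zero
# trusted.afr.* counters indicate pending heals (path is hypothetical)
getfattr -d -m trusted.afr -e hex /brick/b/path/to/file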