Bug 765327 - (GLUSTER-3595) Replace-brick status says migration complete even though data-self heal hasn't been completed.
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.3-beta
Hardware: x86_64 Linux
Priority: medium
Severity: medium
Assigned To: krishnan parthasarathi
Keywords: Triaged
Depends On:
Blocks: 854626
Reported: 2011-09-20 08:15 EDT by Vijaykumar
Modified: 2015-11-03 18:03 EST
CC List: 4 users

See Also:
Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 854626
Environment:
Last Closed: 2013-07-24 13:42:40 EDT
Type: ---
Regression: ---
Mount Type: fuse
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments: None
Description Vijaykumar 2011-09-20 08:15:24 EDT
I have a 2x2 distributed-replicate setup, mounted via FUSE.
I started replacing a brick, then paused and subsequently aborted the operation. Later I started replace-brick again on the same bricks. When it succeeded, I checked the status and it said migration was complete. But looking at the back end, not all files had been data-self-healed. After some time, all files did get self-healed.
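
For reference, the pause/abort/restart sequence described above maps roughly to the following CLI steps (a sketch only; the volume name "testvol" and the server/brick paths are illustrative, not the reporter's actual setup):

# 2x2 distributed-replicate volume, mounted over FUSE (illustrative names)
gluster volume create testvol replica 2 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3 server4:/bricks/b4
gluster volume start testvol
mount -t glusterfs server1:/testvol /mnt/testvol

# replace-brick started, paused, aborted, then started again on the same bricks
gluster volume replace-brick testvol server1:/bricks/b1 server5:/bricks/b5 start
gluster volume replace-brick testvol server1:/bricks/b1 server5:/bricks/b5 pause
gluster volume replace-brick testvol server1:/bricks/b1 server5:/bricks/b5 abort
gluster volume replace-brick testvol server1:/bricks/b1 server5:/bricks/b5 start

# status reports "Migration complete" before all files are actually healed
gluster volume replace-brick testvol server1:/bricks/b1 server5:/bricks/b5 status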
Comment 1 Vijaykumar 2011-10-17 05:42:34 EDT
I followed the same steps with 3.2.4 on RHEL-6.1 and I am observing the same behavior.
Comment 2 Amar Tumballi 2012-07-11 02:43:18 EDT
Need to check if it's happening on 3.3.0.
Comment 3 Amar Tumballi 2013-01-11 02:57:03 EST
As per comment #2, need to check the behavior in the glusterfs-3.4.0qa releases.
Comment 4 hans 2013-06-07 04:46:45 EDT
Not on 3.4.0qa, but this reproduces on 3.3.2qa3:

stor1:~/ gluster volume replace-brick vol01 stor1:/brick/e stor3:/brick/b status
Number of files migrated = 3385012        Migration complete

stor1:~/ df -h /brick/e
Filesystem            Size  Used Avail Use% Mounted on
/dev/sde1             1.8T  1.5T  372G  80% /brick/e

stor3:~/ df -h /brick/b
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb1             1.8T  122G  1.7T   7% /brick/b

Clearly over a TiB of data is missing.
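
A rough way to cross-check the "Migration complete" status against the destination brick, beyond df, is to compare file counts on the two bricks (a sketch, assuming a 3.3-style brick layout where the .glusterfs metadata directory should be excluded):

# count regular files on the source and destination bricks, skipping .glusterfs
stor1:~/ find /brick/e -name .glusterfs -prune -o -type f -print | wc -l
stor3:~/ find /brick/b -name .glusterfs -prune -o -type f -print | wc -l

Once migration has truly finished, the two counts should be close to each other and to the ~3.38 million files the status output reports as migrated.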
