Bug 860548 - "gluster volume heal <vol_name> info healed" command output should list the files that are really healed and shouldn't list the files that were attempted to self-heal
Status: CLOSED DUPLICATE of bug 863068
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Assigned To: Vijay Bellur
Keywords: FutureFeature
Depends On:
Reported: 2012-09-26 02:34 EDT by spandura
Modified: 2012-11-23 09:36 EST (History)
CC: 4 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2012-11-23 09:36:49 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description spandura 2012-09-26 02:34:38 EDT
Description of problem:
The "gluster volume heal <vol_name> info healed" command currently lists both the files on which self-heal was merely attempted and the files that were actually self-healed by the self-heal daemon process.

However, not all the files in the ".glusterfs/indices/xattrop" directory require self-heal: entries are written to the indices directory during writes from the mount point. By the time the self-heal daemon examines an entry in the indices directory and attempts to self-heal it, there may be nothing left to heal at all. Yet these files are still listed in the "info healed" output, which is confusing. It would therefore be more appropriate to list only the files that actually needed self-heal and were synced to the bricks.
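The distinction above can be illustrated with a small simulation. This is plain Python, not GlusterFS source; the entry tuples and the `pending_ops` field are assumptions made purely for illustration of the reporting logic being requested:

```python
# Hypothetical simulation (not actual GlusterFS code) of why
# "info healed" over-reports: every write adds an entry to the
# xattrop index, but by the time the self-heal daemon processes
# an entry there may be no pending work left on it.

def run_self_heal(index_entries):
    """Process index entries; return (attempted, actually_healed).

    Each entry is (path, pending_ops), where pending_ops > 0 means
    the replicas still differ and real healing work is needed.
    """
    attempted = []
    healed = []
    for path, pending_ops in index_entries:
        attempted.append(path)      # the daemon examines every entry
        if pending_ops > 0:
            healed.append(path)     # data was really synced to a brick
    return attempted, healed

# Entries left in .glusterfs/indices/xattrop: only one still has
# pending operations by the time the daemon gets to it.
entries = [
    ("/dir/file.4", 2),   # genuinely out of sync -> really healed
    ("/dir/file.6", 0),   # write already completed -> nothing to do
]
attempted, healed = run_self_heal(entries)
print(attempted)  # current "info healed" behaviour: lists both
print(healed)     # requested behaviour: lists only the real heal
```

Under this sketch, the current output corresponds to `attempted`, while the enhancement requested here is for "info healed" to report only `healed`.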

Version-Release number of selected component (if applicable):
glusterfs 3.3.0rhs built on Sep 10 2012 00:49:11

Steps to Reproduce:
1. Create a distribute-replicate volume (2x2: 4 servers with 1 brick on each server).
2. Start the volume.
3. Create a FUSE mount, then create files and directories from the mount point.
4. Bring down brick2 of replicate-0.
5. Create new files and directories from the mount point.
6. Bring back brick2 of replicate-0.
7. Wait for the self-heal daemon to start self-heal (within 10 minutes), or execute "gluster volume heal <vol_name>" to start it immediately.
8. Execute: "gluster volume heal <vol_name> info healed"
Actual results:
The output lists the files that were healed to brick2 of replicate-0, but also shows a file as healed to brick4 of replicate-1.

[root@darrel arequal]# gluster volume heal vol-abcd info healed
Heal operation on volume vol-abcd has been successful

Number of entries: 1
at                    path on brick
2012-09-25 05:08:13 /test_gfid_self_heal/l1_dir.2/l2_dir.5/l3_dir.4/file.4

Number of entries: 0

Number of entries: 0

Number of entries: 1
at                    path on brick
2012-09-25 05:08:10 /test_gfid_self_heal/l1_dir.2/l2_dir.5/l3_dir.2/file.6

The file "/test_gfid_self_heal/l1_dir.2/l2_dir.5/l3_dir.2/file.6" was not actually self-healed or synced from brick3 to brick4, yet it is still shown in the "info healed" output.
Comment 2 vsomyaju 2012-11-23 09:36:49 EST

*** This bug has been marked as a duplicate of bug 863068 ***
