Bug 1015990 - Implementation of command to get the count of entries to be healed for each brick
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: replicate
Version: pre-release
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assigned To: vsomyaju
Depends On:
Blocks: 1286820 1522729 1100398
Reported: 2013-10-07 04:35 EDT by vsomyaju
Modified: 2017-12-06 05:30 EST
CC List: 2 users

See Also:
Fixed In Version: glusterfs-3.5.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-04-17 07:49:11 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description vsomyaju 2013-10-07 04:35:15 EDT
Description of problem:

cluster/afr: [Feature] Command implementation to get heal-count
    
    Currently, to know the number of files to be healed, the user
    either has to go to the backend and check the number of entries
    present in the indices/xattrop directory, or run the
    gluster volume heal vol-name info command. If a volume consists
    of a large number of bricks, going to each backend and counting
    the entries is time-consuming; and if the number of entries in
    the indices/xattrop directory is very large, the info command
    also takes a long time.
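    
    For illustration, the manual backend method amounts to something
    like the following sketch (the brick paths and the loop are
    assumptions for this example, not part of the fix):
    
    # Hypothetical sketch: on each server, count the entries in every
    # local brick's indices/xattrop directory (example brick paths).
    for brick in /home/user/2ty /home/user/22iu; do
        echo "$brick: $(ls "$brick/.glusterfs/indices/xattrop" | wc -l)"
    done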
    
    So, as a feature, a new command is implemented.
    
    Command 1: gluster volume heal vn statistics heal-count
    This command gets the number of entries present in
    every brick of a volume. The output displays only the
    entries count.
    
    Command 2: gluster volume heal vn statistics heal-count
               replica 192.168.122.1:/home/user/brickname
    
    This form is used when we are concerned with just one
    replica: providing any one brick of a replica gets the
    number of entries to be healed for that replica only.
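    
    For a concrete invocation, using the volume and brick names from
    the example below (volume3 and 192.168.122.1:/home/user/2ty):
    
    gluster volume heal volume3 statistics heal-count
    gluster volume heal volume3 statistics heal-count \
            replica 192.168.122.1:/home/user/2ty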
    
    Example:
    Replicate volume with replica count 2.
    
    Backend status:
    --------------
    [root@dhcp-0-17 xattrop]# ls -lia | wc -l
    1918
    
    NOTE: Out of 1918, 2 entries are <xattrop-gfid> dummy
    entries, so the actual number of entries to be healed
    is 1916.
    
    [root@dhcp-0-17 xattrop]# pwd
    /home/user/2ty/.glusterfs/indices/xattrop
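    
    A hypothetical one-liner for the same count that excludes the
    dummy base entries (the xattrop- name pattern is an assumption
    based on the note above):
    
    ls /home/user/2ty/.glusterfs/indices/xattrop | grep -vc '^xattrop-'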
    
    Command output:
    --------------
    Gathering count of entries to be healed on volume volume3 has been successful
    
    Brick 192.168.122.1:/home/user/22iu
    Status: Brick is Not connected
    Entries count is not available
    
    Brick 192.168.122.1:/home/user/2ty
    Number of entries: 1916

Comment 1 Anand Avati 2013-10-07 04:36:37 EDT
REVIEW: http://review.gluster.org/6044 (cluster/afr: [Feature] Command implementation to get heal-count) posted (#1) for review on master by venkatesh somyajulu (vsomyaju@redhat.com)
Comment 2 Anand Avati 2013-10-11 11:24:07 EDT
REVIEW: http://review.gluster.org/6082 (test-scripts: test scripts for statistics command) posted (#1) for review on master by venkatesh somyajulu (vsomyaju@redhat.com)
Comment 3 Anand Avati 2013-10-14 15:02:55 EDT
REVIEW: http://review.gluster.org/6044 (cluster/afr: [Feature] Command implementation to get heal-count) posted (#2) for review on master by Anand Avati (avati@redhat.com)
Comment 4 Anand Avati 2013-10-14 17:42:11 EDT
COMMIT: http://review.gluster.org/6044 committed in master by Anand Avati (avati@redhat.com) 
------
commit 75caba63714c7f7f9ab810937dae69a1a28ece53
Author: Venkatesh Somyajulu <vsomyaju@redhat.com>
Date:   Mon Oct 7 13:47:47 2013 +0530

    cluster/afr: [Feature] Command implementation to get heal-count
    
    Currently, to know the number of files to be healed, the user
    either has to go to the backend and check the number of entries
    present in the indices/xattrop directory, or run the
    gluster volume heal vol-name info command. If a volume consists
    of a large number of bricks, going to each backend and counting
    the entries is time-consuming; and if the number of entries in
    the indices/xattrop directory is very large, the info command
    also takes a long time.
    
    So, as a feature, a new command is implemented.
    
    Command 1: gluster volume heal vn statistics heal-count
    This command gets the number of entries present in
    every brick of a volume. The output displays only the
    entries count.
    
    Command 2: gluster volume heal vn statistics heal-count
               replica 192.168.122.1:/home/user/brickname
    
    This form is used when we are concerned with just one
    replica: providing any one brick of a replica gets the
    number of entries to be healed for that replica only.
    
    Example:
    Replicate volume with replica count 2.
    
    Backend status:
    --------------
    [root@dhcp-0-17 xattrop]# ls -lia | wc -l
    1918
    
    NOTE: Out of 1918, 2 entries are <xattrop-gfid> dummy
    entries, so the actual number of entries to be healed
    is 1916.
    
    [root@dhcp-0-17 xattrop]# pwd
    /home/user/2ty/.glusterfs/indices/xattrop
    
    Command output:
    --------------
    Gathering count of entries to be healed on volume volume3 has been successful
    
    Brick 192.168.122.1:/home/user/22iu
    Status: Brick is Not connected
    Entries count is not available
    
    Brick 192.168.122.1:/home/user/2ty
    Number of entries: 1916
    
    Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
    BUG: 1015990
    Signed-off-by: Venkatesh Somyajulu <vsomyaju@redhat.com>
    Reviewed-on: http://review.gluster.org/6044
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
Comment 5 Niels de Vos 2014-04-17 07:49:11 EDT
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
