Bug 1015990 - Implementation of command to get the count of entries to be healed for each brick
Summary: Implementation of command to get the count of entries to be healed for each brick
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: pre-release
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: vsomyaju
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1100398 1286820 1522729 1531931
 
Reported: 2013-10-07 08:35 UTC by vsomyaju
Modified: 2018-01-06 18:15 UTC
CC List: 2 users

Fixed In Version: glusterfs-3.5.0
Clone Of:
Environment:
Last Closed: 2014-04-17 11:49:11 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description vsomyaju 2013-10-07 08:35:15 UTC
Description of problem:

cluster/afr: [Feature] Command implementation to get heal-count
    
    Currently, to know the number of files to be healed, the user
    either has to go to the backend and check the number of entries
    present in the indices/xattrop directory, or run the
    "gluster volume heal vol-name info" command. If a volume consists
    of a large number of bricks, going to each backend and counting
    the entries is time-consuming, and if the number of entries in
    the indices/xattrop directory is very large, the info command
    also takes a long time.
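
    For illustration, the manual backend approach described above could
    be scripted roughly as follows (a minimal sketch; the brick paths
    are taken from the example further down and would need adjusting
    for a real volume):

        #!/bin/sh
        # Count index entries per brick: each entry in indices/xattrop marks a
        # file that may need healing, while <xattrop-gfid> base entries are
        # dummy entries and are excluded from the count.
        for brick in /home/user/2ty /home/user/22iu; do
            printf '%s: ' "$brick"
            ls "$brick/.glusterfs/indices/xattrop" | grep -cv '^xattrop-'
        done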
    
    So, as a feature, a new command has been implemented.
    
    Command 1: gluster volume heal vn statistics heal-count
    This command gets the number of entries present in every
    brick of a volume. The output displays only the entry
    count.
    
    Command 2: gluster volume heal vn statistics heal-count
               replica 192.168.122.1:/home/user/brickname
    
               Here, if we are concerned with just one replica,
    providing any one brick of that replica will get the number
    of entries to be healed for that replica only.
    
    Example:
    Replicate volume with replica count 2.
    
    Backend status:
    --------------
    [root@dhcp-0-17 xattrop]# ls -lia | wc -l
    1918
    
    NOTE: Out of 1918, 2 entries are <xattrop-gfid> dummy
    entries, so the actual number of entries to be healed is
    1916.
    
    [root@dhcp-0-17 xattrop]# pwd
    /home/user/2ty/.glusterfs/indices/xattrop
    
    Command output:
    --------------
    Gathering count of entries to be healed on volume volume3 has been successful
    
    Brick 192.168.122.1:/home/user/22iu
    Status: Brick is Not connected
    Entries count is not available
    
    Brick 192.168.122.1:/home/user/2ty
    Number of entries: 1916
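
    As a quick way to consume the new command's output, the per-brick
    counts could be totalled with a small pipeline (a sketch based on
    the sample output above; the volume name is the one from that
    sample):

        # Sum the "Number of entries" lines across all bricks of the volume.
        gluster volume heal volume3 statistics heal-count \
            | awk -F': ' '/Number of entries/ {sum += $2} END {print "Total:", sum}'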

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Anand Avati 2013-10-07 08:36:37 UTC
REVIEW: http://review.gluster.org/6044 (cluster/afr: [Feature] Command implementation to get heal-count) posted (#1) for review on master by venkatesh somyajulu (vsomyaju)

Comment 2 Anand Avati 2013-10-11 15:24:07 UTC
REVIEW: http://review.gluster.org/6082 (test-scripts: test scripts for statistics command) posted (#1) for review on master by venkatesh somyajulu (vsomyaju)

Comment 3 Anand Avati 2013-10-14 19:02:55 UTC
REVIEW: http://review.gluster.org/6044 (cluster/afr: [Feature] Command implementation to get heal-count) posted (#2) for review on master by Anand Avati (avati)

Comment 4 Anand Avati 2013-10-14 21:42:11 UTC
COMMIT: http://review.gluster.org/6044 committed in master by Anand Avati (avati) 
------
commit 75caba63714c7f7f9ab810937dae69a1a28ece53
Author: Venkatesh Somyajulu <vsomyaju>
Date:   Mon Oct 7 13:47:47 2013 +0530

    cluster/afr: [Feature] Command implementation to get heal-count
    
    Currently, to know the number of files to be healed, the user
    either has to go to the backend and check the number of entries
    present in the indices/xattrop directory, or run the
    "gluster volume heal vol-name info" command. If a volume consists
    of a large number of bricks, going to each backend and counting
    the entries is time-consuming, and if the number of entries in
    the indices/xattrop directory is very large, the info command
    also takes a long time.
    
    So, as a feature, a new command has been implemented.
    
    Command 1: gluster volume heal vn statistics heal-count
    This command gets the number of entries present in every
    brick of a volume. The output displays only the entry
    count.
    
    Command 2: gluster volume heal vn statistics heal-count
               replica 192.168.122.1:/home/user/brickname
    
               Here, if we are concerned with just one replica,
    providing any one brick of that replica will get the number
    of entries to be healed for that replica only.
    
    Example:
    Replicate volume with replica count 2.
    
    Backend status:
    --------------
    [root@dhcp-0-17 xattrop]# ls -lia | wc -l
    1918
    
    NOTE: Out of 1918, 2 entries are <xattrop-gfid> dummy
    entries, so the actual number of entries to be healed is
    1916.
    
    [root@dhcp-0-17 xattrop]# pwd
    /home/user/2ty/.glusterfs/indices/xattrop
    
    Command output:
    --------------
    Gathering count of entries to be healed on volume volume3 has been successful
    
    Brick 192.168.122.1:/home/user/22iu
    Status: Brick is Not connected
    Entries count is not available
    
    Brick 192.168.122.1:/home/user/2ty
    Number of entries: 1916
    
    Change-Id: I72452f3de50502dc898076ec74d434d9e77fd290
    BUG: 1015990
    Signed-off-by: Venkatesh Somyajulu <vsomyaju>
    Reviewed-on: http://review.gluster.org/6044
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Anand Avati <avati>

Comment 5 Niels de Vos 2014-04-17 11:49:11 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

