Bug 1325792 - "gluster vol heal test statistics heal-count replica" does not seem to work
Summary: "gluster vol heal test statistics heal-count replica" does not seem to work
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Mohit Agrawal
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-11 08:28 UTC by jiademing.dd
Modified: 2017-03-06 17:20 UTC
CC List: 5 users

Fixed In Version: glusterfs-3.10.0
Clone Of:
Environment:
Last Closed: 2017-03-06 17:20:01 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments:

Description jiademing.dd 2016-04-11 08:28:18 UTC
Description of problem:
I created a replica 2 volume on one node, like this:
Volume Name: test
Type: Distributed-Replicate
Volume ID: 7eca4759-2ffd-4970-aab8-804bb4daaca1
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: node-1:/disk1
Brick2: node-1:/disk2
Brick3: node-1:/disk3
Brick4: node-1:/disk4
Brick5: node-1:/disk5
Brick6: node-1:/disk6
Brick7: node-1:/disk7
Brick8: node-1:/disk8
Options Reconfigured:
performance.readdir-ahead: on
transport.address-family: inet

When I run "gluster vol heal test statistics heal-count replica node-1:/disk[1..8]" (i.e. any one of the eight bricks), only the first set of results (test-replica-0) is shown:

root@node-1:~/coding/glusterfs# gluster vol heal test statistics heal-count replica node-1:/disk5
Gathering count of entries to be healed per replica on volume test has been successful 

Brick node-1:/disk1
Number of entries: 0

Brick node-1:/disk2
Number of entries: 0

Brick node-1:/disk3
No gathered input for this brick

Brick node-1:/disk4
No gathered input for this brick

Brick node-1:/disk5
No gathered input for this brick

Brick node-1:/disk6
No gathered input for this brick

Brick node-1:/disk7
No gathered input for this brick

Brick node-1:/disk8
No gathered input for this brick

Is this a bug?

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Always (the first replica pair is reported every time).

Steps to Reproduce:
1. Create a 4 x 2 distributed-replicate volume (replica 2) on one node.
2. Run "gluster vol heal <volname> statistics heal-count replica <brick>" with a brick that is not in the first replica pair (e.g. node-1:/disk5).
3. Check which bricks the entry counts are reported for.

Actual results:
Entry counts are printed only for the first replica pair (test-replica-0, i.e. disk1 and disk2); every other brick shows "No gathered input for this brick", regardless of which brick was given.

Expected results:
Entry counts are printed for the replica pair that contains the brick given on the command line (disk5 and disk6 in the example above).

Additional info:

Comment 1 jiademing.dd 2016-04-12 07:33:41 UTC
int
_select_hxlator_with_matching_brick (xlator_t *this,
                                     glusterd_volinfo_t *volinfo, dict_t *dict,
                                     int *index)
{
        char                    *hostname = NULL;
        char                    *path = NULL;
        glusterd_brickinfo_t    *brickinfo = NULL;
        glusterd_conf_t         *priv   = NULL;
        int                     hxl_children = 0;

        priv = this->private;
        if (!dict ||
            dict_get_str (dict, "per-replica-cmd-hostname", &hostname) ||
            dict_get_str (dict, "per-replica-cmd-path", &path))
                return -1;

        hxl_children = _get_hxl_children_count (volinfo);
        if ((*index) == 0)
                (*index)++;

        cds_list_for_each_entry (brickinfo, &volinfo->bricks, brick_list) {
                if (gf_uuid_is_null (brickinfo->uuid))
                        (void)glusterd_resolve_brick (brickinfo);

                /* NOTE: only the uuid is compared here; the hostname and
                 * path fetched above are never consulted. */
                if (!gf_uuid_compare (MY_UUID, brickinfo->uuid)) {
                        _add_hxlator_to_dict (dict, volinfo,
                                              ((*index) - 1)/hxl_children, 0);
                        return 1;
                }
                (*index)++;
        }

        return 0;
}

_select_hxlator_with_matching_brick() fetches hostname and path from the dict, but never uses them: the loop only compares uuids, so the first local brick always wins and the first replica pair is always selected.
Is that the cause of the wrong match?
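
For illustration, a minimal sketch of how the fetched hostname and path could be used in the match, so that the hxlator of the brick actually named on the CLI is selected instead of the first local one. This assumes glusterd_brickinfo_t exposes hostname and path fields; it is only a sketch of the idea, not a patch.

        cds_list_for_each_entry (brickinfo, &volinfo->bricks, brick_list) {
                if (gf_uuid_is_null (brickinfo->uuid))
                        (void)glusterd_resolve_brick (brickinfo);

                /* Sketch: require the brick itself to match, not just any
                 * local uuid (hostname/path fields assumed to exist). */
                if (!gf_uuid_compare (MY_UUID, brickinfo->uuid) &&
                    !strcmp (hostname, brickinfo->hostname) &&
                    !strcmp (path, brickinfo->path)) {
                        _add_hxlator_to_dict (dict, volinfo,
                                              ((*index) - 1)/hxl_children, 0);
                        return 1;
                }
                (*index)++;
        }

With this kind of check, asking for node-1:/disk5 would select test-replica-2 ((5 - 1) / 2), i.e. the pair disk5/disk6, rather than always test-replica-0.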

Comment 2 Ravishankar N 2016-07-05 09:45:49 UTC
(In reply to jiademing.dd from comment #0)
> When I run "gluster vol heal test statistics heal-count replica
> node-1:/disk[1..8]" (i.e. any one of the eight bricks), only the first set
> of results (test-replica-0) is shown.
> [...]
> Is this a bug?

Hi jiademing,
This is how the current implementation works.
See the last section of https://github.com/gluster/glusterfs-specs/blob/master/done/Features/afr-statistics.md : the command prints the counts only for the replica pair given, and for the other bricks it prints "No gathered input for this brick".

What we could do is print the number of entries only for the bricks of the replica given in the command and not print anything for the other bricks. Is that acceptable?

Comment 3 jiademing.dd 2016-07-07 11:01:23 UTC
(In reply to Ravishankar N from comment #2)
> This is how the current implementation works.
> See the last section of
> https://github.com/gluster/glusterfs-specs/blob/master/done/Features/afr-statistics.md :
> the command prints the counts only for the replica pair given, and for the
> other bricks it prints "No gathered input for this brick".
> 
> What we could do is print the number of entries only for the bricks of the
> replica given in the command and not print anything for the other bricks. Is
> that acceptable?

Thank you for your reply. I understand that, but the problem is: even when I give a brick that is not in the first replica pair, the command prints the first replica pair's (test-replica-0's) info every time.

Like this:
gluster vol heal test statistics heal-count replica node-1:/disk5

(Same output as in the description: entry counts for disk1 and disk2, and "No gathered input for this brick" for disk3 through disk8.)
It should print the entry counts for disk5 and disk6, but it prints disk1 and disk2 every time.

Comment 4 Ravishankar N 2016-07-08 12:16:39 UTC
Thanks for the clarification Jiademing. You're right about _select_hxlator_with_matching_brick(), this seems to be a regression introduced by http://review.gluster.org/#/c/9793/4/xlators/mgmt/glusterd/src/glusterd-op-sm.c.

Feel free to send a patch if you like, re-adding the ALL_REPLICA and PER_REPLICA checks.

Comment 5 jiademing.dd 2016-07-14 09:49:00 UTC
(In reply to Ravishankar N from comment #4)
> Thanks for the clarification Jiademing. You're right about
> _select_hxlator_with_matching_brick(), this seems to be a regression
> introduced by
> http://review.gluster.org/#/c/9793/4/xlators/mgmt/glusterd/src/glusterd-op-
> sm.c.
> 
> Feel free to send a patch if you like, re-adding the ALL_REPLICA and
> PER_REPLICA checks.

Ha, I haven't submitted a patch before, as I'm not familiar with the process. I will try to; or you could submit it and then let me know. Thank you.

Comment 6 Worker Ant 2016-09-13 12:58:50 UTC
REVIEW: http://review.gluster.org/15494 (glusterd: "gluster v heal test statistics heal-count replica" output is not correct) posted (#1) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 7 Worker Ant 2016-09-14 05:38:59 UTC
REVIEW: http://review.gluster.org/15494 (glusterd: "gluster v heal test statistics heal-count replica" output is not correct) posted (#2) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 8 Worker Ant 2016-09-19 05:35:08 UTC
REVIEW: http://review.gluster.org/15494 (glusterd: "gluster v heal test statistics heal-count replica" output is not correct) posted (#3) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 9 Worker Ant 2016-09-19 06:11:17 UTC
REVIEW: http://review.gluster.org/15494 (glusterd: "gluster v heal test statistics heal-count replica" output is not correct) posted (#4) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 10 Worker Ant 2016-09-19 15:14:56 UTC
REVIEW: http://review.gluster.org/15494 (glusterd: "gluster v heal test statistics heal-count replica" output is not correct) posted (#5) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 11 Worker Ant 2016-09-26 17:30:44 UTC
COMMIT: http://review.gluster.org/15494 committed in master by Atin Mukherjee (amukherj) 
------
commit 7a80b6128ad91c1174a79b4fa6a0340dfd0b6d6b
Author: Mohit Agrawal <moagrawa>
Date:   Tue Sep 13 18:27:45 2016 +0530

    glusterd: "gluster v heal test statistics heal-count replica" output is not correct
    
    Problem :  "gluster v heal test statistics heal-count replica" does not
                show correct output.
    
    Solution: Update the condition in _select_hxlator_with_matching_brick
              to also match the brick name, so that the correct output is shown.
    
    BUG: 1325792
    Change-Id: I60cc7c68ea70bce267a747570f91dcddbc1d9016
    Signed-off-by: Mohit Agrawal <moagrawa>
    Reviewed-on: http://review.gluster.org/15494
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Ravishankar N <ravishankar>
    Reviewed-by: Atin Mukherjee <amukherj>

Comment 12 Shyamsundar 2017-03-06 17:20:01 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/

