Bug 1328410 - SAMBA+TIER : Wrong message displayed. On detach tier success, the message reflects that the tier command failed.
Summary: SAMBA+TIER : Wrong message displayed. On detach tier success, the message reflects that the tier command failed.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: hari gowtham
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On: 1322695 1324439
Blocks: 1337908
 
Reported: 2016-04-19 10:43 UTC by hari gowtham
Modified: 2016-06-28 12:14 UTC
CC: 10 users

Fixed In Version: glusterfs-3.7.12
Doc Type: Bug Fix
Doc Text:
Clone Of: 1324439
: 1337908 (view as bug list)
Environment:
Last Closed: 2016-06-28 12:14:18 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Comment 1 Vijay Bellur 2016-04-19 10:45:41 UTC
REVIEW: http://review.gluster.org/14030 (Tier: tier command fails message when any node is down) posted (#1) for review on release-3.7 by hari gowtham (hari.gowtham005@gmail.com)

Comment 2 Vijay Bellur 2016-04-22 15:12:43 UTC
COMMIT: http://review.gluster.org/14030 committed in release-3.7 by Dan Lambright (dlambrig@redhat.com) 
------
commit 8918c35434f5af98d63180163081b175c3236e91
Author: hari <hgowtham@redhat.com>
Date:   Wed Apr 6 16:16:47 2016 +0530

    Tier: tier command fails message when any node is down
    
            back-port of : http://review.gluster.org/#/c/13918/
    
    PROBLEM: the dict doesn't get set for a node if it's down,
    so while printing the output on the CLI we get an ENOENT,
    which results in a "tier command failed" message.
    
    FIX: this patch skips the node that wasn't available
    and carries on with the next node, for both tier status
    and tier detach status.
    
    >Change-Id: I718a034b18b109748ec67f3ace56540c50650d23
    >BUG: 1324439
    >Signed-off-by: hari <hgowtham@redhat.com>
    >Reviewed-on: http://review.gluster.org/13918
    >Smoke: Gluster Build System <jenkins@build.gluster.com>
    >Tested-by: hari gowtham <hari.gowtham005@gmail.com>
    >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    >CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    >Reviewed-by: Kaushal M <kaushal@redhat.com>
    
    Change-Id: Ia23df47596adb24816de4a2a1c8db875f145838e
    BUG: 1328410
    Signed-off-by: hari <hgowtham@redhat.com>
    Reviewed-on: http://review.gluster.org/14030
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: hari gowtham <hari.gowtham005@gmail.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>

Comment 4 Vijay Bellur 2016-05-20 12:24:45 UTC
REVIEW: http://review.gluster.org/14458 (tier/cli : printing a warning instead of skipping the node) posted (#1) for review on release-3.7 by hari gowtham (hari.gowtham005@gmail.com)

Comment 5 Vijay Bellur 2016-05-20 18:19:53 UTC
COMMIT: http://review.gluster.org/14458 committed in release-3.7 by Atin Mukherjee (amukherj@redhat.com) 
------
commit 903f27305cbff51f174f2704ea13ffa65083fd24
Author: hari gowtham <hgowtham@redhat.com>
Date:   Mon May 16 10:55:17 2016 +0530

    tier/cli : printing a warning instead of skipping the node
    
            back-port of : http://review.gluster.org/#/c/14347/8
    
    Problem: silently skipping the status of nodes that are down
    confuses the user: one might see the status as completed for all
    nodes, yet a subsequent detach commit will fail because a node
    is down.
    
    Fix: Display a warning message
    
    Note: when the last node (as per the peer list) is down, the
    warning message can't be displayed, because the total number of
    peers participating in the transaction is taken as the total count.
    
    >Change-Id: Ib7afbd1b26df3378e4d537db06f41f5c105ad86e
    >BUG: 1324439
    >Signed-off-by: hari gowtham <hgowtham@redhat.com>
    
    Change-Id: Ie4296e932abaf163edc55b540b26dc6f5824ea85
    BUG: 1328410
    Signed-off-by: hari gowtham <hgowtham@redhat.com>
    Reviewed-on: http://review.gluster.org/14458
    Tested-by: hari gowtham <hari.gowtham005@gmail.com>
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>

Comment 6 Kaushal 2016-06-28 12:14:18 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.12, please open a new bug report.

glusterfs-3.7.12 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-devel/2016-June/049918.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

