Bug 1328410

Summary: SAMBA+TIER: Wrong message displayed. On detach tier success, the message reflects that the tier command failed.
Product: [Community] GlusterFS
Component: tiering
Version: 3.7.11
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Reporter: hari gowtham <hgowtham>
Assignee: hari gowtham <hgowtham>
QA Contact: bugs <bugs>
CC: bugs, dlambrig, kaushal, kramdoss, nchilaka, rhinduja, rhs-smb, sankarshan, sashinde, vdas
Keywords: ZStream
Target Milestone: ---
Target Release: ---
Fixed In Version: glusterfs-3.7.12
Doc Type: Bug Fix
Type: Bug
Clone Of: 1324439
Bug Depends On: 1322695, 1324439
Bug Blocks: 1337908
Last Closed: 2016-06-28 12:14:18 UTC

Comment 1 Vijay Bellur 2016-04-19 10:45:41 UTC
REVIEW: http://review.gluster.org/14030 (Tier: tier command fails message when any node is down) posted (#1) for review on release-3.7 by hari gowtham (hari.gowtham005)

Comment 2 Vijay Bellur 2016-04-22 15:12:43 UTC
COMMIT: http://review.gluster.org/14030 committed in release-3.7 by Dan Lambright (dlambrig) 
------
commit 8918c35434f5af98d63180163081b175c3236e91
Author: hari <hgowtham>
Date:   Wed Apr 6 16:16:47 2016 +0530

    Tier: tier command fails message when any node is down
    
            back-port of: http://review.gluster.org/#/c/13918/
    
    PROBLEM: the dict doesn't get set for a node if it is down,
    so while printing the output on the CLI we get an ENOENT,
    which ends up as a "tier command failed" message.
    
    FIX: this patch skips the node that wasn't available
    and carries on with the next node, for both tier status
    and tier detach status.
    
    >Change-Id: I718a034b18b109748ec67f3ace56540c50650d23
    >BUG: 1324439
    >Signed-off-by: hari <hgowtham>
    >Reviewed-on: http://review.gluster.org/13918
    >Smoke: Gluster Build System <jenkins.com>
    >Tested-by: hari gowtham <hari.gowtham005>
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >CentOS-regression: Gluster Build System <jenkins.com>
    >Reviewed-by: Kaushal M <kaushal>
    
    Change-Id: Ia23df47596adb24816de4a2a1c8db875f145838e
    BUG: 1328410
    Signed-off-by: hari <hgowtham>
    Reviewed-on: http://review.gluster.org/14030
    Smoke: Gluster Build System <jenkins.com>
    Tested-by: hari gowtham <hari.gowtham005>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Dan Lambright <dlambrig>
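
For readers who want to see the shape of the fix, below is a minimal, self-contained C sketch of the behaviour the commit message describes: while printing per-node tier status, a node whose entries are missing from the dict (the ENOENT case, i.e. the peer was down) is skipped rather than turning the whole command into a failure. The data layout, names, and output format here are illustrative assumptions, not the actual cli code from http://review.gluster.org/14030.

/* Illustrative sketch only -- not the real cli code. It models the
 * per-node status table as a plain array; a NULL status stands in for
 * the ENOENT the real code sees when a node was down and never
 * populated its keys in the dict. */
#include <stdio.h>

struct node_status {
    const char *hostname;
    const char *status;   /* NULL => node was down, nothing in the dict */
};

/* Old behaviour: fail the whole command on the first missing entry
 * ("tier command failed"). New behaviour (modelled here): skip that
 * node and keep printing the rest. */
static int
print_tier_status(const struct node_status *nodes, int count)
{
    int printed = 0;

    for (int i = 0; i < count; i++) {
        if (nodes[i].status == NULL) {
            /* Equivalent of dict_get returning ENOENT for this node:
             * move on to the next node instead of aborting. */
            continue;
        }
        printf("%-20s %s\n", nodes[i].hostname, nodes[i].status);
        printed++;
    }
    return printed ? 0 : -1;   /* still fail if no node reported at all */
}

int
main(void)
{
    struct node_status nodes[] = {
        { "node1.example.com", "in progress" },
        { "node2.example.com", NULL },          /* this peer is down */
        { "node3.example.com", "in progress" },
    };

    printf("Node                 Status\n");
    return print_tier_status(nodes, 3) == 0 ? 0 : 1;
}

Per the commit message, the same skip-and-continue logic is applied to both tier status and tier detach status.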

Comment 4 Vijay Bellur 2016-05-20 12:24:45 UTC
REVIEW: http://review.gluster.org/14458 (tier/cli : printing a warning instead of skipping the node) posted (#1) for review on release-3.7 by hari gowtham (hari.gowtham005)

Comment 5 Vijay Bellur 2016-05-20 18:19:53 UTC
COMMIT: http://review.gluster.org/14458 committed in release-3.7 by Atin Mukherjee (amukherj) 
------
commit 903f27305cbff51f174f2704ea13ffa65083fd24
Author: hari gowtham <hgowtham>
Date:   Mon May 16 10:55:17 2016 +0530

    tier/cli : printing a warning instead of skipping the node
    
            back-port of: http://review.gluster.org/#/c/14347/8
    
    Problem: skipping the status of nodes that are down creates confusion
    for the user, as one might see the status as completed for all nodes,
    and then, while performing detach commit, the operation fails because
    a node is down.
    
    Fix: display a warning message for the nodes that are down.
    
    Note: when the last node (as per the peer list) is down, the warning
    message can't be displayed, as the total number of peers participating
    in the transaction is taken to be the total count.
    
    >Change-Id: Ib7afbd1b26df3378e4d537db06f41f5c105ad86e
    >BUG: 1324439
    >Signed-off-by: hari gowtham <hgowtham>
    
    Change-Id: Ie4296e932abaf163edc55b540b26dc6f5824ea85
    BUG: 1328410
    Signed-off-by: hari gowtham <hgowtham>
    Reviewed-on: http://review.gluster.org/14458
    Tested-by: hari gowtham <hari.gowtham005>
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>
    CentOS-regression: Gluster Build System <jenkins.com>
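
Continuing the same illustrative model (again an assumption-laden sketch, not the real cli code from http://review.gluster.org/14458): instead of silently skipping an unreachable node, the follow-up change prints a warning so the user does not read an all-"completed" status table and then have detach commit fail. As the commit's note says, the warning cannot be shown when the node that is down is the last one in the peer list, because the transaction's peer count is then taken as the total count.

/* Illustrative sketch of the follow-up behaviour: a warning line for
 * unreachable peers instead of silently dropping them from the output.
 * Names and wording are assumptions, not the real cli code. */
#include <stdio.h>

struct node_status {
    const char *hostname;
    const char *status;   /* NULL => node was down, nothing in the dict */
};

static void
print_tier_status(const struct node_status *nodes, int count)
{
    for (int i = 0; i < count; i++) {
        if (nodes[i].status == NULL) {
            /* Previously: `continue;` with no output (node silently
             * omitted). Now: warn, so the user does not assume the
             * operation completed everywhere. */
            fprintf(stderr,
                    "WARNING: node %s is not reachable; its tier status "
                    "could not be fetched\n", nodes[i].hostname);
            continue;
        }
        printf("%-20s %s\n", nodes[i].hostname, nodes[i].status);
    }
}

int
main(void)
{
    struct node_status nodes[] = {
        { "node1.example.com", "completed" },
        { "node2.example.com", NULL },          /* this peer is down */
        { "node3.example.com", "completed" },
    };

    printf("Node                 Status\n");
    print_tier_status(nodes, 3);
    return 0;
}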

Comment 6 Kaushal 2016-06-28 12:14:18 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.12, please open a new bug report.

glusterfs-3.7.12 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-devel/2016-June/049918.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user