Bug 1347509 - Data Tiering:tier volume status shows as in-progress on all nodes of a cluster even if the node is not part of volume
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.8.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: low
Target Milestone: ---
Assignee: hari gowtham
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On: 1283957 1315666
Blocks: 1268895 1316808
 
Reported: 2016-06-17 06:48 UTC by hari gowtham
Modified: 2016-07-08 14:43 UTC
CC: 8 users

Fixed In Version: glusterfs-3.8.1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1315666
Environment:
Last Closed: 2016-07-08 14:43:30 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Vijay Bellur 2016-06-17 06:49:52 UTC
REVIEW: http://review.gluster.org/14749 (Tier/glusterd: Resetting the tier status value to not started) posted (#1) for review on release-3.8 by hari gowtham (hari.gowtham005)

Comment 2 Vijay Bellur 2016-06-20 04:09:10 UTC
COMMIT: http://review.gluster.org/14749 committed in release-3.8 by Atin Mukherjee (amukherj) 
------
commit 362f90b5612c0f38894684d1d6f3bd66a31fe5b1
Author: hari <hgowtham>
Date:   Thu Apr 28 19:36:25 2016 +0530

    Tier/glusterd: Resetting the tier status value to not started
    
            back-port of : http://review.gluster.org/#/c/14106/
            back-port of : http://review.gluster.org/#/c/14229/
    
    Problem: during a volume restart or a "tier start force", the
    tier status value is set to "started" irrespective of the result.
    
    Fix: the appropriate status value is set when the rebalance
    function is restarted.
    
    >Change-Id: I6164f0add48542a57dee059e80fa0f9bb036dbef
    >BUG: 1315666
    >Signed-off-by: hari <hgowtham>
    
    >Change-Id: Ie4345bd7ce1d458574e36b70fe8994b3d758396a
    >BUG: 1316808
    >Signed-off-by: hari <hgowtham>
    >Reviewed-on: http://review.gluster.org/14229
    >Smoke: Gluster Build System <jenkins.com>
    >Tested-by: hari gowtham <hari.gowtham005>
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >CentOS-regression: Gluster Build System <jenkins.com>
    >Reviewed-by: Atin Mukherjee <amukherj>
    
    Change-Id: I8e8e0662535c9dbe09eb6c7078422b40c218b473
    BUG: 1347509
    Signed-off-by: hari gowtham <hgowtham>
    Reviewed-on: http://review.gluster.org/14749
    Tested-by: hari gowtham <hari.gowtham005>
    Reviewed-by: Atin Mukherjee <amukherj>
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>

Comment 3 Niels de Vos 2016-07-08 14:43:30 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.1, please open a new bug report.

glusterfs-3.8.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.packaging/156
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
