Bug 1283288 - cache mode must be the default mode for tiered volumes
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.6
Hardware/OS: Unspecified
Severity: unspecified
Assigned To: bugs@gluster.org
Depends On: 1282076 1283410
Blocks:
Reported: 2015-11-18 10:37 EST by Dan Lambright
Modified: 2016-04-19 03:48 EDT

Fixed In Version: glusterfs-3.7.7
Doc Type: Bug Fix
Clone Of: 1282076
Last Closed: 2016-04-19 03:48:31 EDT
Type: Bug

Attachments: None
Description Dan Lambright 2015-11-18 10:37:29 EST
+++ This bug was initially created as a clone of Bug #1282076 +++

Description of problem:

Make cache mode the default for tiering

Version-Release number of selected component (if applicable):


How reproducible:

always

Steps to Reproduce:
1. 
2. 
3.

Actual results:

test mode is default

Expected results:

cache mode is default

Additional info:

Test mode is intended for internal QE testing and should not be used by customers. Test mode forces all unused data to be demoted from the hot tier when no I/O is received, leaving the SSD idle and wasted. If the demote cycle time is made large as a workaround, data will stay on the hot tier but will not react to changing conditions, so unused data may fill the hot tier. A long cycle time also means the system will stay static even when close to full, which will put the system into a degraded mode.

Cache mode, by contrast, is designed to keep data on the hot tier but demote it quickly once the hot tier is close to full. The cycle time can be set to the desired reaction time, swapping out less-used data and bringing in more frequently accessed data without flushing the entire SSD.
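For reference, the tier mode described above is controlled by the `cluster.tier-mode` volume option. A minimal sketch of exercising it against a tiered volume (the volume name `tiervol` and the brick paths are placeholders, and these commands assume a running glusterd with an existing volume):

```shell
# Attach an SSD-backed hot tier to an existing volume
# ("tiervol" and the brick paths are illustrative placeholders).
gluster volume tier tiervol attach replica 2 \
    server1:/ssd/brick1 server2:/ssd/brick2

# With this fix, cache is the default mode; it can also be set explicitly:
gluster volume set tiervol cluster.tier-mode cache

# test mode remains available, but only for internal QE use:
#   gluster volume set tiervol cluster.tier-mode test

# Inspect the current setting
gluster volume get tiervol cluster.tier-mode
```

These commands only take effect on a live cluster; they are shown here to illustrate which knob the default applies to, not as part of the reproduction steps.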

--- Additional comment from Vijay Bellur on 2015-11-14 14:57:34 EST ---

REVIEW: http://review.gluster.org/12581 (cluster/tier make cache mode default for tiered volumes) posted (#1) for review on master by Dan Lambright (dlambrig@redhat.com)

--- Additional comment from Vijay Bellur on 2015-11-17 07:18:33 EST ---

COMMIT: http://review.gluster.org/12581 committed in master by Dan Lambright (dlambrig@redhat.com) 
------
commit dcd1ff344d242f64f3a5c579df97a050736e6633
Author: Dan Lambright <dlambrig@redhat.com>
Date:   Sat Nov 14 14:35:26 2015 -0500

    cluster/tier make cache mode default for tiered volumes
    
    The default mode for tiered volumes must be cache. The current
    test mode was for engineering and should ordinarily not be used
    by customers.
    
    Change-Id: I20583f54a9269ce75daade645be18ab8575b0b9b
    BUG: 1282076
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-on: http://review.gluster.org/12581
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: mohammed rafi  kc <rkavunga@redhat.com>
Comment 1 Vijay Bellur 2015-11-18 13:01:12 EST
REVIEW: http://review.gluster.org/12647 (cluster/tier make cache mode default for tiered volumes) posted (#1) for review on release-3.7 by Dan Lambright (dlambrig@redhat.com)
Comment 2 Vijay Bellur 2015-11-18 13:03:51 EST
REVIEW: http://review.gluster.org/12647 (cluster/tier make cache mode default for tiered volumes) posted (#2) for review on release-3.7 by Dan Lambright (dlambrig@redhat.com)
Comment 3 Vijay Bellur 2015-11-19 11:48:42 EST
COMMIT: http://review.gluster.org/12647 committed in release-3.7 by Dan Lambright (dlambrig@redhat.com) 
------
commit e24fb2278bc0b0da88ec8c7b2d873c3e4a864d9d
Author: Dan Lambright <dlambrig@redhat.com>
Date:   Sat Nov 14 14:35:26 2015 -0500

    cluster/tier make cache mode default for tiered volumes
    
    The default mode for tiered volumes must be cache. The current
    test mode was for engineering and should ordinarily not be used
    by customers.
    
    This is a back port of 12581
    
    > Change-Id: I20583f54a9269ce75daade645be18ab8575b0b9b
    > BUG: 1282076
    > Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    > Reviewed-on: http://review.gluster.org/12581
    > Tested-by: Gluster Build System <jenkins@build.gluster.com>
    > Reviewed-by: mohammed rafi  kc <rkavunga@redhat.com>
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    
    Change-Id: Ib2629d6d3e9b9374fddb5bc21cf068a1bcd96b9d
    BUG: 1283288
    Reviewed-on: http://review.gluster.org/12647
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    Tested-by: Dan Lambright <dlambrig@redhat.com>
Comment 4 Mike McCune 2016-03-28 18:22:31 EDT
This bug was accidentally moved from POST to MODIFIED via an error in automation; please contact mmccune@redhat.com with any questions.
Comment 5 Kaushal 2016-04-19 03:48:31 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.7, please open a new bug report.

glusterfs-3.7.7 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-February/025292.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
