Bug 1283410

Summary: cache mode must be the default mode for tiered volumes
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Dan Lambright <dlambrig>
Component: tier
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED ERRATA
QA Contact: Nag Pavan Chilakam <nchilaka>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: asrivast, byarlaga, rcyriac, rhs-bugs, sankarshan, storage-qa-internal
Target Milestone: ---
Keywords: ZStream
Target Release: RHGS 3.1.2
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.7.5-7
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1282076
Environment:
Last Closed: 2016-03-01 05:55:52 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1282076
Bug Blocks: 1260783, 1283288

Description Dan Lambright 2015-11-18 21:40:42 UTC
+++ This bug was initially created as a clone of Bug #1282076 +++

Description of problem:

Make cache mode the default for tiering

Version-Release number of selected component (if applicable):


How reproducible:

always

Steps to Reproduce:
1. 
2. 
3.
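An illustrative reproduction sketch (hostnames, brick paths, and the volume name "tiervol" are hypothetical; attach-tier is the glusterfs 3.7 CLI form, newer builds use "gluster volume tier <volname> attach"):

# create and start a plain replica 2 volume
gluster volume create tiervol replica 2 server1:/rhs/brick1/tiervol server2:/rhs/brick1/tiervol
gluster volume start tiervol
# attach a hot tier, which converts it into a tiered volume
gluster volume attach-tier tiervol replica 2 server1:/rhs/ssd1/tiervol server2:/rhs/ssd1/tiervol
# check which tier mode the volume came up with
gluster volume info tiervol | grep cluster.tier-mode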

Actual results:

test mode is default

Expected results:

cache mode is default

Additional info:

Test mode is for internal QE testing and should not be used by customers. Test mode forces all unused data to be destaged from the hot tier when no I/O is received, leaving the SSD unused and wasted. If the demote cycle time is made large as a workaround, data will stay on the hot tier but will not react to changing conditions, which means unused data may fill the hot tier. A long cycle time also means the system will stay static even when close to full, which will put the system into a degraded mode.

Cache mode is supposed to keep data on the hot tier but also demote data quickly when the hot tier is close to full. The cycle time can be set to the desired reaction time, so that less frequently used data is swapped out and more frequently accessed data is brought in without flushing the entire SSD.
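For illustration, a sketch of how cache mode and its cycle times could be tuned (the volume name "tiervol" is hypothetical; the option names are the documented GlusterFS tiering options, and their availability and defaults may vary by build):

# cache mode is the intended default, but it can also be set explicitly
gluster volume set tiervol cluster.tier-mode cache
# promote/demote scan intervals in seconds (the "cycle time" above)
gluster volume set tiervol cluster.tier-promote-frequency 120
gluster volume set tiervol cluster.tier-demote-frequency 3600
# hot tier fullness thresholds (percent) that drive demotion in cache mode
gluster volume set tiervol cluster.watermark-low 75
gluster volume set tiervol cluster.watermark-hi 90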

--- Additional comment from Vijay Bellur on 2015-11-14 14:57:34 EST ---

REVIEW: http://review.gluster.org/12581 (cluster/tier make cache mode default for tiered volumes) posted (#1) for review on master by Dan Lambright (dlambrig)

--- Additional comment from Vijay Bellur on 2015-11-17 07:18:33 EST ---

COMMIT: http://review.gluster.org/12581 committed in master by Dan Lambright (dlambrig) 
------
commit dcd1ff344d242f64f3a5c579df97a050736e6633
Author: Dan Lambright <dlambrig>
Date:   Sat Nov 14 14:35:26 2015 -0500

    cluster/tier make cache mode default for tiered volumes
    
    The default mode for tiered volumes must be cache. The current
    test mode was for engineering and should ordinarily not be used
    by customers.
    
    Change-Id: I20583f54a9269ce75daade645be18ab8575b0b9b
    BUG: 1282076
    Signed-off-by: Dan Lambright <dlambrig>
    Reviewed-on: http://review.gluster.org/12581
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: mohammed rafi  kc <rkavunga>

Comment 3 Nag Pavan Chilakam 2015-11-25 10:35:10 UTC
With the 3.7.5-7 build, cache mode is now the default.
Volume info shows it as below:
[root@zod ~]# gluster v info metadata
 
Volume Name: metadata
Type: Tier
Volume ID: 23dcc876-5343-4ab9-b320-844e30e4a67c
Status: Started
Number of Bricks: 16
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: yarrow:/rhs/brick6/metadata_hot
Brick2: zod:/rhs/brick6/metadata_hot
Brick3: yarrow:/rhs/brick7/metadata_hot
Brick4: zod:/rhs/brick7/metadata_hot
Cold Tier:
Cold Tier Type : Distributed-Disperse
Number of Bricks: 2 x (4 + 2) = 12
Brick5: zod:/rhs/brick1/metadata
Brick6: yarrow:/rhs/brick1/metadata
Brick7: zod:/rhs/brick2/metadata
Brick8: yarrow:/rhs/brick2/metadata
Brick9: zod:/rhs/brick3/metadata
Brick10: yarrow:/rhs/brick3/metadata
Brick11: zod:/rhs/brick4/metadata
Brick12: yarrow:/rhs/brick4/metadata
Brick13: zod:/rhs/brick5/metadata
Brick14: yarrow:/rhs/brick5/metadata
Brick15: yarrow:/rhs/brick6/metadata
Brick16: zod:/rhs/brick6/metadata
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
performance.readdir-ahead: on

Also, as a sanity check, file promotes and demotes are happening only when the watermark criteria are hit.

Changed the mode to test, and test mode works as the previous default behavior did (mode switch sketched below).
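A sketch of that switch (volume name taken from the output above; "gluster volume get" is assumed to be available in this build):

# flip to test mode to compare against the old default behaviour
gluster volume set metadata cluster.tier-mode test
# return to the new default
gluster volume set metadata cluster.tier-mode cache
# confirm the effective value
gluster volume get metadata cluster.tier-mode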

Hence, moving to Verified.

Comment 5 errata-xmlrpc 2016-03-01 05:55:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html