Red Hat Bugzilla – Bug 1282076
cache mode must be the default mode for tiered volumes
Last modified: 2016-06-16 09:44:23 EDT
Description of problem:
Make cache mode the default for tiering
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Create a tiered volume and check the default tiering mode.

Actual results:
test mode is the default

Expected results:
cache mode is the default
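A quick way to confirm the active mode (a sketch assuming GlusterFS 3.7 or later and a tiered volume named "tiervol"; the volume name is illustrative):

# gluster volume get tiervol cluster.tier-mode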
Test mode is for internal QE testing and should not be used by customers. It forces all unused data to be destaged from the hot tier when no I/O is received, leaving the SSD unused and wasted. If the demote cycle time is made large as a workaround, data will stay on the hot tier but will not react to changing conditions, so unused data may fill up the hot tier. A long cycle time also means the system will stay static even when the hot tier is close to full, which will put the system into a degraded mode.
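For reference, the demote cycle time referred to in the workaround is tunable per volume; a minimal sketch, assuming a tiered volume named "tiervol" and a 30-minute cycle (both illustrative):

# gluster volume set tiervol cluster.tier-demote-frequency 1800

Raising this value only delays destaging in test mode; it does not change the flush-everything behavior itself.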
Cache mode is supposed to keep data on the hot tier but also demote data quickly when the hot tier is close to full. The cycle time can be set to the desired reaction time, swapping out less frequently used data and bringing in more frequently accessed data without flushing the entire SSD.
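On releases where test mode is still the default, cache mode can be selected explicitly (again assuming a volume named "tiervol"):

# gluster volume set tiervol cluster.tier-mode cache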
REVIEW: http://review.gluster.org/12581 (cluster/tier make cache mode default for tiered volumes) posted (#1) for review on master by Dan Lambright (email@example.com)
COMMIT: http://review.gluster.org/12581 committed in master by Dan Lambright (firstname.lastname@example.org)
Author: Dan Lambright <email@example.com>
Date: Sat Nov 14 14:35:26 2015 -0500
cluster/tier make cache mode default for tiered volumes
The default mode for tiered volumes must be cache. The current
test mode was for engineering and should ordinarily not be used.
Signed-off-by: Dan Lambright <firstname.lastname@example.org>
Tested-by: Gluster Build System <email@example.com>
Reviewed-by: mohammed rafi kc <firstname.lastname@example.org>
This bug was accidentally moved from POST to MODIFIED due to an error in automation; please contact email@example.com with any questions.
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.
glusterfs-3.8.0 has been announced on the Gluster mailing lists; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.