Bug 1217445 - data tiering: tiering core functionality Data heating/cooling not working on a tiered volume
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: Joseph Elwin Fernandes
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On:
Blocks: qe_tracker_everglades 1229264 1260923
 
Reported: 2015-04-30 11:45 UTC by Nag Pavan Chilakam
Modified: 2016-06-20 00:01 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned to: 1229264
Environment:
Last Closed: 2016-02-14 06:47:19 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Nag Pavan Chilakam 2015-04-30 11:45:58 UTC
Description of problem:
======================
Data does not appear to be getting heated or cooled after setting the tiering volume options on a tiered volume.
I have tested this with both a freshly created tiered volume and an existing volume converted to a tiered one.

I have set the following volume options:
Options Reconfigured:
cluster.tier-demote-frequency: 10
features.ctr-enabled: 1

A new file gets created on the hot tier as expected, but even after 10 minutes (and the demote frequency is set in seconds) the file still remains on the hot tier and does not get demoted. I have not accessed the file in any manner during those 10 minutes.

Version-Release number of selected component (if applicable):
============================================================
[root@yarrow ~]# gluster --version
glusterfs 3.7.0alpha0 built on Apr 28 2015 01:37:11
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@yarrow ~]# rpm -qa|grep gluster
glusterfs-fuse-3.7.0alpha0-0.17.gited96153.el7.centos.x86_64
glusterfs-libs-3.7.0alpha0-0.17.gited96153.el7.centos.x86_64
glusterfs-3.7.0alpha0-0.17.gited96153.el7.centos.x86_64
glusterfs-cli-3.7.0alpha0-0.17.gited96153.el7.centos.x86_64
glusterfs-server-3.7.0alpha0-0.17.gited96153.el7.centos.x86_64
glusterfs-api-3.7.0alpha0-0.17.gited96153.el7.centos.x86_64



Steps to Reproduce:
===================
1. Create a tiered volume.
2. Set features.ctr-enabled to on, and set cluster.tier-demote-frequency and cluster.tier-promote-frequency.
3. Check whether data is getting heated/cooled after the configured intervals (see the condensed sketch after this list).
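
A condensed reproduction sketch, using the same hosts, brick paths and option values as the transcript below (nothing here is generic; it simply restates the commands from this report in order):

# create a 2x2 cold tier and attach a replica-2 hot tier
gluster volume create tiered replica 2 \
    yarrow:/brick_100G_10/tiered moonshine:/brick_100G_10/tiered \
    yarrow:/brick_100G_9/tiered moonshine:/brick_100G_9/tiered
gluster volume attach-tier tiered \
    moonshine:/ssdbricks_50G_1/tiered yarrow:/ssdbricks_50G_1/tiered

# enable CTR and lower the demote frequency to 10 seconds, then start the volume
gluster volume set tiered features.ctr-enabled on
gluster volume set tiered cluster.tier-demote-frequency 10
gluster volume start tiered

# create a file from a client mount, leave it untouched, and check whether it
# ever moves from the hot (ssd) bricks to the cold bricks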

Actual results:
===============
Data is not getting heated or cooled; files do not move between tiers.

Expected results:
=================
Data should get heated or cooled and move between the hot and cold tiers according to the configured promote/demote frequencies.


[root@yarrow ~]# gluster v create tiered replica 2 yarrow:/brick_100G_10/tiered moonshine:/brick_100G_10/tiered  yarrow:/brick_100G_9/tiered moonshine:/brick_100G_9/tiered
volume create: tiered: success: please start the volume to access data
You have new mail in /var/spool/mail/root
[root@yarrow ~]# gluster v attach-tier moonshine:/ssdbricks_50G_1/tiered yarrow:/ssdbricks_50G_1/tiered
volume add-brick: failed: Volume moonshine:/ssdbricks_50G_1/tiered does not exist
[root@yarrow ~]# gluster v attach-tier tiered moonshine:/ssdbricks_50G_1/tiered yarrow:/ssdbricks_50G_1/tiered
volume add-brick: success
[root@yarrow ~]# gluster v info tiered
 
Volume Name: tiered
Type: Tier
Volume ID: 5e2977ba-c102-44ad-9b8c-bc95f988df5c
Status: Created
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: yarrow:/ssdbricks_50G_1/tiered
Brick2: moonshine:/ssdbricks_50G_1/tiered
Brick3: yarrow:/brick_100G_10/tiered
Brick4: moonshine:/brick_100G_10/tiered
Brick5: yarrow:/brick_100G_9/tiered
Brick6: moonshine:/brick_100G_9/tiered
[root@yarrow ~]# gluster v info tiered
 
Volume Name: tiered
Type: Tier
Volume ID: 5e2977ba-c102-44ad-9b8c-bc95f988df5c
Status: Created
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: yarrow:/ssdbricks_50G_1/tiered
Brick2: moonshine:/ssdbricks_50G_1/tiered
Brick3: yarrow:/brick_100G_10/tiered
Brick4: moonshine:/brick_100G_10/tiered
Brick5: yarrow:/brick_100G_9/tiered
Brick6: moonshine:/brick_100G_9/tiered
[root@yarrow ~]# gluster v set tiered features.ctr-enabled sadsa
volume set: failed: option ctr-enabled sadsa: 'sadsa' is not a valid boolean value
[root@yarrow ~]# gluster v set tiered features.ctr-enabled 1
volume set: success
[root@yarrow ~]# gluster v info tiered
 
Volume Name: tiered
Type: Tier
Volume ID: 5e2977ba-c102-44ad-9b8c-bc95f988df5c
Status: Created
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: yarrow:/ssdbricks_50G_1/tiered
Brick2: moonshine:/ssdbricks_50G_1/tiered
Brick3: yarrow:/brick_100G_10/tiered
Brick4: moonshine:/brick_100G_10/tiered
Brick5: yarrow:/brick_100G_9/tiered
Brick6: moonshine:/brick_100G_9/tiered
Options Reconfigured:
features.ctr-enabled: 1
[root@yarrow ~]# gluster v set tiered tier-demote-frequency 10
volume set: success
[root@yarrow ~]# gluster v info tiered
 
Volume Name: tiered
Type: Tier
Volume ID: 5e2977ba-c102-44ad-9b8c-bc95f988df5c
Status: Created
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: yarrow:/ssdbricks_50G_1/tiered
Brick2: moonshine:/ssdbricks_50G_1/tiered
Brick3: yarrow:/brick_100G_10/tiered
Brick4: moonshine:/brick_100G_10/tiered
Brick5: yarrow:/brick_100G_9/tiered
Brick6: moonshine:/brick_100G_9/tiered
Options Reconfigured:
cluster.tier-demote-frequency: 10
features.ctr-enabled: 1
[root@yarrow ~]# gluster v start tiered
volume start: tiered: success
[root@yarrow ~]# gluster v info tiered
 
Volume Name: tiered
Type: Tier
Volume ID: 5e2977ba-c102-44ad-9b8c-bc95f988df5c
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: yarrow:/ssdbricks_50G_1/tiered
Brick2: moonshine:/ssdbricks_50G_1/tiered
Brick3: yarrow:/brick_100G_10/tiered
Brick4: moonshine:/brick_100G_10/tiered
Brick5: yarrow:/brick_100G_9/tiered
Brick6: moonshine:/brick_100G_9/tiered
Options Reconfigured:
cluster.tier-demote-frequency: 10
features.ctr-enabled: 1
You have new mail in /var/spool/mail/root
[root@yarrow ~]# ls /*br*/tiered
/brick_100G_10/tiered:

/brick_100G_9/tiered:

/ssdbricks_50G_1/tiered:
good
[root@yarrow ~]# ls -l ls /*br*/tiered
ls: cannot access ls: No such file or directory
/brick_100G_10/tiered:
total 0

/brick_100G_9/tiered:
total 0

/ssdbricks_50G_1/tiered:
total 8
-rw-r--r--. 2 root root 15 Apr 30 17:05 good
[root@yarrow ~]# date
Thu Apr 30 17:07:00 IST 2015
[root@yarrow ~]# echo "select * from gf_file_tb; select * from gf_flink_tb;" | sqlite3 /ssdbricks_50G_1/tiered/.glusterfs/tiered.db
11b2cd42-350a-46d1-b141-f37322b225e7|1430393750|277120|0|0|0|0|0|0|1|1
11b2cd42-350a-46d1-b141-f37322b225e7|00000000-0000-0000-0000-000000000001|good|/good|0|0
[root@yarrow ~]# dae
bash: dae: command not found...
[root@yarrow ~]# date
Thu Apr 30 17:07:55 IST 2015
[root@yarrow ~]# ls -l ls /*br*/tiered
ls: cannot access ls: No such file or directory
/brick_100G_10/tiered:
total 0

/brick_100G_9/tiered:
total 0

/ssdbricks_50G_1/tiered:
total 8
-rw-r--r--. 2 root root 15 Apr 30 17:05 good
You have new mail in /var/spool/mail/root
[root@yarrow ~]# echo "select * from gf_file_tb; select * from gf_flink_tb;" | sqlite3 /ssdbricks_50G_1/tiered/.glusterfs/tiered.db
11b2cd42-350a-46d1-b141-f37322b225e7|1430393750|277120|0|0|0|0|0|0|1|1
11b2cd42-350a-46d1-b141-f37322b225e7|00000000-0000-0000-0000-000000000001|good|/good|0|0
[root@yarrow ~]# gluster --version
glusterfs 3.7.0alpha0 built on Apr 28 2015 01:37:11
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@yarrow ~]# rpm -qa|grep gluster
glusterfs-fuse-3.7.0alpha0-0.17.gited96153.el7.centos.x86_64
glusterfs-libs-3.7.0alpha0-0.17.gited96153.el7.centos.x86_64
glusterfs-3.7.0alpha0-0.17.gited96153.el7.centos.x86_64
glusterfs-cli-3.7.0alpha0-0.17.gited96153.el7.centos.x86_64
glusterfs-server-3.7.0alpha0-0.17.gited96153.el7.centos.x86_64
glusterfs-api-3.7.0alpha0-0.17.gited96153.el7.centos.x86_64
[root@yarrow ~]# gluster v info
 
Volume Name: newvol
Type: Tier
Volume ID: afa217fd-d255-4343-9ee7-b932d2c2bd47
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: moonshine:/ssdbricks_50G_1/newvoll
Brick2: yarrow:/ssdbricks_50G_1/newvoll
Brick3: moonshine:/brick_100G_10/newvol
Brick4: yarrow:/brick_100G_10/newvol
Brick5: moonshine:/brick_100G_9/newvol
Brick6: yarrow:/brick_100G_9/newvol
Options Reconfigured:
cluster.tier-demote-frequency: 1
features.ctr-enabled: on
 
Volume Name: tiered
Type: Tier
Volume ID: 5e2977ba-c102-44ad-9b8c-bc95f988df5c
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: yarrow:/ssdbricks_50G_1/tiered
Brick2: moonshine:/ssdbricks_50G_1/tiered
Brick3: yarrow:/brick_100G_10/tiered
Brick4: moonshine:/brick_100G_10/tiered
Brick5: yarrow:/brick_100G_9/tiered
Brick6: moonshine:/brick_100G_9/tiered
Options Reconfigured:
cluster.tier-demote-frequency: 10
features.ctr-enabled: 1
[root@yarrow ~]# echo "select * from gf_file_tb; select * from gf_flink_tb;" | sqlite3 /ssdbricks_50G_1/tiered/.glusterfs/tiered.db



[root@moonshine newvol]# ls /*br*/tiered
/brick_100G_10/tiered:

/brick_100G_9/tiered:

/ssdbricks_50G_1/tiered:
You have new mail in /var/spool/mail/root
[root@moonshine newvol]# echo "select * from gf_file_tb; select * from gf_flink_tb;" | sqlite3 /ssdbricks_50G_1/tiered/.glusterfs/tiered.db
[root@moonshine newvol]# 


Additional info:
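For anyone picking this up, the CTR database on the hot brick can be inspected directly. A small sketch for interpreting the pipe-separated dumps above (the .schema meta-command and -header flag are standard sqlite3; the database path is the one from this report, and the column names are whatever the schema reports rather than assumed here):

# print the column layout of the two tables dumped above
sqlite3 /ssdbricks_50G_1/tiered/.glusterfs/tiered.db ".schema gf_file_tb"
sqlite3 /ssdbricks_50G_1/tiered/.glusterfs/tiered.db ".schema gf_flink_tb"

# re-run the dump with column headers so the counter fields are labelled
echo "select * from gf_file_tb;" | sqlite3 -header /ssdbricks_50G_1/tiered/.glusterfs/tiered.db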

Comment 1 Joseph Elwin Fernandes 2015-04-30 13:10:05 UTC
You need to set another volume option:

gluster volume set <volume_name> record-counters on

This will make the CTR xlator start counting frequency counters.
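
For the volume in this report, that would presumably be the following (assumption: the fully-qualified option name is features.record-counters, matching the features.* prefix used for ctr-enabled above; comment 1 only gives the short form):

gluster volume set tiered features.record-counters on
# verify it shows up under "Options Reconfigured"
gluster volume info tiered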

Comment 2 Nag Pavan Chilakam 2015-05-04 09:28:51 UTC
I have tried this, but it still does not work.
Assigning it back.
You can have a look at the machine moonshine.lab.eng.blr.redhat.com.
The volume name is "tiered".

Comment 3 Joseph Elwin Fernandes 2015-05-12 05:45:56 UTC
This is fixed and merged in master and 3.7.

Comment 4 Niels de Vos 2015-05-15 13:07:45 UTC
This change should not be in "ON_QA"; the patch posted for this bug is only available in the master branch and not in a release yet. Moving back to MODIFIED until there is a beta release for the next GlusterFS version.

