Bug 1229244 - Data Tiering: Data moving to the tier (inertia) where data already exists instead of moving to hot tier first by default
Summary: Data Tiering: Data moving to the tier (inertia) where data already exists instead of moving to hot tier first by default
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Dan Lambright
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On: 1208367
Blocks: qe_tracker_everglades 1202842
 
Reported: 2015-06-08 10:25 UTC by Nag Pavan Chilakam
Modified: 2016-09-17 15:43 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1208367
Environment:
Last Closed: 2015-10-30 12:39:46 UTC
Embargoed:



Description Nag Pavan Chilakam 2015-06-08 10:25:26 UTC
+++ This bug was initially created as a clone of Bug #1208367 +++

Description of problem:
=======================
Adding new data to a tiered volume does not always go to the hot tier first by default.
If the volume is an existing distribute volume that already holds data, and it is then converted to a tiered volume by attaching a tier, any data added afterwards is still written to the cold tier instead of the hot tier, which makes the hot tier useless.
In short, new data goes to the hot tier only if the volume was created as a completely new tiered volume; if it was an existing regular volume with data that was later converted to a tiered volume, new data goes to the cold tier.
Note: I haven't set any CTR options.
Also, no volume quota policies are set.
I tested with bare-minimum zero-sized files, so storage space should not be an issue.

Version-Release number of selected component (if applicable):
============================================================
Upstream nightly build
glusterfs 3.7dev built on Mar 31 2015 01:05:54
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.



How reproducible:
================
easily

Steps to Reproduce:
==================
1. Have a regular distribute volume and add data to it.
2. Attach a tier to convert this volume to a tiered volume.
3. Add data to the tiered volume again.
It can be observed that the new data is not written to the new hot tier but is still added to the cold tier.
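
A minimal command sequence matching these steps might look like the following sketch; host names, brick paths, and file names are placeholders, and the attach-tier syntax is the 3.7-era CLI form:

# 1. create and start a plain distribute volume, mount it, and write some data
gluster volume create distvol server1:/rhs/brick1/d0 server2:/rhs/brick1/d0
gluster volume start distvol
mount -t glusterfs server1:/distvol /mnt/distvol
touch /mnt/distvol/pre_tier_file{1..10}

# 2. attach a hot tier, converting the volume to a tiered volume
gluster volume attach-tier distvol server1:/rhs/hot/h0 server2:/rhs/hot/h0

# 3. write new data, then check on each server which bricks received it
touch /mnt/distvol/post_tier_file{1..10}
ls -l /rhs/hot/h0 /rhs/brick1/d0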


Expected results:
================
Any new data should get added to the hot tier

--- Additional comment from Joseph Elwin Fernandes on 2015-04-14 07:48:41 EDT ---

I couldn't reproduce this bug with the latest upstream.

Distribute:
~~~~~~~~~~~~~~~
Step 1: Created a distribute volume
step 2: mounted using fuse and created 300 files.
step 3: all the files are in the volume bricks
step 4: attached a distribute tier
step 5: created new 300 files
step 6: all new files are on the hot-tier bricks


Distribute-Replica:
~~~~~~~~~~~~~~~~~~
Step 1: Created a distribute-replica 2 volume
step 2: mounted using fuse and created 300 files.
step 3: all the files are in the volume bricks
step 4: attached a distribute-replica 2 tier
step 5: created new 300 files
step 6: all new files are on the hot-tier bricks
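
As a rough sketch, the distribute-replica variant above could be driven with commands of this form (volume name, hosts, and brick paths are hypothetical):

# steps 1-3: replica-2 volume, fuse mount, 300 files on the volume bricks
gluster volume create drvol replica 2 server1:/rhs/brick1/b0 server2:/rhs/brick1/b0 server1:/rhs/brick2/b0 server2:/rhs/brick2/b0
gluster volume start drvol
mount -t glusterfs server1:/drvol /mnt/drvol
for i in $(seq 1 300); do touch /mnt/drvol/old_$i; done

# steps 4-6: attach a replica-2 hot tier, create 300 new files, check the hot-tier bricks
gluster volume attach-tier drvol replica 2 server1:/rhs/hot/h0 server2:/rhs/hot/h0
for i in $(seq 1 300); do touch /mnt/drvol/new_$i; done
ls /rhs/hot/h0 | wc -l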

--- Additional comment from Niels de Vos on 2015-05-15 09:07:41 EDT ---

This change should not be in "ON_QA"; the patch posted for this bug is only available in the master branch and not in a release yet. Moving back to MODIFIED until there is a beta release for the next GlusterFS version.

Comment 3 Triveni Rao 2015-06-11 19:28:47 UTC
I could reproduce this problem on the build below:

[root@rhsqa14-vm1 ~]# glusterfs --version
glusterfs 3.7.1 built on Jun  9 2015 02:31:54
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
[root@rhsqa14-vm1 ~]# rpm -qa | grep gluster
glusterfs-3.7.1-1.el6rhs.x86_64
glusterfs-cli-3.7.1-1.el6rhs.x86_64
glusterfs-libs-3.7.1-1.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-1.el6rhs.x86_64
glusterfs-fuse-3.7.1-1.el6rhs.x86_64
glusterfs-server-3.7.1-1.el6rhs.x86_64
glusterfs-api-3.7.1-1.el6rhs.x86_64
[root@rhsqa14-vm1 ~]# 



[root@rhsqa14-vm1 ~]# gluster v info
 
Volume Name: earth
Type: Tier
Volume ID: 0612ca5f-6b81-4e3f-bd3c-4915dcc9fb33
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: 10.70.47.163:/rhs/brick3/m0
Brick2: 10.70.47.165:/rhs/brick3/m0
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick3: 10.70.47.165:/rhs/brick1/m0
Brick4: 10.70.47.163:/rhs/brick1/m0
Brick5: 10.70.47.165:/rhs/brick2/m0
Brick6: 10.70.47.163:/rhs/brick2/m0
Options Reconfigured:
features.ctr-enabled: on
features.record-counters: on
cluster.tier-demote-frequency: 10
performance.readdir-ahead: on
[root@rhsqa14-vm1 ~]#
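
For reference, the reconfigured options shown above can also be applied explicitly with gluster volume set; a sketch using the option names, values, and volume name taken from the output above:

gluster volume set earth features.ctr-enabled on
gluster volume set earth features.record-counters on
gluster volume set earth cluster.tier-demote-frequency 10

To my understanding, cluster.tier-demote-frequency is the interval in seconds between demotion scans, so 10 is a very aggressive value.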




[root@rhsqa14-vm1 ~]# ls -la /rhs/brick1/*
total 0
drwxr-xr-x. 5 root root  74 Jun 11 15:06 .
drwxr-xr-x. 3 root root  15 Jun 11 15:04 ..
drw-------. 9 root root 119 Jun 11 15:11 .glusterfs
drwxr-xr-x. 2 root root   6 Jun 11 15:07 testing
drwxr-xr-x. 3 root root  24 Jun 11 15:06 .trashcan
-rw-r--r--. 2 root root   0 Jun 11 15:06 tri
-rw-r--r--. 2 root root   0 Jun 11 15:06 tri2
[root@rhsqa14-vm1 ~]# ls -la /rhs/brick2/*
total 0
drwxr-xr-x.  5 root root  91 Jun 11 15:11 .
drwxr-xr-x.  3 root root  15 Jun 11 15:04 ..
drw-------. 10 root root 128 Jun 11 15:11 .glusterfs
-rw-r--r--.  2 root root   0 Jun 11 15:10 move_toHT
drwxr-xr-x.  2 root root   6 Jun 11 15:07 testing
drwxr-xr-x.  3 root root  24 Jun 11 15:06 .trashcan
-rw-r--r--.  2 root root   0 Jun 11 15:06 tri1
-rw-r--r--.  2 root root   0 Jun 11 15:06 tri3
[root@rhsqa14-vm1 ~]# ls -la /rhs/brick3/*
total 0
drwxr-xr-x.  5 root root  96 Jun 11 15:11 .
drwxr-xr-x.  3 root root  15 Jun 11 15:06 ..
drw-------. 12 root root 146 Jun 11 15:11 .glusterfs
drwxr-xr-x.  2 root root   6 Jun 11 15:07 testing
drwxr-xr-x.  3 root root  24 Jun 11 15:06 .trashcan
---------T.  2 root root   0 Jun 11 15:10 tri
---------T.  2 root root   0 Jun 11 15:10 tri1
---------T.  2 root root   0 Jun 11 15:10 tri2
---------T.  2 root root   0 Jun 11 15:10 tri3
[root@rhsqa14-vm1 ~]#
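
In the listings above, /rhs/brick3/m0 is a hot-tier brick (Brick1/Brick2 in the volume info), and the zero-byte mode ---------T entries there are DHT-style link files, while the actual files sit on the cold-tier bricks. If needed, this can be double-checked by dumping the extended attributes of one such entry directly on the brick (path as in the listing above):

# run on the brick host; a linkto xattr indicates a link file rather than real data
getfattr -d -m . -e hex /rhs/brick3/m0/tri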


[root@rhsqa14-vm1 ~]# ls -la /rhs/brick3/*
total 0
drwxr-xr-x.  5 root root  96 Jun 11 15:16 .
drwxr-xr-x.  3 root root  15 Jun 11 15:06 ..
drw-------. 13 root root 155 Jun 11 15:16 .glusterfs
drwxr-xr-x.  2 root root   6 Jun 11 15:07 testing
drwxr-xr-x.  3 root root  24 Jun 11 15:06 .trashcan
---------T.  2 root root   0 Jun 11 15:10 tri
---------T.  2 root root   0 Jun 11 15:10 tri1
---------T.  2 root root   0 Jun 11 15:10 tri2
---------T.  2 root root   0 Jun 11 15:10 tri3
[root@rhsqa14-vm1 ~]# ls -la /rhs/brick2/*
total 0
drwxr-xr-x.  5 root root  91 Jun 11 15:11 .
drwxr-xr-x.  3 root root  15 Jun 11 15:04 ..
drw-------. 10 root root 128 Jun 11 15:16 .glusterfs
-rw-r--r--.  2 root root   0 Jun 11 15:10 move_toHT
drwxr-xr-x.  2 root root   6 Jun 11 15:07 testing
drwxr-xr-x.  3 root root  24 Jun 11 15:06 .trashcan
-rw-r--r--.  2 root root   0 Jun 11 15:06 tri1
-rw-r--r--.  2 root root   0 Jun 11 15:06 tri3
[root@rhsqa14-vm1 ~]# ls -la /rhs/brick1/*
total 0
drwxr-xr-x.  5 root root  98 Jun 11 15:16 .
drwxr-xr-x.  3 root root  15 Jun 11 15:04 ..
drw-------. 10 root root 128 Jun 11 15:17 .glusterfs
-rw-r--r--.  2 root root   0 Jun 11 15:16 once_more_move_HT
drwxr-xr-x.  2 root root   6 Jun 11 15:07 testing
drwxr-xr-x.  3 root root  24 Jun 11 15:06 .trashcan
-rw-r--r--.  2 root root   0 Jun 11 15:06 tri
-rw-r--r--.  2 root root   0 Jun 11 15:06 tri2
[root@rhsqa14-vm1 ~]#

