Bug 1229268 - Files migrated should stay on a tier for a full cycle
Summary: Files migrated should stay on a tier for a full cycle
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Joseph Elwin Fernandes
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On: 1218717 1230857
Blocks: 1202842
Reported: 2015-06-08 10:44 UTC by Nag Pavan Chilakam
Modified: 2018-11-30 05:44 UTC (History)
10 users

Fixed In Version: glusterfs-3.7.1-2
Doc Type: Bug Fix
Doc Text:
Clone Of: 1218717
Environment:
Last Closed: 2015-07-29 04:59:24 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1495 0 normal SHIPPED_LIVE Important: Red Hat Gluster Storage 3.1 update 2015-07-29 08:26:26 UTC

Description Nag Pavan Chilakam 2015-06-08 10:44:19 UTC
+++ This bug was initially created as a clone of Bug #1218717 +++

When a file is migrated, it should stay on the destination tier for a full "cycle", meaning it should not immediately be moved based on timing on the destination tier.
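The intended rule can be sketched as a simple eligibility check; this is a minimal illustration of the behavior described above, not the actual tier xlator code (the function and parameter names are hypothetical):

```python
def eligible_for_migration(last_migrated_at: float, now: float,
                           cycle_len: float) -> bool:
    """A freshly migrated file must sit out at least one full
    promotion/demotion cycle before it can be moved again."""
    return (now - last_migrated_at) >= cycle_len
```

With a 120-second cycle, a file migrated at t=0 is not eligible again at t=119 but becomes eligible at t=120.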

--- Additional comment from Dan Lambright on 2015-06-03 09:06:20 EDT ---

This problem occurs if two nodes start at different times. Each node should run promotion/demotion at the same time.
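One way to make every node run its cycle at the same time, regardless of when its daemon started, is to align each node's wake-up to an absolute epoch boundary. A minimal sketch of that idea (hypothetical helpers, not the actual GlusterFS scheduler):

```python
import time

def next_cycle_boundary(now: int, freq: int) -> int:
    """Round 'now' up to the next multiple of 'freq' seconds since
    the epoch, so all nodes share the same cycle boundaries."""
    return now - (now % freq) + freq

def sleep_until_next_cycle(freq: int) -> None:
    # Sleep until the shared boundary; nodes with synced clocks
    # then start each promotion/demotion pass together.
    now = int(time.time())
    time.sleep(next_cycle_boundary(now, freq) - now)
```

This only works if node clocks are in sync, which is why QE recommends NTP-synchronized systems below.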

--- Additional comment from Anand Avati on 2015-06-06 01:17:44 EDT ---

REVIEW: http://review.gluster.org/11110 (tier/dht: Fixing non atomic promotion/demotion w.r.t to frequency period) posted (#1) for review on master by Joseph Fernandes (josferna@redhat.com)

--- Additional comment from Anand Avati on 2015-06-07 00:01:34 EDT ---

REVIEW: http://review.gluster.org/11110 (tier/dht: Fixing non atomic promotion/demotion w.r.t to frequency period) posted (#2) for review on master by Joseph Fernandes (josferna@redhat.com)

--- Additional comment from Anand Avati on 2015-06-08 05:54:46 EDT ---

REVIEW: http://review.gluster.org/11110 (tier/dht: Fixing non atomic promotion/demotion w.r.t to frequency period) posted (#3) for review on master by Joseph Fernandes (josferna@redhat.com)

Comment 3 Nag Pavan Chilakam 2015-07-06 14:22:58 UTC
Hi Joseph,
Can you please elaborate on what QE must do to verify this bug?

Comment 4 Nag Pavan Chilakam 2015-07-08 09:09:44 UTC
First, we recommend keeping all systems' clocks in sync.
I have tested and found that a file stays on one tier for the complete cycle.
1) Created a file; it goes to the hot tier. After some time, when the cycle completes, it moves to the cold tier. When re-accessed, the file does not move back to the hot tier immediately; it moves at the end of the cycle.

I tested with default values, i.e. I turned on CTR and set the write/read frequency thresholds to 0 (I did not set the promote/demote frequency, so the default of 120s applies), as that was the scope for 3.1.
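The configuration described above can be applied with the gluster CLI; the option names below are the GlusterFS 3.7 tiering options, and the volume name `tiervol` is illustrative:

```shell
# Enable the change-time recorder so the tier daemon can track file access
gluster volume set tiervol features.ctr-enabled on

# Threshold 0: any read/write counts toward a file's heat
gluster volume set tiervol cluster.write-freq-threshold 0
gluster volume set tiervol cluster.read-freq-threshold 0

# Promote/demote frequency left at the default of 120 seconds;
# to set it explicitly, uncomment:
# gluster volume set tiervol cluster.tier-promote-frequency 120
# gluster volume set tiervol cluster.tier-demote-frequency 120
```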


glusterfs 3.7.1 built on Jul  2 2015 21:01:51
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@nchilaka-tier01 ~]# rpm -qa|grep gluster
gluster-nagios-common-0.2.0-1.el6rhs.noarch
glusterfs-libs-3.7.1-7.el6rhs.x86_64
glusterfs-api-3.7.1-7.el6rhs.x86_64
glusterfs-rdma-3.7.1-7.el6rhs.x86_64
glusterfs-3.7.1-7.el6rhs.x86_64
glusterfs-fuse-3.7.1-7.el6rhs.x86_64
glusterfs-cli-3.7.1-7.el6rhs.x86_64
python-gluster-3.7.1-6.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-7.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-7.el6rhs.x86_64
gluster-nagios-addons-0.2.4-2.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-7.el6rhs.x86_64
glusterfs-server-3.7.1-7.el6rhs.x86_64


Issues observed during testing were tracked in separate bugs: BZ#1240925, BZ#1240577, BZ#1238944.

Comment 5 errata-xmlrpc 2015-07-29 04:59:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

