+++ This bug was initially created as a clone of Bug #1218717 +++

When a file is migrated, it should stay on the destination tier for a full "cycle", meaning it should not immediately be moved based on timing on the destination tier.

--- Additional comment from Dan Lambright on 2015-06-03 09:06:20 EDT ---

This problem occurs if two nodes start at different times. Each node should run promotion/demotion at the same time.

--- Additional comment from Anand Avati on 2015-06-06 01:17:44 EDT ---

REVIEW: http://review.gluster.org/11110 (tier/dht: Fixing non atomic promotion/demotion w.r.t to frequency period) posted (#1) for review on master by Joseph Fernandes (josferna)

--- Additional comment from Anand Avati on 2015-06-07 00:01:34 EDT ---

REVIEW: http://review.gluster.org/11110 (tier/dht: Fixing non atomic promotion/demotion w.r.t to frequency period) posted (#2) for review on master by Joseph Fernandes (josferna)

--- Additional comment from Anand Avati on 2015-06-08 05:54:46 EDT ---

REVIEW: http://review.gluster.org/11110 (tier/dht: Fixing non atomic promotion/demotion w.r.t to frequency period) posted (#3) for review on master by Joseph Fernandes (josferna)
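The comments above describe two rules: a migrated file must remain on the destination tier for a full promote/demote cycle, and all nodes must run the promotion/demotion scan at the same time. This is a minimal Python sketch of those rules, not the actual tier/dht code from the patch; the function names are hypothetical, and a default 120-second cycle is assumed:

```python
def eligible_for_migration(migrated_at, now, cycle_secs=120):
    """A file migrated at `migrated_at` may only be moved again after a
    full cycle has elapsed on the destination tier (no immediate bounce)."""
    return (now - migrated_at) >= cycle_secs


def next_cycle_start(now, cycle_secs=120):
    """Each node derives the next scan boundary from the shared wall clock,
    so promotion/demotion runs start at the same time on every node even
    if the daemons themselves started at different times."""
    return ((now // cycle_secs) + 1) * cycle_secs
```

With both rules in place, a node that starts mid-cycle still waits for the common cycle boundary before scanning, and a freshly promoted file survives until at least the next boundary.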
Hi Joseph, can you please elaborate on what QE must do to verify this bug?
Firstly, we recommend keeping all systems on the same time. I have tested and found that a file stays on one tier for the complete cycle:

1) Created a file; it goes to the hot tier. After some time, when the cycle is completed, it moves to the cold tier. When re-accessed, the file does not move to the hot tier immediately; it moves at the end of the cycle.

I tested with default values, i.e. I turned on CTR and set the write/read frequency thresholds to 0 (I did not set the promote/demote frequencies, so the default of 120 s should apply), as that was the scope for 3.1.

glusterfs 3.7.1 built on Jul 2 2015 21:01:51
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

[root@nchilaka-tier01 ~]# rpm -qa|grep gluster
gluster-nagios-common-0.2.0-1.el6rhs.noarch
glusterfs-libs-3.7.1-7.el6rhs.x86_64
glusterfs-api-3.7.1-7.el6rhs.x86_64
glusterfs-rdma-3.7.1-7.el6rhs.x86_64
glusterfs-3.7.1-7.el6rhs.x86_64
glusterfs-fuse-3.7.1-7.el6rhs.x86_64
glusterfs-cli-3.7.1-7.el6rhs.x86_64
python-gluster-3.7.1-6.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-7.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-7.el6rhs.x86_64
gluster-nagios-addons-0.2.4-2.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-7.el6rhs.x86_64
glusterfs-server-3.7.1-7.el6rhs.x86_64

There were issues observed for which bugs were raised: BZ#1240925, BZ#1240577, BZ#1238944.
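For reference, the verification setup described above could be reproduced with gluster CLI commands along these lines. This is a hedged sketch, not the exact commands run during testing: the volume name `tiervol` is hypothetical, and the option names are as I understand them for GlusterFS 3.7 tiering; they need a live tiered volume to run against.

```shell
# Enable the change-time recorder so tiering can track file access
gluster volume set tiervol features.ctr-enabled on

# Threshold 0: any read or write counts toward promotion (per the test above)
gluster volume set tiervol cluster.read-freq-threshold 0
gluster volume set tiervol cluster.write-freq-threshold 0

# Promote/demote frequencies are deliberately left at their defaults (120 s)
```

The promote/demote frequency options are intentionally not set here, matching the test description of using the defaults.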
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1495.html