Description of problem:
========================
On an EC cold volume, when files are promoted or demoted to/from the hot tier, the tier daemon appears to treat each fragment of a file on each brick as a separate file; at least the counters suggest this. I had 3 files on a 2 x (4 + 2) = 12 EC cold volume. When they were promoted or demoted to/from a distributed-replicate hot tier, the stats counted each file 6 times, with 1 count registering as a success and the other 5 registering as failures.

Version-Release number of selected component (if applicable):
=============================================================
[root@zod glusterfs]# rpm -qa | grep gluster
glusterfs-client-xlators-3.7.4-0.33.git1d02d4b.el7.centos.x86_64
glusterfs-api-3.7.4-0.33.git1d02d4b.el7.centos.x86_64
glusterfs-fuse-3.7.4-0.33.git1d02d4b.el7.centos.x86_64
glusterfs-debuginfo-3.7.4-0.33.git1d02d4b.el7.centos.x86_64
glusterfs-3.7.4-0.33.git1d02d4b.el7.centos.x86_64
glusterfs-server-3.7.4-0.33.git1d02d4b.el7.centos.x86_64
glusterfs-cli-3.7.4-0.33.git1d02d4b.el7.centos.x86_64
glusterfs-libs-3.7.4-0.33.git1d02d4b.el7.centos.x86_64
[root@zod glusterfs]# gluster --version
glusterfs 3.7.4 built on Sep 12 2015 01:35:35
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@zod glusterfs]#

Steps to Reproduce:
====================
1. Create an EC cold volume and attach a distributed-replicate hot tier.
2. Enable CTR, then set the promote/demote frequencies.
3. Create a file, wait for it to be demoted, and then trigger its promotion.

Although each is a single file, every fragment of the file on every brick is treated as a different file, and the promote/demote counters register those extra fragments as failures. Below is the case where I had 3 files (compare the promote/demote numbers with the failures); a command sketch for recreating this setup follows the volume info below.
====================================
[root@zod glusterfs]# gluster v rebal redhat status; gluster v tier redhat status
     Node   Rebalanced-files      size   scanned   failures   skipped        status   run time in secs
---------   ----------------   -------   -------   --------   -------   -----------   ----------------
localhost                  9    0Bytes        31         22         0   in progress            1393.00
   yarrow                  8    0Bytes        36         28         0   in progress            1393.00
volume rebalance: redhat: success:
     Node   Promoted files   Demoted files        Status
---------   --------------   -------------   -----------
localhost                9               0   in progress
   yarrow                0               8   in progress
volume rebalance: redhat: success:
[root@zod glusterfs]# gluster v rebal redhat status; gluster v tier redhat status
     Node   Rebalanced-files      size   scanned   failures   skipped        status   run time in secs
---------   ----------------   -------   -------   --------   -------   -----------   ----------------
localhost                  9    0Bytes        33         24         0   in progress            1701.00
   yarrow                 10    0Bytes        38         28         0   in progress            1701.00
volume rebalance: redhat: success:
     Node   Promoted files   Demoted files        Status
---------   --------------   -------------   -----------
localhost                9               0   in progress
   yarrow                0              10   in progress
volume rebalance: redhat: success:
[root@zod glusterfs]# gluster v info redhat

Volume Name: redhat
Type: Tier
Volume ID: ec61f03a-b9c6-4a43-8aae-a1a3ca65e234
Status: Started
Number of Bricks: 16
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: yarrow:/rhs/brick6/redhat_hot
Brick2: zod:/rhs/brick6/redhat_hot
Brick3: yarrow:/rhs/brick7/redhat_hot
Brick4: zod:/rhs/brick7/redhat_hot
Cold Tier:
Cold Tier Type : Distributed-Disperse
Number of Bricks: 2 x (4 + 2) = 12
Brick5: zod:/rhs/brick1/redhat
Brick6: yarrow:/rhs/brick1/redhat
Brick7: zod:/rhs/brick2/redhat
Brick8: yarrow:/rhs/brick2/redhat
Brick9: zod:/rhs/brick3/redhat
Brick10: yarrow:/rhs/brick3/redhat
Brick11: zod:/rhs/brick4/redhat
Brick12: yarrow:/rhs/brick4/redhat
Brick13: zod:/rhs/brick5/redhat
Brick14: yarrow:/rhs/brick5/redhat
Brick15: yarrow:/rhs/brick6/redhat
Brick16: zod:/rhs/brick6/redhat
Options Reconfigured:
cluster.tier-demote-frequency: 30
cluster.tier-promote-frequency: 50
features.ctr-enabled: on
performance.io-cache: off
performance.quick-read: off
performance.readdir-ahead: on
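For reference, a setup matching the volume info above can be recreated with commands along these lines. This is a sketch, not a transcript from this report: host names, brick paths, and option values are taken from the configuration shown, while the brick ordering and the use of 'force' are illustrative assumptions.

# Cold tier: 2 x (4 + 2) dispersed volume across 12 bricks.
# 'force' is assumed necessary here because a two-node setup places
# several bricks of the same disperse subvolume on the same host.
gluster volume create redhat disperse 6 redundancy 2 \
    zod:/rhs/brick1/redhat yarrow:/rhs/brick1/redhat \
    zod:/rhs/brick2/redhat yarrow:/rhs/brick2/redhat \
    zod:/rhs/brick3/redhat yarrow:/rhs/brick3/redhat \
    zod:/rhs/brick4/redhat yarrow:/rhs/brick4/redhat \
    zod:/rhs/brick5/redhat yarrow:/rhs/brick5/redhat \
    yarrow:/rhs/brick6/redhat zod:/rhs/brick6/redhat force
gluster volume start redhat

# Hot tier: 2 x 2 distributed-replicate.
gluster volume attach-tier redhat replica 2 \
    yarrow:/rhs/brick6/redhat_hot zod:/rhs/brick6/redhat_hot \
    yarrow:/rhs/brick7/redhat_hot zod:/rhs/brick7/redhat_hot

# Enable CTR and set short promote/demote intervals (in seconds)
# so migrations happen quickly enough to observe the counters.
gluster volume set redhat features.ctr-enabled on
gluster volume set redhat cluster.tier-demote-frequency 30
gluster volume set redhat cluster.tier-promote-frequency 50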
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.6, please open a new bug report.

glusterfs-3.7.6 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-November/024359.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
REVIEW: http://review.gluster.org/12884 (tier/tier: Ignoring status of already migrated files) posted (#1) for review on release-3.7 by Joseph Fernandes
REVIEW: http://review.gluster.org/12884 (tier/tier: Ignoring status of already migrated files) posted (#2) for review on release-3.7 by Joseph Fernandes
REVIEW: http://review.gluster.org/12884 (tier/tier: Ignoring status of already migrated files) posted (#3) for review on release-3.7 by Dan Lambright (dlambrig)
COMMIT: http://review.gluster.org/12884 committed in release-3.7 by Dan Lambright (dlambrig)
------
commit ec13d5063763cdee3fe3bb372d6c2bd01734a839
Author: Joseph Fernandes <josferna>
Date:   Thu Nov 26 12:42:17 2015 +0530

    tier/tier: Ignoring status of already migrated files

    Ignore the status of already migrated files and, in the process,
    don't count them.

    Backport of http://review.gluster.org/12758

    > Change-Id: Idba6402508d51a4285ac96742c6edf797ee51b6a
    > BUG: 1276141
    > Signed-off-by: Joseph Fernandes <josferna>
    > Reviewed-on: http://review.gluster.org/12758
    > Tested-by: Gluster Build System <jenkins.com>
    > Reviewed-by: Dan Lambright <dlambrig>
    > Tested-by: Dan Lambright <dlambrig>

    Signed-off-by: Joseph Fernandes <josferna>
    Change-Id: I44b7b965ecbc34159c2233af1a74762fd410dcaf
    BUG: 1262860
    Reviewed-on: http://review.gluster.org/12884
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Dan Lambright <dlambrig>
    Tested-by: Dan Lambright <dlambrig>
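Assuming the patch behaves as its message describes (migration attempts on fragments of an already-migrated file are ignored rather than counted), re-running the status commands from the original report after promoting or demoting the same files should show per-file rather than per-fragment counts:

# With the fix, the failures column should no longer grow by
# (fragments - 1) for every migrated file, and the promoted/demoted
# counts should match the number of files that actually moved.
gluster v rebal redhat status
gluster v tier redhat status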
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.7, please open a new bug report.

glusterfs-3.7.7 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-February/025292.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user