Bug 1303895 - promotions not happening when space is created on previously full hot tier
Summary: promotions not happening when space is created on previously full hot tier
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: mainline
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: Nithya Balachandran
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On: 1303894
Blocks: 1306129
 
Reported: 2016-02-02 11:11 UTC by Nithya Balachandran
Modified: 2016-06-16 13:56 UTC
CC List: 6 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of: 1303894
: 1306129 (view as bug list)
Environment:
Last Closed: 2016-06-16 13:56:34 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Description Nithya Balachandran 2016-02-02 11:11:17 UTC
+++ This bug was initially created as a clone of Bug #1303894 +++

Description of problem:
Tests where space is created on a previously full hot tier show erratic promotion behaviour: in some runs promotions happen as expected, but in most runs no promotions happen at all.

Version-Release number of selected component (if applicable):
glusterfs*-3.7.5-17.el7.x86_64
kernel: 3.10.0-327.el7.x86_64 (RHEL 7.2)

How reproducible:
Consistently, with the steps below.

Steps to Reproduce:

1. 
Create a 2x(8+4) base volume (about 15 TB capacity); attach a 2x2 SAS-SSD hot tier (about 360 GB capacity). FUSE-mount the volume on a set of clients.

2. 
Create a directory smf_init in the mount point. Create a 480 GB data set within smf_init, made up of large files of 256 MB each. This fills the hot tier to the maximum allowed.

3.
Create a directory smf_data in the mount point. Create a 32 GB data set within smf_data, made up of small files of 64 KB each. Run rm -rf <mnt-pt>/smf_init; this deletes all the files created in step 2 and frees space on the hot tier.

4. Read the files in <mnt-pt>/smf_data. The time taken for the read phase is longer than the promote frequency of 120 s.

Actual results:
Files read in step 4 are not promoted to the hot tier.

Expected results:
Files should get promoted.

Additional info:

Comment 1 Vijay Bellur 2016-02-02 11:17:46 UTC
REVIEW: http://review.gluster.org/13332 (cluster/tier : Reset watermarks in tier) posted (#1) for review on master by N Balachandran (nbalacha)

Comment 2 Nithya Balachandran 2016-02-02 13:45:54 UTC
Analysis:

The tier volume in question had its hot and cold bricks on different nodes. No node contains both hot and cold bricks, so the tier process on each node will either promote or demote, but not both.

The initial create operations caused the hot tier usage to cross the high watermark configured. This was detected by the tier processes running on the cold tier nodes and tier_conf->watermark_last was set to TIER_WM_HI. 

The files on the hot tier were then deleted and the disk space freed up. The promotions are now expected to start again.

However, as the cold tier nodes do not demote files, tier_check_watermark() is not called and the watermark value is never reset. tier_check_promote() will therefore always fail, in turn preventing tier_check_watermark() from being called.

The cold tier nodes will now never promote files and the hot tier will eventually empty out.

Such a configuration is likely to hit this issue frequently, as hot tier bricks are usually small and likely to cross the high watermark often.
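
To make the stuck state concrete, below is a minimal standalone C sketch (illustrative only; the real tier_check_watermark()/tier_check_promote() logic lives in the tier translator, and every type and function name here is a simplified stand-in) of how a cold-only node that cached TIER_WM_HI never re-evaluates the watermark and therefore never promotes again:

#include <stdio.h>
#include <stdbool.h>

typedef enum { TIER_WM_NONE = 0, TIER_WM_LOW, TIER_WM_MID, TIER_WM_HI } tier_watermark_t;

typedef struct {
    tier_watermark_t watermark_last;      /* last observed watermark state */
    bool             node_has_hot_bricks; /* does this node run any hot bricks? */
} tier_conf_sketch_t;

/* Demote path: only nodes that own hot bricks run this, and in the buggy
 * flow it is the only place the watermark gets re-evaluated. */
static void demote_cycle(tier_conf_sketch_t *conf, int hot_used_pct, int hi_wm_pct)
{
    if (!conf->node_has_hot_bricks)
        return;                    /* cold-only node: never reaches the check */
    conf->watermark_last = (hot_used_pct > hi_wm_pct) ? TIER_WM_HI : TIER_WM_MID;
}

/* Promote path: refuses to promote while the cached watermark says HI. */
static bool promote_allowed(const tier_conf_sketch_t *conf)
{
    return conf->watermark_last != TIER_WM_HI;
}

int main(void)
{
    /* A cold-only node that cached TIER_WM_HI while the hot tier was full. */
    tier_conf_sketch_t cold_node = { .watermark_last = TIER_WM_HI,
                                     .node_has_hot_bricks = false };

    /* The hot tier is later emptied (5% used, high watermark at 90%) ... */
    demote_cycle(&cold_node, 5, 90); /* ...but the cold-only node skips the reset */

    /* ...so promotions stay blocked forever on this node. Prints "no". */
    printf("promotions allowed on cold-only node: %s\n",
           promote_allowed(&cold_node) ? "yes" : "no");
    return 0;
}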

Comment 3 Vijay Bellur 2016-02-02 16:47:23 UTC
REVIEW: http://review.gluster.org/13341 (cluster/tier : Reset watermarks in tier) posted (#1) for review on master by N Balachandran (nbalacha)

Comment 4 Vijay Bellur 2016-02-03 07:53:22 UTC
REVIEW: http://review.gluster.org/13341 (cluster/tier : Reset watermarks in tier) posted (#2) for review on master by N Balachandran (nbalacha)

Comment 5 Vijay Bellur 2016-02-03 17:59:29 UTC
COMMIT: http://review.gluster.org/13341 committed in master by Dan Lambright (dlambrig) 
------
commit 545f4ed2c7195a21210e6a055c27c1b7a115e18c
Author: N Balachandran <nbalacha>
Date:   Tue Feb 2 22:09:45 2016 +0530

    cluster/tier : Reset watermarks in tier
    
    A node which contains only cold bricks and has detected that
    the high watermark value has been breached on the hot tier will
    never reset the watermark to the correct value. The promotion check
    will thus always fail and no promotions will occur from that node.
    
    Change-Id: I0f0804744cd184c263acbea1ee50cd6010a49ec5
    BUG: 1303895
    Signed-off-by: N Balachandran <nbalacha>
    Reviewed-on: http://review.gluster.org/13341
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Dan Lambright <dlambrig>
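
For illustration only (the actual change is the patch above at http://review.gluster.org/13341, and the names below are simplified stand-ins), a standalone C sketch of the direction of the fix: re-evaluate the hot tier usage on every cycle, so a node with only cold bricks refreshes its cached watermark instead of trusting a stale TIER_WM_HI:

#include <stdio.h>
#include <stdbool.h>

typedef enum { TIER_WM_LOW, TIER_WM_MID, TIER_WM_HI } tier_watermark_t;

typedef struct {
    tier_watermark_t watermark_last; /* cached watermark state */
    int              hi_wm_pct;      /* configured high watermark, e.g. 90 */
} tier_conf_sketch_t;

/* With the fix, the watermark is re-evaluated on every migration cycle,
 * not only on nodes that run the demote path. */
static void check_watermark(tier_conf_sketch_t *conf, int hot_used_pct)
{
    conf->watermark_last =
        (hot_used_pct > conf->hi_wm_pct) ? TIER_WM_HI : TIER_WM_MID;
}

static bool promote_allowed(const tier_conf_sketch_t *conf)
{
    return conf->watermark_last != TIER_WM_HI;
}

int main(void)
{
    /* Cold-only node still holding the stale TIER_WM_HI from the full hot tier. */
    tier_conf_sketch_t conf = { .watermark_last = TIER_WM_HI, .hi_wm_pct = 90 };

    check_watermark(&conf, 5); /* hot tier has since been emptied to 5% usage */

    /* The stale value is replaced and promotions can resume. Prints "yes". */
    printf("promotions allowed after reset: %s\n",
           promote_allowed(&conf) ? "yes" : "no");
    return 0;
}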

Comment 6 Vijay Bellur 2016-02-04 13:01:58 UTC
REVIEW: http://review.gluster.org/13357 (cluster/tier : Fixed wrong variable comparison) posted (#1) for review on master by N Balachandran (nbalacha)

Comment 7 Vijay Bellur 2016-02-05 05:36:44 UTC
REVIEW: http://review.gluster.org/13357 (cluster/tier : Fixed wrong variable comparison) posted (#2) for review on master by N Balachandran (nbalacha)

Comment 8 Vijay Bellur 2016-02-11 06:25:46 UTC
COMMIT: http://review.gluster.org/13357 committed in master by Dan Lambright (dlambrig) 
------
commit 444378de64f398c4e19468e83ac31fccc0a94800
Author: N Balachandran <nbalacha>
Date:   Thu Feb 4 18:24:55 2016 +0530

    cluster/tier : Fixed wrong variable comparison
    
    The wrong variable was being checked to determine
    the watermark value.
    
    Change-Id: If4c97fa70b772187f1fcbdf5193e077cb356a8b1
    BUG: 1303895
    Signed-off-by: N Balachandran <nbalacha>
    Reviewed-on: http://review.gluster.org/13357
    Smoke: Gluster Build System <jenkins.com>
    Reviewed-by: Dan Lambright <dlambrig>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
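
Purely as illustration of this class of bug (the exact variables are in the patch above at http://review.gluster.org/13357; all names here are hypothetical), a standalone C sketch where the comparison reads the wrong variable, so the cached watermark value is never refreshed:

#include <stdio.h>

typedef enum { TIER_WM_MID, TIER_WM_HI } tier_watermark_t;

static tier_watermark_t cached_wm = TIER_WM_HI; /* stale cached state */

static void update_watermark(tier_watermark_t new_wm)
{
    /* Buggy form: the comparison read the wrong variable, so the update
     * below could never take effect:
     *     if (cached_wm != cached_wm)
     * Fixed form: compare the cached state against the newly computed value. */
    if (cached_wm != new_wm)
        cached_wm = new_wm;
}

int main(void)
{
    update_watermark(TIER_WM_MID); /* hot tier usage has dropped below HI */
    printf("cached watermark now: %s\n",
           cached_wm == TIER_WM_HI ? "TIER_WM_HI" : "TIER_WM_MID");
    return 0;
}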

Comment 9 Niels de Vos 2016-06-16 13:56:34 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

