Bug 1306129 - promotions not happening when space is created on previously full hot tier
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.7
Hardware: x86_64 Linux
Priority: high  Severity: high
Assigned To: bugs@gluster.org
Keywords: Triaged
Depends On: 1303894 1303895
Blocks: glusterfs-3.7.9
 
Reported: 2016-02-10 00:50 EST by Nithya Balachandran
Modified: 2016-04-19 03:24 EDT (History)
6 users

See Also:
Fixed In Version: glusterfs-3.7.9
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1303895
Environment:
Last Closed: 2016-04-19 03:23:04 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Description Nithya Balachandran 2016-02-10 00:50:52 EST
+++ This bug was initially created as a clone of Bug #1303895 +++

+++ This bug was initially created as a clone of Bug #1303894 +++

Description of problem:
Tests where space is created on a previously full hot tier show erratic promotion behaviour. In some runs, promotions happen as expected; in most runs, no promotions happen at all.

Version-Release number of selected component (if applicable):
glusterfs*-3.7.5-17.el7.x86_64
kernel: 3.10.0-327.el7.x86_64 (RHEL 7.2)

How reproducible:
Consistently, with the steps below.

Steps to Reproduce:

1. 
Create a 2x(8+4) base volume (about 15TB capacity); attach a 2x2 SAS-SSD hot tier (about 360GB capacity). FUSE-mount the volume on a set of clients.

2. 
Create a directory smf_init in the mount point. Create a 480GB data set within smf_init, of large files each 256MB in size. This fills the hot tier to the maximum allowed.

3.
Create a directory smf_data in the mount point. Create a 32GB data set within smf_data, of small files each 64KB in size. Then run rm -rf <mnt-pt>/smf_init; this deletes all files created in step 2 and frees space on the hot tier.

4. Read files in the directory <mnt-pt>/smf_data. The time taken for the read phase is longer than the promote frequency of 120s.

Actual results:
Files read in step 4 are not promoted to the hot tier.

Expected results:
Files should get promoted.

Additional info:

--- Additional comment from Vijay Bellur on 2016-02-02 06:17:46 EST ---

REVIEW: http://review.gluster.org/13332 (cluster/tier : Reset watermarks in tier) posted (#1) for review on master by N Balachandran (nbalacha@redhat.com)

--- Additional comment from Nithya Balachandran on 2016-02-02 08:45:54 EST ---

Analysis:

The tier volume in question runs its hot and cold bricks on different nodes. Since no node contains both hot and cold bricks, the tier process on each node will either promote or demote, but not both.

The initial create operations caused hot tier usage to cross the configured high watermark. The tier processes running on the cold tier nodes detected this and set tier_conf->watermark_last to TIER_WM_HI.

The files on the hot tier were then deleted, freeing the disk space. Promotions were now expected to resume.

However, as the cold tier nodes do not demote files, tier_check_watermark is never called on them and the watermark value is never reset. As a result, tier_check_promote will always fail on these nodes.

The cold tier nodes will now never promote files and the hot tier will eventually empty out.

Such a configuration is likely to hit this issue frequently, as hot tier bricks are usually small and likely to cross the high watermark often.
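The failure mode above can be sketched in miniature. This is a simplified, hypothetical model, not the actual GlusterFS source: the function and field names mirror the real ones (tier_check_watermark, tier_conf->watermark_last), but the bodies and signatures are illustrative stand-ins.

```c
/* Hypothetical model of the stale-watermark bug described above. */
typedef enum { TIER_WM_NONE, TIER_WM_LOW, TIER_WM_MID, TIER_WM_HI } tier_wm_t;

typedef struct {
    tier_wm_t watermark_last; /* last watermark level observed */
    int       has_hot_brick;  /* nonzero if this node runs a hot brick */
} tier_conf_t;

/* Re-reads hot-tier usage and resets watermark_last accordingly. */
void tier_check_watermark(tier_conf_t *conf, int used_pct, int hi_pct, int lo_pct)
{
    if (used_pct > hi_pct)
        conf->watermark_last = TIER_WM_HI;
    else if (used_pct < lo_pct)
        conf->watermark_last = TIER_WM_LOW;
    else
        conf->watermark_last = TIER_WM_MID;
}

/* Pre-fix promote check: only the demote path (run on hot-brick nodes)
 * refreshes the watermark, so a cold-only node keeps the stale value. */
int tier_can_promote(tier_conf_t *conf, int used_pct, int hi_pct, int lo_pct)
{
    if (conf->has_hot_brick)
        tier_check_watermark(conf, used_pct, hi_pct, lo_pct);
    return conf->watermark_last != TIER_WM_HI;
}
```

In this model, a cold-only node that once recorded TIER_WM_HI keeps failing the promote check forever, even after the hot tier is emptied; the fix in review 13341 amounts to refreshing the watermark on the promote path as well.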

--- Additional comment from Vijay Bellur on 2016-02-02 11:47:23 EST ---

REVIEW: http://review.gluster.org/13341 (cluster/tier : Reset watermarks in tier) posted (#1) for review on master by N Balachandran (nbalacha@redhat.com)

--- Additional comment from Vijay Bellur on 2016-02-03 02:53:22 EST ---

REVIEW: http://review.gluster.org/13341 (cluster/tier : Reset watermarks in tier) posted (#2) for review on master by N Balachandran (nbalacha@redhat.com)

--- Additional comment from Vijay Bellur on 2016-02-03 12:59:29 EST ---

COMMIT: http://review.gluster.org/13341 committed in master by Dan Lambright (dlambrig@redhat.com) 
------
commit 545f4ed2c7195a21210e6a055c27c1b7a115e18c
Author: N Balachandran <nbalacha@redhat.com>
Date:   Tue Feb 2 22:09:45 2016 +0530

    cluster/tier : Reset watermarks in tier
    
    A node which contains only cold bricks and has detected that
    the high watermark value has been breached on the hot tier will
    never reset the watermark to the correct value. The promotion check
    will thus always fail and no promotions will occur from that node.
    
    Change-Id: I0f0804744cd184c263acbea1ee50cd6010a49ec5
    BUG: 1303895
    Signed-off-by: N Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/13341
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>

--- Additional comment from Vijay Bellur on 2016-02-04 08:01:58 EST ---

REVIEW: http://review.gluster.org/13357 (cluster/tier : Fixed wrong variable comparison) posted (#1) for review on master by N Balachandran (nbalacha@redhat.com)

--- Additional comment from Vijay Bellur on 2016-02-05 00:36:44 EST ---

REVIEW: http://review.gluster.org/13357 (cluster/tier : Fixed wrong variable comparison) posted (#2) for review on master by N Balachandran (nbalacha@redhat.com)
Comment 1 Vijay Bellur 2016-02-10 00:52:05 EST
REVIEW: http://review.gluster.org/13411 (cluster/tier : Reset watermarks in tier) posted (#1) for review on release-3.7 by N Balachandran (nbalacha@redhat.com)
Comment 2 Vijay Bellur 2016-02-19 10:28:41 EST
COMMIT: http://review.gluster.org/13411 committed in release-3.7 by Dan Lambright (dlambrig@redhat.com) 
------
commit 8856c9f475bc8cf0581d56227497f10eb5ddb0be
Author: N Balachandran <nbalacha@redhat.com>
Date:   Wed Feb 10 10:58:11 2016 +0530

    cluster/tier : Reset watermarks in tier
    
    A node which contains only cold bricks and has detected that
    the high watermark value has been breached on the hot tier will
    never reset the watermark to the correct value. The promotion check
    will thus always fail and no promotions will occur from that node.
    
    > Change-Id: I0f0804744cd184c263acbea1ee50cd6010a49ec5
    > BUG: 1303895
    > Signed-off-by: N Balachandran <nbalacha@redhat.com>
    > Reviewed-on: http://review.gluster.org/13341
    > Smoke: Gluster Build System <jenkins@build.gluster.com>
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    > Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    (cherry picked from commit 545f4ed2c7195a21210e6a055c27c1b7a115e18c)
    
    Change-Id: Iba3aa9c57cf5828ab87140c2c8257146a8772836
    BUG: 1306129
    Signed-off-by: N Balachandran <nbalacha@redhat.com>
    Reviewed-on: http://review.gluster.org/13411
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
Comment 3 Vijay Bellur 2016-02-25 02:19:30 EST
REVIEW: http://review.gluster.org/13516 (cluster/tier : Fixed wrong variable comparison) posted (#1) for review on release-3.7 by N Balachandran (nbalacha@redhat.com)
Comment 4 Vijay Bellur 2016-02-27 13:48:17 EST
COMMIT: http://review.gluster.org/13516 committed in release-3.7 by Dan Lambright (dlambrig@redhat.com) 
------
commit 1dce0ff0c34b86da04862b1efe0221960e6911a8
Author: N Balachandran <nbalacha@redhat.com>
Date:   Thu Feb 25 12:44:24 2016 +0530

    cluster/tier : Fixed wrong variable comparison
    
    The wrong variable was being checked to determine
    the watermark value.
    
    > Change-Id: If4c97fa70b772187f1fcbdf5193e077cb356a8b1
    > BUG: 1303895
    > Signed-off-by: N Balachandran <nbalacha@redhat.com>
    > Reviewed-on: http://review.gluster.org/13357
    > Smoke: Gluster Build System <jenkins@build.gluster.com>
    > Reviewed-by: Dan Lambright <dlambrig@redhat.com>
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    Signed-off-by: N Balachandran <nbalacha@redhat.com>
    
    Change-Id: I0a98e0efbc093a727912107038477239e6d85765
    BUG: 1306129
    Reviewed-on: http://review.gluster.org/13516
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: mohammed rafi  kc <rkavunga@redhat.com>
    Tested-by: mohammed rafi  kc <rkavunga@redhat.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Dan Lambright <dlambrig@redhat.com>
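For illustration only (not the actual GlusterFS code), the "wrong variable comparison" class of bug fixed above typically means gating on a configured threshold instead of the freshly measured usage; a correct classifier compares the measured value against the thresholds:

```c
/* Hypothetical sketch: classify hot-tier usage against watermarks.
 * The bug class is comparing the wrong variable, e.g. testing the
 * threshold hi_pct itself rather than the measured used_pct. */
typedef enum { WM_LOW, WM_MID, WM_HI } wm_level_t;

wm_level_t classify_usage(int used_pct, int hi_pct, int lo_pct)
{
    if (used_pct > hi_pct)   /* correct: compare measured usage */
        return WM_HI;
    if (used_pct < lo_pct)
        return WM_LOW;
    return WM_MID;
}
```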
Comment 5 Kaushal 2016-04-19 03:23:04 EDT
This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed with glusterfs-3.7.9, please open a new bug report.

glusterfs-3.7.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-March/025922.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
