Bug 1447960 - [Tiering]: High and low watermark values, when set to the same level, are allowed
Summary: [Tiering]: High and low watermark values, when set to the same level, are allowed
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: hari gowtham
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On:
Blocks: 1448790 1454597
 
Reported: 2017-05-04 10:18 UTC by hari gowtham
Modified: 2017-09-05 17:28 UTC
CC List: 5 users

Fixed In Version: glusterfs-3.12.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1447929
Clones: 1448790 1454597
Environment:
Last Closed: 2017-09-05 17:28:29 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description hari gowtham 2017-05-04 10:18:49 UTC
+++ This bug was initially created as a clone of Bug #1447929 +++

Description of problem:
======================
In a tiered volume in cache mode, high and low watermark values govern the promotions and demotions that take place between cold and hot tiers. The default values are 75 and 90 for low and high respectively. 

A check is currently performed when an attempt is made to set the low-watermark value higher than the high-watermark value; that attempt errors out, as expected. However, when an attempt is made to set both watermarks to the exact same value, it is allowed.
For example, if I use the 'gluster volume set' command to change the 'cluster.watermark-hi' option to 75, it will succeed. This results in both the high and low watermark values being set to 75.

I am not sure if this is intended, or if there is a use-case for it. The current behaviour doesn't seem to harm the functionality (at the outset), but it is an unknown from the testing front.
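
As a minimal illustration of the missing check (hypothetical names; this is a sketch, not the actual glusterd validation code), the pre-fix logic effectively only rejects a low watermark that is strictly greater than the high watermark, so equal values pass:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical sketch of the pre-fix behaviour (made-up names, not the
 * actual glusterd source): only the strict "low > hi" case is rejected,
 * so setting both watermarks to the same value, e.g. 75 and 75, passes. */
int
validate_watermarks(uint32_t wm_low, uint32_t wm_hi)
{
        if (wm_low > wm_hi) {
                fprintf(stderr, "low watermark above high watermark is not allowed\n");
                return -1;
        }
        /* wm_low == wm_hi slips through here */
        return 0;
}

int
main(void)
{
        /* mirrors step 4 of the reproducer: hi lowered to the default low of 75 */
        printf("set watermark-hi=75 with watermark-low=75 -> %s\n",
               validate_watermarks(75, 75) == 0 ? "accepted" : "rejected");
        return 0;
}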

Version-Release number of selected component (if applicable):
=========================================================
3.8.4-22

How reproducible:
=================
Always


Steps to Reproduce:
==================
1. Create a 2x2 volume, say 'vol1'
2. Attach a 2x2 tier to vol1
3. Note down the default low and high watermark values, using the command 'gluster volume get vol1 all | grep watermark'
4. Change the high watermark to the default value of the low watermark, using the command 'gluster volume set vol1 cluster.watermark-hi 75'


Actual results:
==============
Step 4 succeeds.


Expected results:
================
Step 4 should fail and error out with 'Setting high watermark lesser than or equal to low watermark is not allowed'


Additional info:
==================

[root@dhcp47-165 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Tier
Volume ID: 8b736150-4fdd-4f00-9446-4ae89920f63b
Status: Started
Snapshot Count: 0
Number of Bricks: 12
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.47.157:/bricks/brick2/ozone_tier3
Brick2: 10.70.47.162:/bricks/brick2/ozone_tier2
Brick3: 10.70.47.164:/bricks/brick2/ozone_tier1
Brick4: 10.70.47.165:/bricks/brick2/ozone_tier0
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 4 x 2 = 8
Brick5: 10.70.47.165:/bricks/brick0/ozone_0
Brick6: 10.70.47.164:/bricks/brick0/ozone_1
Brick7: 10.70.47.162:/bricks/brick0/ozone_2
Brick8: 10.70.47.157:/bricks/brick0/ozone_3
Brick9: 10.70.47.165:/bricks/brick1/ozone_4
Brick10: 10.70.47.164:/bricks/brick1/ozone_5
Brick11: 10.70.47.162:/bricks/brick1/ozone_6
Brick12: 10.70.47.157:/bricks/brick1/ozone_7
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
features.barrier: enable
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
features.scrub-freq: hourly
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
performance.parallel-readdir: on
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# gluster v get ozone all | grep watermark
cluster.watermark-hi                    90                                      
cluster.watermark-low                   75                                      
[root@dhcp47-165 ~]# gluster v set ozone cluster.watermark-hi 75
volume set: success
[root@dhcp47-165 ~]# gluster v get ozone all | grep watermark
cluster.watermark-hi                    75                                      
cluster.watermark-low                   75                                      
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# gluster v reset ozone cluster.watermark-hi
volume reset: success: reset volume successful
[root@dhcp47-165 ~]# gluster v get ozone all | grep watermark
cluster.watermark-hi                    90                                      
cluster.watermark-low                   75                                      
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# rpm -qa | grep gluster
glusterfs-libs-3.8.4-22.el7rhgs.x86_64
glusterfs-cli-3.8.4-22.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-22.el7rhgs.x86_64
glusterfs-rdma-3.8.4-22.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.1.el7rhgs.noarch
glusterfs-3.8.4-22.el7rhgs.x86_64
glusterfs-api-3.8.4-22.el7rhgs.x86_64
glusterfs-events-3.8.4-22.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
gluster-nagios-addons-0.2.8-1.el7rhgs.x86_64
glusterfs-fuse-3.8.4-22.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-22.el7rhgs.x86_64
glusterfs-server-3.8.4-22.el7rhgs.x86_64
python-gluster-3.8.4-22.el7rhgs.noarch
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# gluster pool ist
unrecognized word: ist (position 1)
[root@dhcp47-165 ~]# gluster pool list
UUID					Hostname                         	State
afa697a0-2cc6-4705-892e-f5ec56a9f9de	dhcp47-164.lab.eng.blr.redhat.com	Connected 
95491d39-d83a-4053-b1d5-682ca7290bd2	dhcp47-162.lab.eng.blr.redhat.com	Connected 
d0955c85-94d0-41ba-aea8-1ffde3575ea5	dhcp47-157.lab.eng.blr.redhat.com	Connected 
834d66eb-fb65-4ea3-949a-e7cb4c198f2b	localhost                        	Connected 
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]#

--- Additional comment from Red Hat Bugzilla Rules Engine on 2017-05-04 05:13:33 EDT ---

This bug is automatically being proposed for the current release of Red Hat Gluster Storage 3 under active development, by setting the release flag 'rhgs-3.3.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

Comment 1 Worker Ant 2017-05-04 10:23:20 UTC
REVIEW: https://review.gluster.org/17175 (Tier: Watermark check for hi and low value being equal) posted (#1) for review on master by hari gowtham (hari.gowtham005)

Comment 2 Worker Ant 2017-05-08 03:58:21 UTC
COMMIT: https://review.gluster.org/17175 committed in master by Atin Mukherjee (amukherj) 
------
commit 2502162502009d4be75e67e49d71f3f38aaa7595
Author: hari gowtham <hgowtham>
Date:   Thu May 4 15:49:59 2017 +0530

    Tier: Watermark check for hi and low value being equal
    
    Problem: Both low and hi watermark can be set to same value
    as the check missed the case for being equal.
    
    Fix: Add the check to both the hi and low values being equal
    along with the low value being higher than hi value.
    
    Change-Id: Ia235163aeefdcb2a059e2e58a5cfd8fb7f1a4c64
    BUG: 1447960
    Signed-off-by: hari gowtham <hgowtham>
    Reviewed-on: https://review.gluster.org/17175
    Smoke: Gluster Build System <jenkins.org>
    Tested-by: hari gowtham <hari.gowtham005>
    Reviewed-by: Atin Mukherjee <amukherj>
    Reviewed-by: Milind Changire <mchangir>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
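
For illustration only, continuing the hypothetical sketch from the description (not the committed glusterd patch itself), the fix amounts to widening the rejected range so that equal watermark values also fail validation:

#include <stdint.h>

/* Hypothetical post-fix sketch (made-up names): equality is now rejected
 * as well, so 'gluster volume set <vol> cluster.watermark-hi 75' would
 * fail while the low watermark is also 75. */
int
validate_watermarks(uint32_t wm_low, uint32_t wm_hi)
{
        if (wm_low >= wm_hi) {
                /* setting high watermark lesser than or equal to the
                 * low watermark is not allowed */
                return -1;
        }
        return 0;
}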

Comment 3 Shyamsundar 2017-09-05 17:28:29 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.0, please open a new bug report.

glusterfs-3.12.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-September/000082.html
[2] https://www.gluster.org/pipermail/gluster-users/

