Bug 1447929 - [Tiering]: Setting high and low watermark values to the same level is allowed
Summary: [Tiering]: Setting high and low watermark values to the same level is allowed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.3.0
Assignee: hari gowtham
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On: 1448790
Blocks: 1417151
 
Reported: 2017-05-04 09:13 UTC by Sweta Anandpara
Modified: 2017-09-21 04:41 UTC (History)
5 users (show)

Fixed In Version: glusterfs-3.8.4-25
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1447960 (view as bug list)
Environment:
Last Closed: 2017-09-21 04:41:45 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:2774 0 normal SHIPPED_LIVE glusterfs bug fix and enhancement update 2017-09-21 08:16:29 UTC

Description Sweta Anandpara 2017-05-04 09:13:30 UTC
Description of problem:
======================
In a tiered volume in cache mode, the high and low watermark values govern the promotions and demotions that take place between the cold and hot tiers. The defaults are 75 for the low watermark and 90 for the high watermark.

A check is already in place for attempts to modify the low-watermark value to something higher than the high-watermark value; that errors out, as expected. However, an attempt to set the two watermarks to the exact same value is allowed.
For example, using the 'gluster volume set' command to change 'cluster.watermark-hi' to 75 succeeds, leaving both the high and low watermarks set to 75.

I am not sure whether this is intended, or whether there is a use-case for it. The current behaviour does not appear to harm functionality (at the outset), but its effects are unknown from the testing front.

Version-Release number of selected component (if applicable):
=========================================================
3.8.4-22

How reproducible:
=================
Always


Steps to Reproduce:
==================
1. Create a 2x2 volume, say 'vol1'
2. Attach a 2x2 tier to vol1
3. Note down the default low and high watermark values, using the command 'gluster volume get vol1 all | grep watermark'
4. Set the high watermark to the default value of the low watermark, using the command 'gluster volume set vol1 cluster.watermark-hi 75'


Actual results:
==============
Step 4 succeeds.


Expected results:
================
Step 4 should fail with an error along the lines of 'Setting the high watermark less than or equal to the low watermark is not allowed'.


Additional info:
==================

[root@dhcp47-165 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Tier
Volume ID: 8b736150-4fdd-4f00-9446-4ae89920f63b
Status: Started
Snapshot Count: 0
Number of Bricks: 12
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.47.157:/bricks/brick2/ozone_tier3
Brick2: 10.70.47.162:/bricks/brick2/ozone_tier2
Brick3: 10.70.47.164:/bricks/brick2/ozone_tier1
Brick4: 10.70.47.165:/bricks/brick2/ozone_tier0
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 4 x 2 = 8
Brick5: 10.70.47.165:/bricks/brick0/ozone_0
Brick6: 10.70.47.164:/bricks/brick0/ozone_1
Brick7: 10.70.47.162:/bricks/brick0/ozone_2
Brick8: 10.70.47.157:/bricks/brick0/ozone_3
Brick9: 10.70.47.165:/bricks/brick1/ozone_4
Brick10: 10.70.47.164:/bricks/brick1/ozone_5
Brick11: 10.70.47.162:/bricks/brick1/ozone_6
Brick12: 10.70.47.157:/bricks/brick1/ozone_7
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
features.barrier: enable
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
features.scrub-freq: hourly
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
performance.parallel-readdir: on
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# gluster v get ozone all | grep watermark
cluster.watermark-hi                    90                                      
cluster.watermark-low                   75                                      
[root@dhcp47-165 ~]# gluster v set ozone cluster.watermark-hi 75
volume set: success
[root@dhcp47-165 ~]# gluster v get ozone all | grep watermark
cluster.watermark-hi                    75                                      
cluster.watermark-low                   75                                      
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# gluster v reset ozone cluster.watermark-hi
volume reset: success: reset volume successful
[root@dhcp47-165 ~]# gluster v get ozone all | grep watermark
cluster.watermark-hi                    90                                      
cluster.watermark-low                   75                                      
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# rpm -qa | grep gluster
glusterfs-libs-3.8.4-22.el7rhgs.x86_64
glusterfs-cli-3.8.4-22.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-22.el7rhgs.x86_64
glusterfs-rdma-3.8.4-22.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.1.el7rhgs.noarch
glusterfs-3.8.4-22.el7rhgs.x86_64
glusterfs-api-3.8.4-22.el7rhgs.x86_64
glusterfs-events-3.8.4-22.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
gluster-nagios-addons-0.2.8-1.el7rhgs.x86_64
glusterfs-fuse-3.8.4-22.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-22.el7rhgs.x86_64
glusterfs-server-3.8.4-22.el7rhgs.x86_64
python-gluster-3.8.4-22.el7rhgs.noarch
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# gluster pool ist
unrecognized word: ist (position 1)
[root@dhcp47-165 ~]# gluster pool list
UUID					Hostname                         	State
afa697a0-2cc6-4705-892e-f5ec56a9f9de	dhcp47-164.lab.eng.blr.redhat.com	Connected 
95491d39-d83a-4053-b1d5-682ca7290bd2	dhcp47-162.lab.eng.blr.redhat.com	Connected 
d0955c85-94d0-41ba-aea8-1ffde3575ea5	dhcp47-157.lab.eng.blr.redhat.com	Connected 
834d66eb-fb65-4ea3-949a-e7cb4c198f2b	localhost                        	Connected 
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]# 
[root@dhcp47-165 ~]#

Comment 2 Atin Mukherjee 2017-05-04 10:42:56 UTC
upstream patch : https://review.gluster.org/#/c/17175/

Comment 5 Atin Mukherjee 2017-05-09 14:59:17 UTC
downstream patch : https://code.engineering.redhat.com/gerrit/#/c/105658/

Comment 7 Sweta Anandpara 2017-05-16 10:33:51 UTC
Tested and verified this on the build 3.8.4-25

The high watermark must now be set to an integer strictly greater than the low watermark. Setting the two to the same value errors out, as verified in various situations.

Moving this BZ to verified in 3.3. Detailed logs are pasted below. 

[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# rpm -qa | grep gluster
glusterfs-libs-3.8.4-25.el7rhgs.x86_64
glusterfs-events-3.8.4-25.el7rhgs.x86_64
glusterfs-cli-3.8.4-25.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-25.el7rhgs.x86_64
glusterfs-server-3.8.4-25.el7rhgs.x86_64
glusterfs-rdma-3.8.4-25.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.1.el7rhgs.noarch
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
gluster-nagios-addons-0.2.8-1.el7rhgs.x86_64
glusterfs-api-3.8.4-25.el7rhgs.x86_64
python-gluster-3.8.4-25.el7rhgs.noarch
glusterfs-debuginfo-3.8.4-24.el7rhgs.x86_64
glusterfs-fuse-3.8.4-25.el7rhgs.x86_64
glusterfs-3.8.4-25.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-25.el7rhgs.x86_64
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# gluster peer status
Number of Peers: 5

Hostname: dhcp47-113.lab.eng.blr.redhat.com
Uuid: a0557927-4e5e-4ff7-8dce-94873f867707
State: Peer in Cluster (Connected)

Hostname: dhcp47-114.lab.eng.blr.redhat.com
Uuid: c0dac197-5a4d-4db7-b709-dbf8b8eb0896
State: Peer in Cluster (Connected)

Hostname: dhcp47-115.lab.eng.blr.redhat.com
Uuid: f828fdfa-e08f-4d12-85d8-2121cafcf9d0
State: Peer in Cluster (Connected)

Hostname: dhcp47-116.lab.eng.blr.redhat.com
Uuid: a96e0244-b5ce-4518-895c-8eb453c71ded
State: Peer in Cluster (Connected)

Hostname: dhcp47-117.lab.eng.blr.redhat.com
Uuid: 17eb3cef-17e7-4249-954b-fc19ec608304
State: Peer in Cluster (Connected)
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# gluster v list
disp
disp2
dist
distrep
distrep2
distrep3
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# gluster v info disp
 
Volume Name: disp
Type: Tier
Volume ID: ca8ba15e-1c0e-463c-b041-76bca48b0330
Status: Started
Snapshot Count: 0
Number of Bricks: 10
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 4
Brick1: 10.70.47.115:/bricks/brick4/disp_tier3
Brick2: 10.70.47.114:/bricks/brick4/disp_tier2
Brick3: 10.70.47.113:/bricks/brick4/disp_tier1
Brick4: 10.70.47.121:/bricks/brick4/disp_tier0
Cold Tier:
Cold Tier Type : Disperse
Number of Bricks: 1 x (4 + 2) = 6
Brick5: 10.70.47.121:/bricks/brick3/disp_0
Brick6: 10.70.47.113:/bricks/brick3/disp_1
Brick7: 10.70.47.114:/bricks/brick3/disp_2
Brick8: 10.70.47.115:/bricks/brick3/disp_3
Brick9: 10.70.47.116:/bricks/brick3/disp_4
Brick10: 10.70.47.117:/bricks/brick3/disp_5
Options Reconfigured:
cluster.watermark-low: 75
cluster.watermark-hi: 76
nfs.disable: on
transport.address-family: inet
features.bitrot: on
features.scrub: Active
features.quota: on
features.inode-quota: on
features.quota-deem-statfs: on
features.scrub-freq: hourly
performance.stat-prefetch: on
features.ctr-enabled: on
cluster.tier-mode: cache
cluster.brick-multiplex: disable
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# gluster  v status disp
Status of volume: disp
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.47.115:/bricks/brick4/disp_tier
3                                           49152     0          Y       23451
Brick 10.70.47.114:/bricks/brick4/disp_tier
2                                           49152     0          Y       30898
Brick 10.70.47.113:/bricks/brick4/disp_tier
1                                           49152     0          Y       5934 
Brick 10.70.47.121:/bricks/brick4/disp_tier
0                                           49152     0          Y       5536 
Cold Bricks:
Brick 10.70.47.121:/bricks/brick3/disp_0    49153     0          Y       5539 
Brick 10.70.47.113:/bricks/brick3/disp_1    49153     0          Y       5940 
Brick 10.70.47.114:/bricks/brick3/disp_2    49153     0          Y       30904
Brick 10.70.47.115:/bricks/brick3/disp_3    49153     0          Y       23457
Brick 10.70.47.116:/bricks/brick3/disp_4    49152     0          Y       27748
Brick 10.70.47.117:/bricks/brick3/disp_5    49152     0          Y       9724 
Self-heal Daemon on localhost               N/A       N/A        Y       7034 
Quota Daemon on localhost                   N/A       N/A        Y       7050 
Bitrot Daemon on localhost                  N/A       N/A        Y       7103 
Scrubber Daemon on localhost                N/A       N/A        Y       7156 
Self-heal Daemon on dhcp47-113.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       5850 
Quota Daemon on dhcp47-113.lab.eng.blr.redh
at.com                                      N/A       N/A        Y       5859 
Bitrot Daemon on dhcp47-113.lab.eng.blr.red
hat.com                                     N/A       N/A        Y       5876 
Scrubber Daemon on dhcp47-113.lab.eng.blr.r
edhat.com                                   N/A       N/A        Y       5887 
Self-heal Daemon on dhcp47-115.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       26165
Quota Daemon on dhcp47-115.lab.eng.blr.redh
at.com                                      N/A       N/A        Y       26197
Bitrot Daemon on dhcp47-115.lab.eng.blr.red
hat.com                                     N/A       N/A        Y       26219
Scrubber Daemon on dhcp47-115.lab.eng.blr.r
edhat.com                                   N/A       N/A        Y       26235
Self-heal Daemon on dhcp47-116.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       26425
Quota Daemon on dhcp47-116.lab.eng.blr.redh
at.com                                      N/A       N/A        Y       26459
Bitrot Daemon on dhcp47-116.lab.eng.blr.red
hat.com                                     N/A       N/A        Y       26473
Scrubber Daemon on dhcp47-116.lab.eng.blr.r
edhat.com                                   N/A       N/A        Y       26510
Self-heal Daemon on dhcp47-114.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       30761
Quota Daemon on dhcp47-114.lab.eng.blr.redh
at.com                                      N/A       N/A        Y       30795
Bitrot Daemon on dhcp47-114.lab.eng.blr.red
hat.com                                     N/A       N/A        Y       30815
Scrubber Daemon on dhcp47-114.lab.eng.blr.r
edhat.com                                   N/A       N/A        Y       30831
Self-heal Daemon on dhcp47-117.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       7860 
Quota Daemon on dhcp47-117.lab.eng.blr.redh
at.com                                      N/A       N/A        Y       7883 
Bitrot Daemon on dhcp47-117.lab.eng.blr.red
hat.com                                     N/A       N/A        Y       7913 
Scrubber Daemon on dhcp47-117.lab.eng.blr.r
edhat.com                                   N/A       N/A        Y       7964 
 
Task Status of Volume disp
------------------------------------------------------------------------------
Task                 : Tier migration      
ID                   : 31a36238-7edc-46d5-8eea-3bf8d25a2599
Status               : in progress         
 
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# gluster  v disp get all | grep watermark
[root@dhcp47-121 ~]# gluster v get disp all | grep watermark
cluster.watermark-hi                    75                                      
cluster.watermark-low                   75                                      
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# gluster  v set disp cluster.watermark-hi 90
volume set: success
[root@dhcp47-121 ~]# gluster v get disp all | grep watermark
cluster.watermark-hi                    90                                      
cluster.watermark-low                   75                                      
[root@dhcp47-121 ~]# gluster  v set disp cluster.watermark-hi 75
volume set: failed: lower watermark cannot be equal or exceed upper watermark.
[root@dhcp47-121 ~]# gluster  v set disp cluster.watermark-hi 74
volume set: failed: lower watermark cannot be equal or exceed upper watermark.
[root@dhcp47-121 ~]# gluster  v set disp cluster.watermark-hi 76
volume set: success
[root@dhcp47-121 ~]# gluster v get disp all | grep watermark
cluster.watermark-hi                    76                                      
cluster.watermark-low                   75                                      
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# gluster v set disp watermark-low 77
volume set: failed: lower watermark cannot be equal or exceed upper watermark.
[root@dhcp47-121 ~]# gluster v set disp watermark-low 7
volume set: success
[root@dhcp47-121 ~]# gluster v get disp all | grep watermark
cluster.watermark-hi                    76                                      
cluster.watermark-low                   7                                       
[root@dhcp47-121 ~]# gluster v set disp watermark-low 90
volume set: failed: lower watermark cannot be equal or exceed upper watermark.
[root@dhcp47-121 ~]# gluster v set disp watermark-low 76
volume set: failed: lower watermark cannot be equal or exceed upper watermark.
[root@dhcp47-121 ~]# gluster v set disp watermark-low 75
volume set: success
[root@dhcp47-121 ~]# gluster v set disp watermark-low 75.5
volume set: failed: 75.5 is not a compatible value. watermark-low expects an integer value.
[root@dhcp47-121 ~]# gluster v get disp all | grep watermark
cluster.watermark-hi                    76                                      
cluster.watermark-low                   75                                      
[root@dhcp47-121 ~]# gluster v set disp watermark-low abc
volume set: failed: abc is not a compatible value. watermark-low expects an integer value.
[root@dhcp47-121 ~]# gluster v set disp watermark-low 0
volume set: failed: 0 is not a compatible value. watermark-low expects a percentage from 1-99.
[root@dhcp47-121 ~]# gluster v set disp watermark-hi 100
volume set: failed: 100 is not a compatible value. watermark-hi expects a percentage from 1-99.
[root@dhcp47-121 ~]# gluster v set disp watermark-hi 99.9
volume set: failed: 99.9 is not a compatible value. watermark-hi expects an integer value.
[root@dhcp47-121 ~]# gluster v set disp watermark-hi 98.
volume set: failed: 98. is not a compatible value. watermark-hi expects an integer value.
[root@dhcp47-121 ~]# gluster v set disp watermark-hi 0.9
volume set: failed: 0.9 is not a compatible value. watermark-hi expects an integer value.
[root@dhcp47-121 ~]# gluster v set disp watermark-hi 0x90
volume set: failed: 0x90 is not a compatible value. watermark-hi expects a percentage from 1-99.
[root@dhcp47-121 ~]# gluster v set disp watermark-hi ##
Usage: volume set <VOLNAME> <KEY> <VALUE>
[root@dhcp47-121 ~]# gluster v set disp watermark-hi &*
[1] 10284
-bash: 3.3-packages: command not found
[root@dhcp47-121 ~]# Usage: volume set <VOLNAME> <KEY> <VALUE>

[1]+  Exit 1                  gluster v set disp watermark-hi
[root@dhcp47-121 ~]# gluster v set disp watermark-hi ()
-bash: syntax error near unexpected token `('
[root@dhcp47-121 ~]# gluster v set disp watermark-hi @!
volume set: failed: @! is not a compatible value. watermark-hi expects an integer value.
[root@dhcp47-121 ~]# gluster v get disp all | grep watermark
cluster.watermark-hi                    76                                      
cluster.watermark-low                   75                                      
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# gluster v set disp watermark-hi 75
volume set: failed: lower watermark cannot be equal or exceed upper watermark.
[root@dhcp47-121 ~]#

Comment 9 errata-xmlrpc 2017-09-21 04:41:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774

