Bug 1315583

Summary: [tiering]: gluster v reset of watermark levels can allow low watermark level to have a higher value than hi watermark level
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: krishnaram Karthick <kramdoss>
Component: tier
Assignee: hari gowtham <hgowtham>
Status: CLOSED ERRATA
QA Contact: Sweta Anandpara <sanandpa>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: amukherj, asrivast, hgowtham, nbalacha, nchilaka, rhinduja, rhs-bugs, sanandpa
Target Milestone: ---
Keywords: ZStream
Target Release: RHGS 3.3.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.8.4-21
Doc Type: Bug Fix
Doc Text:
Cause: When resetting the low watermark, there was no check to verify that the value it was being reset to did not exceed the high watermark, and vice versa. Consequence: The low watermark could be left higher than the high watermark, and the hi watermark lower than the low watermark. Fix: A check was added to prevent this. Result: A watermark reset can no longer leave the watermarks with invalid values.
Story Points: ---
Clone Of:
: 1328342 (view as bug list)
Environment:
Last Closed: 2017-09-21 04:25:52 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1328342, 1417147    

Description krishnaram Karthick 2016-03-08 07:05:05 UTC
Description of problem:

Resetting only one of the watermark levels (either the low or the high watermark) can leave the system with a low watermark that is higher than the high watermark.

For example, when the low and hi watermarks are set to 10% and 30% respectively, and the low watermark is then reset using the 'gluster v reset <vol name> cluster.watermark-low' command, the low and high watermarks end up at 75% and 30% respectively.

Watermark levels should always be reset as a pair, i.e., when a reset of one watermark level is attempted, a warning should be thrown that this will reset both the low and hi watermarks, and both should then be reset together to their defaults of 75% and 90%.

Version-Release number of selected component (if applicable):
glusterfs-3.7.5-19.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a tiered volume
2. Set the watermark levels to 10% and 30%

# gluster v set testvol cluster.watermark-low 10
# gluster v set testvol cluster.watermark-hi 30

3. Reset cluster.watermark-low

# gluster v reset testvol cluster.watermark-low

Actual results:
# gluster v get testvol all | grep 'cluster.watermark'
cluster.watermark-hi                    10                                      
cluster.watermark-low                   75   

Expected results:

This should either be disallowed, or both watermark levels (high and low) should be reset.

Additional info:

Comment 2 hari gowtham 2016-05-24 09:44:43 UTC
patch on master: http://review.gluster.org/#/c/14028/4
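
For reference, the essence of the fix is an ordering check performed before the reset is applied: the watermark being reset must still sit on the correct side of the other watermark once it returns to its default (75 for low, 90 for hi). Below is a minimal, standalone C sketch of that check; the function and constant names are hypothetical illustrations of the behaviour described in this bug, not the glusterd code from the linked patch.

#include <stdio.h>
#include <stdbool.h>
#include <string.h>

#define WATERMARK_LOW_DEFAULT 75
#define WATERMARK_HI_DEFAULT  90

/* Return true when resetting the named watermark to its default keeps
 * low strictly below hi; print the reason otherwise. */
static bool
watermark_reset_is_valid (const char *key, int cur_low, int cur_hi)
{
        if (strcmp (key, "cluster.watermark-low") == 0) {
                if (WATERMARK_LOW_DEFAULT >= cur_hi) {
                        fprintf (stderr, "reset refused: default low (%d) "
                                 "would be >= hi (%d)\n",
                                 WATERMARK_LOW_DEFAULT, cur_hi);
                        return false;
                }
        } else if (strcmp (key, "cluster.watermark-hi") == 0) {
                if (WATERMARK_HI_DEFAULT <= cur_low) {
                        fprintf (stderr, "reset refused: default hi (%d) "
                                 "would be <= low (%d)\n",
                                 WATERMARK_HI_DEFAULT, cur_low);
                        return false;
                }
        }
        return true;
}

int
main (void)
{
        /* The scenario from the description: low=10, hi=30, then reset low. */
        int low = 10, hi = 30;

        if (watermark_reset_is_valid ("cluster.watermark-low", low, hi))
                low = WATERMARK_LOW_DEFAULT;

        printf ("cluster.watermark-low=%d cluster.watermark-hi=%d\n", low, hi);
        return 0;
}

Run as-is, the sketch refuses the reset for the low=10/hi=30 case from the description and leaves both values untouched, which matches the error text seen in the verification logs below.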

Comment 4 Atin Mukherjee 2017-04-04 06:17:50 UTC
downstream patch : https://code.engineering.redhat.com/gerrit/#/c/102301/

Comment 6 Sweta Anandpara 2017-05-03 10:49:44 UTC
Tested and verified this on the build 3.8.4-22

Resetting the hi and low watermark values does throw an error when the low < high rule would be violated. However, we are still able to set the high and low watermarks to the _same_ value using the 'gluster volume set' command. I am in conversation with tiering devel on what the expected behaviour should be; that, however, does not fall within the scope of the fix that has gone in for the present bz.

Logs are pasted below. Moving it to verified in 3.3.0.

[root@dhcp47-165 yum.repos.d]# gluster v info ozone
 
Volume Name: ozone
Type: Tier
Volume ID: 8b736150-4fdd-4f00-9446-4ae89920f63b
Status: Started
Snapshot Count: 0
Number of Bricks: 12
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.47.157:/bricks/brick2/ozone_tier3
Brick2: 10.70.47.162:/bricks/brick2/ozone_tier2
Brick3: 10.70.47.164:/bricks/brick2/ozone_tier1
Brick4: 10.70.47.165:/bricks/brick2/ozone_tier0
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 4 x 2 = 8
Brick5: 10.70.47.165:/bricks/brick0/ozone_0
Brick6: 10.70.47.164:/bricks/brick0/ozone_1
Brick7: 10.70.47.162:/bricks/brick0/ozone_2
Brick8: 10.70.47.157:/bricks/brick0/ozone_3
Brick9: 10.70.47.165:/bricks/brick1/ozone_4
Brick10: 10.70.47.164:/bricks/brick1/ozone_5
Brick11: 10.70.47.162:/bricks/brick1/ozone_6
Brick12: 10.70.47.157:/bricks/brick1/ozone_7
Options Reconfigured:
cluster.watermark-hi: 60
cluster.watermark-low: 60
cluster.tier-mode: cache
features.ctr-enabled: on
features.barrier: enable
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
features.scrub-freq: hourly
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
performance.parallel-readdir: on
[root@dhcp47-165 yum.repos.d]# gluster v get ozone all | grep watermark
cluster.watermark-hi                    60                                      
cluster.watermark-low                   60                                      
[root@dhcp47-165 yum.repos.d]# gluster v reset ozone cluster.watermark-low
volume reset: failed: Resetting low-watermark to default will make it higher or equal to the hi-watermark, which is an invalid configuration state. Please raise the hi-watermark first to the desired value and then reset the low-watermark.
[root@dhcp47-165 yum.repos.d]# gluster v reset ozone cluster.watermark-hi
volume reset: success: reset volume successful
[root@dhcp47-165 yum.repos.d]# gluster v get ozone all | grep watermark
cluster.watermark-hi                    90                                      
cluster.watermark-low                   60                                      
[root@dhcp47-165 yum.repos.d]# gluster v reset ozone cluster.watermark-low
volume reset: success: reset volume successful
[root@dhcp47-165 yum.repos.d]# gluster v get ozone all | grep watermark
cluster.watermark-hi                    90                                      
cluster.watermark-low                   75                                      
[root@dhcp47-165 yum.repos.d]# gluster v set ozone cluster.watermark-low 10
volume set: success
[root@dhcp47-165 yum.repos.d]# gluster v set ozone cluster.watermark-hi 10
volume set: success
[root@dhcp47-165 yum.repos.d]# gluster v get ozone all | grep watermark
cluster.watermark-hi                    10                                      
cluster.watermark-low                   10                                      
[root@dhcp47-165 yum.repos.d]# gluster v reset cluster.watermark-low
volume reset: failed: Volume cluster.watermark-low does not exist
[root@dhcp47-165 yum.repos.d]# gluster v reset disp cluster.watermark-low
volume reset: success: reset volume successful
[root@dhcp47-165 yum.repos.d]# gluster v reset ozone  cluster.watermark-low
volume reset: failed: Resetting low-watermark to default will make it higher or equal to the hi-watermark, which is an invalid configuration state. Please raise the hi-watermark first to the desired value and then reset the low-watermark.
[root@dhcp47-165 yum.repos.d]# gluster v reset ozone  cluster.watermark-hi
volume reset: success: reset volume successful
[root@dhcp47-165 yum.repos.d]# gluster v reset ozone  cluster.watermark-low
volume reset: success: reset volume successful
[root@dhcp47-165 yum.repos.d]# gluster v set ozone cluster.watermark-low 95
volume set: failed: lower watermark cannot exceed upper watermark.
[root@dhcp47-165 yum.repos.d]# gluster v set ozone cluster.watermark-hi 65
volume set: failed: lower watermark cannot exceed upper watermark.
[root@dhcp47-165 yum.repos.d]# gluster v set ozone cluster.watermark-hi 95
volume set: success
[root@dhcp47-165 yum.repos.d]# gluster v set ozone cluster.watermark-low 92
volume set: success
[root@dhcp47-165 yum.repos.d]# gluster v reset ozone cluster.watermark-hi
volume reset: failed: Resetting hi-watermark to default will make it lower or equal to the low-watermark, which is an invalid configuration state. Please lower the low-watermark first to the desired value and then reset the hi-watermark.
[root@dhcp47-165 yum.repos.d]# 
[root@dhcp47-165 yum.repos.d]# 
[root@dhcp47-165 yum.repos.d]# gluster v reset cluster.watermark-low
volume reset: failed: Volume cluster.watermark-low does not exist
[root@dhcp47-165 yum.repos.d]# gluster v reset disp cluster.watermark-low
volume reset: success: reset volume successful
[root@dhcp47-165 yum.repos.d]# gluster v reset ozone  cluster.watermark-low
volume reset: success: reset volume successful
[root@dhcp47-165 yum.repos.d]# gluster v reset ozone  cluster.watermark-hi
volume reset: success: reset volume successful
[root@dhcp47-165 yum.repos.d]# gluster v get ozone all | grep watermark
cluster.watermark-hi                    90                                      
cluster.watermark-low                   75                                      
[root@dhcp47-165 yum.repos.d]#

Comment 8 errata-xmlrpc 2017-09-21 04:25:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774
