Bug 1239042

Summary: disperse: Wrong values for "cluster.heal-timeout" could be assigned using CLI
Product: [Community] GlusterFS
Component: disperse
Version: 3.7.2
Status: CLOSED EOL
Severity: unspecified
Priority: unspecified
Reporter: Ashish Pandey <aspandey>
Assignee: Ashish Pandey <aspandey>
CC: aspandey, bugs, mzywusko, pkarampu
Keywords: Triaged
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Story Points: ---
Clone Of: 1239037
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: ---
Cloudforms Team: ---
Last Closed: 2017-03-08 11:03:57 UTC
Bug Depends On: 1239037, 1239043

Description Ashish Pandey 2015-07-03 10:53:12 UTC
+++ This bug was initially created as a clone of Bug #1239037 +++

Description of problem:
Invalid values can be set for the "cluster.heal-timeout" option.
"gluster v set <volname> cluster.heal-timeout <value>" accepts any value: a very large number, 0, or even a negative number.
No incorrect volume behavior has been observed, but "gluster v info" shows the invalid value stored for the option, as in the output below (a sketch of how the option could be bounded follows the output).


[root@rhs3 ~]# gluster v info test
 
Volume Name: test
Type: Distributed-Disperse
Volume ID: 40d3a925-284b-47dc-9480-9a0355034d16
Status: Started
Number of Bricks: 2 x (8 + 4) = 24
Transport-type: tcp
Bricks:
Brick1: 10.70.43.118:/brick/test/b1
Brick2: 10.70.43.118:/brick/test/b2
Brick3: 10.70.43.118:/brick/test/b3
Brick4: 10.70.43.118:/brick/test/b4
Brick5: 10.70.43.118:/brick/test/b5
Brick6: 10.70.43.118:/brick/test/b6
Brick7: 10.70.42.64:/brick/test/b7
Brick8: 10.70.42.64:/brick/test/b8
Brick9: 10.70.42.64:/brick/test/b9
Brick10: 10.70.42.64:/brick/test/b10
Brick11: 10.70.42.64:/brick/test/b11
Brick12: 10.70.42.64:/brick/test/b12
Brick13: 10.70.43.118:/brick/test/b11
Brick14: 10.70.43.118:/brick/test/b12
Brick15: 10.70.43.118:/brick/test/b13
Brick16: 10.70.43.118:/brick/test/b14
Brick17: 10.70.43.118:/brick/test/b15
Brick18: 10.70.43.118:/brick/test/b16
Brick19: 10.70.42.64:/brick/test/b17
Brick20: 10.70.42.64:/brick/test/b18
Brick21: 10.70.42.64:/brick/test/b19
Brick22: 10.70.42.64:/brick/test/b20
Brick23: 10.70.42.64:/brick/test/b21
Brick24: 10.70.42.64:/brick/test/b22
Options Reconfigured:
cluster.heal-timeout: -12226666666666666666666666666666666666666666666666666666666666666666666666666666
server.event-threads: 2
client.event-threads: 2
features.uss: on
cluster.disperse-self-heal-daemon: enable
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
[root@rhs3 ~]# 
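
For context on where a fix could live: GlusterFS xlators declare their tunables in a volume_option_t table, and integer options that set .min and .max are range-checked by the option framework when "gluster volume set" is applied. Below is a minimal sketch of a bounded heal-timeout entry; the field names follow GlusterFS's option-table convention, but the bounds, default, and description text are assumptions for illustration, not the values chosen by any actual fix.

    /* Hedged sketch of a bounded option-table entry (illustrative only). */
    { .key = {"heal-timeout"},
      .type = GF_OPTION_TYPE_INT,
      .min = 60,              /* assumed lower bound: one minute between crawls */
      .max = INT_MAX,         /* assumed upper bound */
      .default_value = "600", /* assumed default */
      .description = "Interval in seconds between successive "
                     "self-heal-daemon crawls."
    },

With .min and .max set, the framework rejects out-of-range values at "volume set" time instead of storing them.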


Version-Release number of selected component (if applicable):
[root@rhs3 ~]# gluster --version
glusterfs 3.7.1 built on Jun 28 2015 11:01:17
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.



How reproducible:
100%

Steps to Reproduce:
1. Create a disperse volume.
2. Set cluster.heal-timeout to an invalid value (a very large, zero, or negative number):
[root@rhs3 ~]# gluster v start test
volume start: test: success
[root@rhs3 ~]# gluster volume set test cluster.heal-timeout -1222666666666666666666
volume set: success

3. Run "gluster v info test". It displays the invalid value given on the CLI.

Actual results:
The CLI accepts and stores invalid values for cluster.heal-timeout.

Expected results:

The given value should be validated, and a proper error message should be displayed when an invalid value is provided.
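
As an illustration of the kind of check expected, the validation amounts to parsing the string as an integer and rejecting anything unparseable or out of range. The following is a self-contained sketch, not GlusterFS code; the bounds and the function name parse_heal_timeout are assumptions.

    /* Standalone sketch of CLI-side range validation (assumed bounds). */
    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define HEAL_TIMEOUT_MIN 60      /* assumed lower bound, in seconds */
    #define HEAL_TIMEOUT_MAX INT_MAX /* assumed upper bound */

    /* Returns 0 and stores the parsed value on success, -1 on rejection. */
    static int
    parse_heal_timeout(const char *str, int *out)
    {
        char *end = NULL;
        long long val;

        errno = 0;
        val = strtoll(str, &end, 10);
        if (end == str || *end != '\0') {
            fprintf(stderr, "'%s' is not an integer\n", str);
            return -1;
        }
        if (errno == ERANGE || val < HEAL_TIMEOUT_MIN ||
            val > HEAL_TIMEOUT_MAX) {
            fprintf(stderr, "'%s' is out of range [%d - %d]\n",
                    str, HEAL_TIMEOUT_MIN, HEAL_TIMEOUT_MAX);
            return -1;
        }
        *out = (int)val;
        return 0;
    }

    int
    main(int argc, char **argv)
    {
        int timeout;

        if (argc != 2)
            return 2;
        if (parse_heal_timeout(argv[1], &timeout) != 0)
            return 1; /* "volume set" should fail with a clear message */
        printf("heal-timeout set to %d seconds\n", timeout);
        return 0;
    }

Run against the value from this bug, the parse is rejected with an out-of-range message instead of the value being stored.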

Additional info:

Comment 1 Kaushal 2017-03-08 11:03:57 UTC
This bug is being closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.