Bug 1239037 - disperse: Wrong values for "cluster.heal-timeout" could be assigned using CLI
Summary: disperse: Wrong values for "cluster.heal-timeout" could be assigned using CLI
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Ashish Pandey
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1239042 1239043
 
Reported: 2015-07-03 10:37 UTC by Ashish Pandey
Modified: 2016-06-16 13:19 UTC
CC List: 4 users

Fixed In Version: glusterfs-3.8rc2
Clone Of:
: 1239042 1239043
Environment:
Last Closed: 2016-06-16 13:19:46 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Ashish Pandey 2015-07-03 10:37:38 UTC
Description of problem:
Wrong values for the "cluster.heal-timeout" option can be set.
"gluster v set <volname> cluster.heal-timeout <value>" accepts any value: a very large number, 0, or even a negative number.
No wrong behaviour is seen on the volume itself, but "gluster v info" shows that the invalid value has been set for this option.


[root@rhs3 ~]# gluster v info test
 
Volume Name: test
Type: Distributed-Disperse
Volume ID: 40d3a925-284b-47dc-9480-9a0355034d16
Status: Started
Number of Bricks: 2 x (8 + 4) = 24
Transport-type: tcp
Bricks:
Brick1: 10.70.43.118:/brick/test/b1
Brick2: 10.70.43.118:/brick/test/b2
Brick3: 10.70.43.118:/brick/test/b3
Brick4: 10.70.43.118:/brick/test/b4
Brick5: 10.70.43.118:/brick/test/b5
Brick6: 10.70.43.118:/brick/test/b6
Brick7: 10.70.42.64:/brick/test/b7
Brick8: 10.70.42.64:/brick/test/b8
Brick9: 10.70.42.64:/brick/test/b9
Brick10: 10.70.42.64:/brick/test/b10
Brick11: 10.70.42.64:/brick/test/b11
Brick12: 10.70.42.64:/brick/test/b12
Brick13: 10.70.43.118:/brick/test/b11
Brick14: 10.70.43.118:/brick/test/b12
Brick15: 10.70.43.118:/brick/test/b13
Brick16: 10.70.43.118:/brick/test/b14
Brick17: 10.70.43.118:/brick/test/b15
Brick18: 10.70.43.118:/brick/test/b16
Brick19: 10.70.42.64:/brick/test/b17
Brick20: 10.70.42.64:/brick/test/b18
Brick21: 10.70.42.64:/brick/test/b19
Brick22: 10.70.42.64:/brick/test/b20
Brick23: 10.70.42.64:/brick/test/b21
Brick24: 10.70.42.64:/brick/test/b22
Options Reconfigured:
cluster.heal-timeout: -12226666666666666666666666666666666666666666666666666666666666666666666666666666
server.event-threads: 2
client.event-threads: 2
features.uss: on
cluster.disperse-self-heal-daemon: enable
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
[root@rhs3 ~]# 


Version-Release number of selected component (if applicable):
[root@rhs3 ~]# gluster --version
glusterfs 3.7.1 built on Jun 28 2015 11:01:17
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.



How reproducible:
100%

Steps to Reproduce:
1. Create disperse volume.
2. Set cluster.heal-timeout to any unexpected value (a very large number, 0, or a negative number):
[root@rhs3 ~]# gluster v start test
volume start: test: success
[root@rhs3 ~]# gluster volume set test cluster.heal-timeout -1222666666666666666666
volume set: success

3. Run "gluster v info test". It displays the wrong value given on the CLI.

Actual results:
The CLI accepts and sets invalid values for cluster.heal-timeout.

Expected results:

The given value should be validated, and a proper error message should be displayed when an invalid value is provided.
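For illustration only -- the 60-second minimum, the INT_MAX ceiling and the error text below are assumptions, not taken from the actual fix -- a self-contained sketch of the kind of check expected could look like this:

/* Illustrative sketch of a heal-timeout range check; the bounds and the
 * message wording are assumptions, not the actual GlusterFS code. */
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

#define HEAL_TIMEOUT_MIN 60      /* assumed lower bound, in seconds */
#define HEAL_TIMEOUT_MAX INT_MAX /* assumed upper bound */

/* Returns 0 and stores the parsed value on success, -1 on invalid input. */
static int
validate_heal_timeout(const char *value, int *timeout)
{
    char *end = NULL;
    long parsed;

    errno = 0;
    parsed = strtol(value, &end, 10);

    if (errno == ERANGE || end == value || *end != '\0')
        return -1; /* not a number, or does not even fit in a long */

    if (parsed < HEAL_TIMEOUT_MIN || parsed > HEAL_TIMEOUT_MAX)
        return -1; /* outside the accepted range */

    *timeout = (int)parsed;
    return 0;
}

int
main(int argc, char *argv[])
{
    const char *value = (argc > 1) ? argv[1] : "-1222666666666666666666";
    int timeout = 0;

    if (validate_heal_timeout(value, &timeout) != 0) {
        fprintf(stderr,
                "volume set: failed: '%s' is not a valid heal-timeout "
                "(expected an integer between %d and %d)\n",
                value, HEAL_TIMEOUT_MIN, HEAL_TIMEOUT_MAX);
        return 1;
    }

    printf("heal-timeout set to %d seconds\n", timeout);
    return 0;
}

In the GlusterFS code base the same effect is normally achieved declaratively, by giving the option a minimum and maximum in the xlator's option table so that glusterd can reject out-of-range values at "volume set" time; the sketch above only shows the behaviour being asked for.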


Additional info:

Comment 1 Anand Avati 2015-07-08 07:26:43 UTC
REVIEW: http://review.gluster.org/11573 (ec: Implement check for the cluster.heal-timeout values for disperse volume.) posted (#1) for review on master by Ashish Pandey (aspandey)

Comment 2 Niels de Vos 2016-06-16 13:19:46 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

