Bug 1175755 - SNAPSHOT[USS]: gluster volume set for uss does not check any boundaries
Summary: SNAPSHOT[USS]: gluster volume set for uss does not check any boundaries
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.6.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard: USS
Depends On: 1111534 1111554
Blocks: glusterfs-3.6.2
 
Reported: 2014-12-18 14:07 UTC by Vijaikumar Mallikarjuna
Modified: 2016-05-11 22:47 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.6.2
Doc Type: Bug Fix
Doc Text:
Clone Of: 1111554
Environment:
Last Closed: 2015-02-11 09:11:23 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Anand Avati 2014-12-19 07:20:12 UTC
REVIEW: http://review.gluster.org/9304 (mgmt/glusterd: Validate the options of uss) posted (#1) for review on release-3.6 by Sachin Pandit (spandit)

Comment 2 Anand Avati 2014-12-19 09:00:04 UTC
REVIEW: http://review.gluster.org/9304 (mgmt/glusterd: Validate the options of uss.) posted (#2) for review on release-3.6 by Sachin Pandit (spandit)

Comment 3 Anand Avati 2014-12-24 07:17:38 UTC
REVIEW: http://review.gluster.org/9304 (mgmt/glusterd: Validate the options of uss) posted (#3) for review on release-3.6 by Sachin Pandit (spandit)

Comment 4 Anand Avati 2014-12-24 11:36:43 UTC
COMMIT: http://review.gluster.org/9304 committed in release-3.6 by Raghavendra Bhat (raghavendra) 
------
commit 2acbc361698b4cd55211011b93a1b4bba9ff72f0
Author: vmallika <vmallika>
Date:   Fri Nov 14 12:06:39 2014 +0530

    mgmt/glusterd: Validate the options of uss
    
    Change-Id: Id13dc4cd3f5246446a9dfeabc9caa52f91477524
    BUG: 1175755
    Signed-off-by: Varun Shastry <vshastry>
    Signed-off-by: vmallika <vmallika>
    Reviewed-on: http://review.gluster.org/8133
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Krishnan Parthasarathi <kparthas>
    Signed-off-by: Sachin Pandit <spandit>
    Reviewed-on: http://review.gluster.org/9304
    Reviewed-by: Raghavendra Bhat <raghavendra>

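For context, below is a minimal C sketch, not the actual patch at http://review.gluster.org/9304, of the kind of validation the fix introduces for features.uss: the option is boolean, so only boolean-like strings should be accepted and anything else should make 'volume set' fail with the list of acceptable values. The parse_boolean() helper and the exact set of accepted spellings are illustrative assumptions, not glusterd's real API.

/* Illustrative sketch only -- not the patch from review.gluster.org/9304.
 * Shows the kind of check an option-validation hook for features.uss is
 * expected to perform: accept only boolean-like values, reject the rest. */
#include <stdio.h>
#include <strings.h>

/* Hypothetical helper: returns 0 and fills *val for a valid boolean
 * string, -1 otherwise. Accepted spellings are assumed for illustration. */
static int
parse_boolean(const char *str, int *val)
{
    static const char *on[]  = { "on", "yes", "true", "enable", "1", NULL };
    static const char *off[] = { "off", "no", "false", "disable", "0", NULL };

    for (int i = 0; on[i]; i++)
        if (strcasecmp(str, on[i]) == 0) { *val = 1; return 0; }
    for (int i = 0; off[i]; i++)
        if (strcasecmp(str, off[i]) == 0) { *val = 0; return 0; }
    return -1;
}

int
main(void)
{
    /* Values taken from the reproduction transcript below. */
    const char *inputs[] = { "enable", "sfdddsdds", "-1", "off" };
    for (size_t i = 0; i < sizeof(inputs) / sizeof(inputs[0]); i++) {
        int val;
        if (parse_boolean(inputs[i], &val) == 0)
            printf("features.uss = %s -> accepted (%d)\n", inputs[i], val);
        else
            printf("features.uss = %s -> rejected: not a boolean\n", inputs[i]);
    }
    return 0;
}

With a check of this kind in place on release-3.6, values such as "sfdddsdds" or "-1" in the transcript below are expected to be rejected instead of being silently stored in the volume options.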
Comment 5 Raghavendra Bhat 2015-01-06 10:30:02 UTC
Description of problem:
gluster volume set <vol-name> features.uss does not check any boundary value.

How reproducible:
100%

Steps to Reproduce:
1. gluster volume set <vol-name> features.uss <invalid-value>

Actual results:
It does not give any error; the command reports success and the invalid value is stored.

Expected results:
If an invalid value is passed, the 'gluster volume set' command should fail and list the acceptable values.


Additional info:
[root@snapshot09 ~]# gluster volume info
 
Volume Name: newvol
Type: Distributed-Replicate
Volume ID: cadc8635-d42b-4715-8447-d4fed9537f6a
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: hostname1:/brick2/newvol
Brick2: hostname2:/brick2/newvol
Brick3: hostname3:/brick2/newvol
Brick4: hostname4:/brick2/newvol
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
features.uss: enable
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 723f20ad-99a2-4d85-8942-7cec20944676
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: hostname1:/brick1/testvol
Brick2: hostname2:/brick1/testvol
Brick3: hostname3:/brick1/testvol
Brick4: hostname4:/brick1/testvol
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
features.uss: enable
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
[root@snapshot09 ~]# gluster volume set features.uss sfdddsdds
Usage: volume set <VOLNAME> <KEY> <VALUE>
[root@snapshot09 ~]# gluster volume set testvol features.uss sfdddsdds
volume set: success
[root@snapshot09 ~]# gluster volume info
 
Volume Name: newvol
Type: Distributed-Replicate
Volume ID: cadc8635-d42b-4715-8447-d4fed9537f6a
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: hostname1:/brick2/newvol
Brick2: hostname2:/brick2/newvol
Brick3: hostname3:/brick2/newvol
Brick4: hostname4:/brick2/newvol
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
features.uss: enable
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
 
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 723f20ad-99a2-4d85-8942-7cec20944676
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: hostname1:/brick1/testvol
Brick2: hostname2:/brick1/testvol
Brick3: hostname3:/brick1/testvol
Brick4: hostname4:/brick1/testvol
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
features.uss: sfdddsdds
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
[root@snapshot09 ~]# gluster volume set testvol features.uss -1
volume set: success

[root@snapshot09 ~]# gluster volume info
 
Volume Name: newvol
Type: Distributed-Replicate
Volume ID: cadc8635-d42b-4715-8447-d4fed9537f6a
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: hostname1:/brick2/newvol
Brick2: hostname2:/brick2/newvol
Brick3: hostname3:/brick2/newvol
Brick4: hostname4:/brick2/newvol
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
features.uss: enable
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
 
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 723f20ad-99a2-4d85-8942-7cec20944676
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: hostname1:/brick1/testvol
Brick2: hostname2:/brick1/testvol
Brick3: hostname3:/brick1/testvol
Brick4: hostname4:/brick1/testvol
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
features.uss: -1
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
[root@snapshot09 ~]# gluster volume set testvol features.uss @@@@@@@@@@@@@@@#$#$ns
volume set: success
[root@snapshot09 ~]# gluster volume info
 
Volume Name: newvol
Type: Distributed-Replicate
Volume ID: cadc8635-d42b-4715-8447-d4fed9537f6a
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: hostname1:/brick2/newvol
Brick2: hostname2:/brick2/newvol
Brick3: hostname3:/brick2/newvol
Brick4: hostname4:/brick2/newvol
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
features.uss: enable
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 723f20ad-99a2-4d85-8942-7cec20944676
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: hostname1:/brick1/testvol
Brick2: hostname2:/brick1/testvol
Brick3: hostname3:/brick1/testvol
Brick4: hostname4:/brick1/testvol
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
features.uss: @@@@@@@@@@@@@@@#0
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
[root@snapshot09 ~]# 

[root@snapshot09 ~]# gluster volume set testvol features.uss @@@@@@@@@@@@@@@#$#$ns
volume set: success
[root@snapshot09 ~]# echo $?
0
[root@snapshot09 ~]#

Comment 6 Raghavendra Bhat 2015-02-11 09:11:23 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.6.2, please reopen this bug report.

glusterfs-3.6.2 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should already be available or will become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

The fix for this bug is likely to be included in all future GlusterFS releases, i.e., releases later than 3.6.2.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/5978
[2] http://news.gmane.org/gmane.comp.file-systems.gluster.user
[3] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137

