Bug 1111554 - SNAPSHOT[USS]: gluster volume set for uss does not check any boundaries
Summary: SNAPSHOT[USS]: gluster volume set for uss does not check any boundaries
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Nagaprasad Sathyanarayana
QA Contact:
URL:
Whiteboard: USS
Depends On: 1111534
Blocks: 1175755
 
Reported: 2014-06-20 10:27 UTC by vpshastry
Modified: 2016-02-18 00:20 UTC
CC: 6 users

Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 1111534
Cloned As: 1175755
Environment:
Last Closed: 2015-05-14 17:26:01 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description vpshastry 2014-06-20 10:27:41 UTC
+++ This bug was initially created as a clone of Bug #1111534 +++

Description of problem:
gluster volume set <vol-name> features.uss does not validate the value passed to it.

How reproducible:
100%

Steps to Reproduce:
1. gluster volume set <vol-name> features.uss <invalid-value>

Actual results:
It does not give any error; the command reports success and stores the invalid value.

Expected results:
If an invalid value is passed, the 'gluster volume set' command should fail and list the acceptable values.
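
For context, features.uss is a boolean option (the volume info below shows it set to "enable"), so validation should only accept boolean strings. Illustrative usage, assuming the fix is in place:

# valid: boolean values only
gluster volume set <vol-name> features.uss enable
gluster volume set <vol-name> features.uss disable

# invalid: should fail and print the acceptable values
gluster volume set <vol-name> features.uss <invalid-value>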


Additional info:
[root@snapshot09 ~]# gluster volume info
 
Volume Name: newvol
Type: Distributed-Replicate
Volume ID: cadc8635-d42b-4715-8447-d4fed9537f6a
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.44.62:/brick2/newvol
Brick2: 10.70.44.63:/brick2/newvol
Brick3: 10.70.44.64:/brick2/newvol
Brick4: 10.70.44.65:/brick2/newvol
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
features.uss: enable
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
 
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 723f20ad-99a2-4d85-8942-7cec20944676
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.44.62:/brick1/testvol
Brick2: 10.70.44.63:/brick1/testvol
Brick3: 10.70.44.64:/brick1/testvol
Brick4: 10.70.44.65:/brick1/testvol
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
features.uss: enable
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
[root@snapshot09 ~]# gluster volume set features.uss sfdddsdds
Usage: volume set <VOLNAME> <KEY> <VALUE>
[root@snapshot09 ~]# gluster volume set testvol features.uss sfdddsdds
volume set: success
[root@snapshot09 ~]# gluster volume info
 
Volume Name: newvol
Type: Distributed-Replicate
Volume ID: cadc8635-d42b-4715-8447-d4fed9537f6a
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.44.62:/brick2/newvol
Brick2: 10.70.44.63:/brick2/newvol
Brick3: 10.70.44.64:/brick2/newvol
Brick4: 10.70.44.65:/brick2/newvol
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
features.uss: enable
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
 
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 723f20ad-99a2-4d85-8942-7cec20944676
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.44.62:/brick1/testvol
Brick2: 10.70.44.63:/brick1/testvol
Brick3: 10.70.44.64:/brick1/testvol
Brick4: 10.70.44.65:/brick1/testvol
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
features.uss: sfdddsdds
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
[root@snapshot09 ~]# gluster volume set testvol features.uss -1
volume set: success
[root@snapshot09 ~]# gluster volume info
 
Volume Name: newvol
Type: Distributed-Replicate
Volume ID: cadc8635-d42b-4715-8447-d4fed9537f6a
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.44.62:/brick2/newvol
Brick2: 10.70.44.63:/brick2/newvol
Brick3: 10.70.44.64:/brick2/newvol
Brick4: 10.70.44.65:/brick2/newvol
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
features.uss: enable
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
 
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 723f20ad-99a2-4d85-8942-7cec20944676
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.44.62:/brick1/testvol
Brick2: 10.70.44.63:/brick1/testvol
Brick3: 10.70.44.64:/brick1/testvol
Brick4: 10.70.44.65:/brick1/testvol
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
features.uss: -1
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
[root@snapshot09 ~]# gluster volume set testvol features.uss @@@@@@@@@@@@@@@#$#$ns
volume set: success
[root@snapshot09 ~]# gluster volume info
 
Volume Name: newvol
Type: Distributed-Replicate
Volume ID: cadc8635-d42b-4715-8447-d4fed9537f6a
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.44.62:/brick2/newvol
Brick2: 10.70.44.63:/brick2/newvol
Brick3: 10.70.44.64:/brick2/newvol
Brick4: 10.70.44.65:/brick2/newvol
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
features.uss: enable
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
 
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 723f20ad-99a2-4d85-8942-7cec20944676
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.44.62:/brick1/testvol
Brick2: 10.70.44.63:/brick1/testvol
Brick3: 10.70.44.64:/brick1/testvol
Brick4: 10.70.44.65:/brick1/testvol
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
features.uss: @@@@@@@@@@@@@@@#0
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
[root@snapshot09 ~]# 

[root@snapshot09 ~]# gluster volume set testvol features.uss @@@@@@@@@@@@@@@#$#$ns
volume set: success
[root@snapshot09 ~]# echo $?
0
[root@snapshot09 ~]#

Comment 1 Anand Avati 2014-06-20 11:59:03 UTC
REVIEW: http://review.gluster.org/8133 (mgmt/glusterd: Validate the options of uss) posted (#1) for review on master by Varun Shastry (vshastry)

Comment 2 Anand Avati 2014-06-23 06:49:10 UTC
REVIEW: http://review.gluster.org/8133 (mgmt/glusterd: Validate the options of uss) posted (#2) for review on master by Varun Shastry (vshastry)

Comment 5 Anand Avati 2014-11-14 06:37:40 UTC
REVIEW: http://review.gluster.org/8133 (mgmt/glusterd: Validate the options of uss) posted (#3) for review on master by Vijaikumar Mallikarjuna (vmallika)

Comment 6 Anand Avati 2014-11-14 11:38:03 UTC
COMMIT: http://review.gluster.org/8133 committed in master by Krishnan Parthasarathi (kparthas) 
------
commit c3c28ad86be6feb0b148df4681da432047dc0bc3
Author: vmallika <vmallika>
Date:   Fri Nov 14 12:06:39 2014 +0530

    mgmt/glusterd: Validate the options of uss
    
    Change-Id: Id13dc4cd3f5246446a9dfeabc9caa52f91477524
    BUG: 1111554
    Signed-off-by: Varun Shastry <vshastry>
    Signed-off-by: vmallika <vmallika>
    Reviewed-on: http://review.gluster.org/8133
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Krishnan Parthasarathi <kparthas>
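
The fix wires a validation function for the uss option into glusterd's volume option handling, so a bad value is rejected while the set request is being staged rather than silently stored. Below is a minimal, self-contained sketch of that idea; the type names, the helper (modeled on libglusterfs' gf_string2boolean), and the error text are illustrative assumptions, not the exact code merged in review 8133:

#include <stdio.h>
#include <string.h>
#include <strings.h>

/* Illustrative boolean type; the real code uses gf_boolean_t from
 * libglusterfs. */
typedef enum { BOOL_FALSE = 0, BOOL_TRUE = 1 } bool_val_t;

/* Hypothetical stand-in for libglusterfs' gf_string2boolean():
 * returns 0 and fills *b for a recognized boolean string, -1 otherwise. */
static int
string_to_boolean (const char *str, bool_val_t *b)
{
        static const char *true_words[]  = { "enable", "on", "yes",
                                             "true", "1" };
        static const char *false_words[] = { "disable", "off", "no",
                                             "false", "0" };
        size_t i;

        if (!str || !b)
                return -1;
        for (i = 0; i < sizeof (true_words) / sizeof (true_words[0]); i++)
                if (strcasecmp (str, true_words[i]) == 0) {
                        *b = BOOL_TRUE;
                        return 0;
                }
        for (i = 0; i < sizeof (false_words) / sizeof (false_words[0]); i++)
                if (strcasecmp (str, false_words[i]) == 0) {
                        *b = BOOL_FALSE;
                        return 0;
                }
        return -1;
}

/* Sketch of the validator glusterd would call while staging
 * "volume set <vol> features.uss <value>": fail early on junk input
 * so the CLI reports the acceptable values instead of "success". */
static int
validate_uss (const char *key, const char *value)
{
        bool_val_t b;

        if (string_to_boolean (value, &b) != 0) {
                fprintf (stderr,
                         "%s is not a valid boolean value. %s expects a "
                         "boolean value (enable/disable).\n", value, key);
                return -1;
        }
        return 0;
}

int
main (void)
{
        /* Rejected: the commands from this bug report would now fail. */
        if (validate_uss ("features.uss", "sfdddsdds") != 0)
                printf ("volume set: failed\n");
        /* Accepted. */
        if (validate_uss ("features.uss", "enable") == 0)
                printf ("volume set: success\n");
        return 0;
}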

Comment 7 Niels de Vos 2015-05-14 17:26:01 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user


