Bug 1175755
| Summary: | SNAPSHOT[USS]: gluster volume set for uss does not check any boundaries | ||
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Vijaikumar Mallikarjuna <vmallika> |
| Component: | glusterd | Assignee: | bugs <bugs> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 3.6.1 | CC: | bugs, gluster-bugs, nsathyan, rabhat, rhs-bugs, smohan, ssamanta, vmallika, vshastry |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | USS | ||
| Fixed In Version: | glusterfs-3.6.2 | Doc Type: | Bug Fix |
| Doc Text: | Story Points: | --- | |
| Clone Of: | 1111554 | Environment: | |
| Last Closed: | 2015-02-11 09:11:23 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | 1111534, 1111554 | ||
| Bug Blocks: | 1163723 | ||
|
Comment 1
Anand Avati
2014-12-19 07:20:12 UTC
REVIEW: http://review.gluster.org/9304 (mgmt/glusterd: Validate the options of uss) posted (#2) for review on release-3.6 by Sachin Pandit (spandit)

REVIEW: http://review.gluster.org/9304 (mgmt/glusterd: Validate the options of uss) posted (#3) for review on release-3.6 by Sachin Pandit (spandit)

COMMIT: http://review.gluster.org/9304 committed in release-3.6 by Raghavendra Bhat (raghavendra)

```
commit 2acbc361698b4cd55211011b93a1b4bba9ff72f0
Author: vmallika <vmallika>
Date:   Fri Nov 14 12:06:39 2014 +0530

    mgmt/glusterd: Validate the options of uss

    Change-Id: Id13dc4cd3f5246446a9dfeabc9caa52f91477524
    BUG: 1175755
    Signed-off-by: Varun Shastry <vshastry>
    Signed-off-by: vmallika <vmallika>
    Reviewed-on: http://review.gluster.org/8133
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Krishnan Parthasarathi <kparthas>
    Signed-off-by: Sachin Pandit <spandit>
    Reviewed-on: http://review.gluster.org/9304
    Reviewed-by: Raghavendra Bhat <raghavendra>
```

Description of problem: `gluster volume set <vol-name> features.uss <value>` does not check that the supplied value is within the accepted set.

How reproducible: 100%

Steps to Reproduce:
1. gluster volume set <vol-name> features.uss <invalid-value>

Actual results: The command succeeds without reporting an error.

Expected results: When an invalid value is passed, the `gluster volume set` command should fail and list the acceptable values.
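The fix validates `features.uss` as a boolean option before accepting it. A minimal sketch of that kind of validation, written here in Python for illustration (glusterd implements this in C; the helper name and the exact set of accepted strings are assumptions based on common GlusterFS boolean spellings):

```python
# Hypothetical sketch of boolean option validation, modeled on the
# behavior the fix introduces for features.uss. The accepted-value
# set below is an assumption, not glusterd's authoritative list.
VALID_BOOLEANS = {"on", "off", "yes", "no", "true", "false",
                  "enable", "disable", "0", "1"}

def validate_uss(value: str) -> None:
    """Reject any value outside the accepted boolean set."""
    if value.strip().lower() not in VALID_BOOLEANS:
        # Mirrors the expected behavior: fail and list acceptable values.
        raise ValueError(
            f"{value!r} is not a valid boolean; acceptable values are: "
            + ", ".join(sorted(VALID_BOOLEANS)))

# validate_uss("enable")     -> accepted (returns None)
# validate_uss("sfdddsdds")  -> raises ValueError
```

With such a check in place, an invalid value like `sfdddsdds` or `-1` would make `volume set` fail instead of silently storing garbage in the volume options.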
Additional info:

```
[root@snapshot09 ~]# gluster volume info

Volume Name: newvol
Type: Distributed-Replicate
Volume ID: cadc8635-d42b-4715-8447-d4fed9537f6a
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: hostname1:/brick2/newvol
Brick2: hostname2:/brick2/newvol
Brick3: hostname3:/brick2/newvol
Brick4: hostname4:/brick2/newvol
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
features.uss: enable
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 723f20ad-99a2-4d85-8942-7cec20944676
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: hostname1:/brick1/testvol
Brick2: hostname2:/brick1/testvol
Brick3: hostname3:/brick1/testvol
Brick4: hostname4:/brick1/testvol
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
features.uss: enable
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

[root@snapshot09 ~]# gluster volume set features.uss sfdddsdds
Usage: volume set <VOLNAME> <KEY> <VALUE>

[root@snapshot09 ~]# gluster volume set testvol features.uss sfdddsdds
volume set: success

[root@snapshot09 ~]# gluster volume info
(output for newvol unchanged; testvol now shows)
features.uss: sfdddsdds

[root@snapshot09 ~]# gluster volume set testvol features.uss -1
volume set: success

[root@snapshot09 ~]# gluster volume info
(output for newvol unchanged; testvol now shows)
features.uss: -1

[root@snapshot09 ~]# gluster volume set testvol features.uss @@@@@@@@@@@@@@@#$#$ns
volume set: success

[root@snapshot09 ~]# gluster volume info
(output for newvol unchanged; testvol now shows)
features.uss: @@@@@@@@@@@@@@@#0

[root@snapshot09 ~]# gluster volume set testvol features.uss @@@@@@@@@@@@@@@#$#$ns
volume set: success
[root@snapshot09 ~]# echo $?
0
```

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.6.2, please reopen this bug report.

glusterfs-3.6.2 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should already be, or will soon become, available. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

The fix for this bug is likely to be included in all future GlusterFS releases, i.e. releases > 3.6.2.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/5978
[2] http://news.gmane.org/gmane.comp.file-systems.gluster.user
[3] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137