Bug 1225738 - [Quota] gluster v quota <volname> limit-usage should not accept any zero values
Summary: [Quota] gluster v quota <volname> limit-usage should not accept any zero values
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: quota
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
: ---
Assignee: Manikandan
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1223636 1241376
TreeView+ depends on / blocked
 
Reported: 2015-05-28 07:12 UTC by Anil Shah
Modified: 2016-09-20 04:28 UTC (History)
6 users (show)

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1241376 (view as bug list)
Environment:
Last Closed: 2015-07-30 09:29:36 UTC
Embargoed:
mselvaga: needinfo-



Description Anil Shah 2015-05-28 07:12:04 UTC
Description of problem:

When you set limit-usage on the root or a sub-directory, it should not accept zero values.

Version-Release number of selected component (if applicable):

[root@darkknight ~]# rpm -qa | grep glusterfs
glusterfs-libs-3.7.0-2.el6rhs.x86_64
glusterfs-cli-3.7.0-2.el6rhs.x86_64
glusterfs-client-xlators-3.7.0-2.el6rhs.x86_64
glusterfs-geo-replication-3.7.0-2.el6rhs.x86_64
glusterfs-fuse-3.7.0-2.el6rhs.x86_64
glusterfs-api-3.7.0-2.el6rhs.x86_64
glusterfs-3.7.0-2.el6rhs.x86_64
glusterfs-server-3.7.0-2.el6rhs.x86_64


How reproducible:

100%

Steps to Reproduce:
[root@darkknight ~]# gluster v quota vol0 limit-usage / 0
volume quota : success
[root@darkknight ~]# gluster v quota vol0 list
                  Path                   Hard-limit Soft-limit   Used  Available  Soft-limit exceeded? Hard-limit exceeded?
---------------------------------------------------------------------------------------------------------------------------
/                                         0Bytes       80%      12.0KB  0Bytes             Yes                  Yes

[root@darkknight ~]# gluster v quota vol0 limit-usage /test  0MB
volume quota : success
[root@darkknight ~]# gluster v quota vol0 list
                  Path                   Hard-limit Soft-limit   Used  Available  Soft-limit exceeded? Hard-limit exceeded?
---------------------------------------------------------------------------------------------------------------------------
/                                         0Bytes       80%      12.0KB  0Bytes             Yes                  Yes
/test                                     0Bytes       80%      0Bytes  0Bytes             Yes                  Yes


Actual results:

The quota limit-usage command accepts zero as a valid value.

Expected results:

The quota limit-usage command should not accept zero values.
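A pre-flight check of the kind the reporter expects could be sketched as a small shell wrapper (a hypothetical helper, not part of the gluster CLI; unit handling is simplified to the KB/MB/GB/TB suffixes shown in the transcript above):

```shell
# Hypothetical validation before calling `gluster v quota <vol> limit-usage`:
# reject any limit whose numeric part is zero (0, 0MB, 0GB, ...).
valid_quota_limit() {
    local limit="$1"
    # strip an optional unit suffix (KB/MB/GB/TB, case-insensitive)
    local num="${limit%[KkMmGgTt][Bb]}"
    # must be a plain non-negative number
    case "$num" in
        ''|*[!0-9.]*) return 1 ;;
    esac
    # treat 0, 0.0, 00 etc. as zero and reject them
    awk -v n="$num" 'BEGIN { exit (n + 0 > 0 ? 0 : 1) }'
}
```

With a check like this, `valid_quota_limit 0MB` fails while `valid_quota_limit 10GB` succeeds, so the wrapper could refuse to run the gluster command for zero limits.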


Additional info:

[root@darkknight ~]# gluster  v info vol0
 
Volume Name: vol0
Type: Distributed-Replicate
Volume ID: 4c1a4242-08e0-4cf3-830b-91dd5f78e4b8
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.33.214:/rhs/brick1/b1
Brick2: 10.70.33.219:/rhs/brick1/b2
Brick3: 10.70.33.225:/rhs/brick1/b3
Brick4: 10.70.44.13:/rhs/brick1/b4
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
features.uss: enable
performance.readdir-ahead: on

Comment 2 Manikandan 2015-07-09 04:52:35 UTC
(In reply to Anil Shah from comment #0)

Hi,
In cases where the user just wants to create a volume and restrict creating files on it (for example, a test volume), they can set the quota limit to 0.
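The use case can be illustrated with a toy model of the hard-limit check (an illustration of the semantics only, not Gluster's actual enforcement code):

```shell
# Toy model of a hard-limit check: a write of `size` bytes is allowed only
# if used + size stays within the hard limit. With a limit of 0, every
# non-empty write is denied, while the volume itself still exists.
quota_allows_write() {
    local used="$1" size="$2" hard_limit="$3"
    [ $(( used + size )) -le "$hard_limit" ]
}
```

Under this model a limit of 0 denies every non-empty write, which matches the "restrict creating files" use case described above.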

Comment 3 Vijaikumar Mallikarjuna 2015-07-30 09:29:36 UTC
As per comment #2, closing this bug as 'Not a Bug'.

