Bug 1001573 - [quota] Enabling quota using 'gluster volume set <vol-name> quota on' doesn't start quotad
Summary: [quota] Enabling quota using 'gluster volume set <vol-name> quota on' doesn't start quotad
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Amar Tumballi
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-08-27 10:24 UTC by SATHEESARAN
Modified: 2013-12-19 00:09 UTC (History)
5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-09-02 06:08:09 UTC
Embargoed:


Attachments

Description SATHEESARAN 2013-08-27 10:24:15 UTC
Description of problem:
=======================
Enabling quota using 'gluster volume set <vol-name> quota on' does not
start quotad.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.4.0.20rhsquota5-1.el6rhs.x86_64

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Create a 2x2 distributed-replicate volume
(i.e.) gluster volume create <vol-name> replica 2 <brick1> .. <brick4>

2. Start the volume
(i.e.) gluster volume start <vol-name>

3. Enable quota using the 'volume set' command
(i.e.) gluster volume set <vol-name> quota on
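
To check whether quotad came up after step 3 (a sketch; the console log
below uses 'ps aux | grep quotad', 'pgrep -f' is an equivalent shorthand):

# gluster volume set <vol-name> quota on
volume set: success
# pgrep -f 'gluster/quotad'
(no output here means quotad was not started)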

Actual results:
===============
quotad was not running.

Expected results:
=================
quotad should start running.

Additional info:
================
I used 'gluster volume quota <vol-name> enable' to enable quota on a volume.
That worked fine and started 'quotad'. I expect the same behaviour when I
enable quota through 'gluster volume set'.

In that case, stopping and starting the volume does bring 'quotad' up.
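
In short (condensed from the console log below; angle brackets stand for
the actual volume name):

# gluster volume quota <vol-name> enable      <-- quotad starts immediately
# gluster volume quota <vol-name> disable     <-- quotad goes away
# gluster volume set <vol-name> quota on      <-- quotad does NOT start
# gluster volume stop <vol-name>
# gluster volume start <vol-name>             <-- quotad now comes up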

Console logs
============
[Tue Aug 27 10:04:11 UTC 2013 root.37.174:~ ] # gluster volume create distrep replica 2 10.70.37.174:/rhs/brick1/dir1 10.70.37.185:/rhs/brick1/dir1 10.70.37.174:/rhs/brick2/dir2 10.70.37.185:/rhs/brick2/dir2
volume create: distrep: success: please start the volume to access data

[Tue Aug 27 10:05:32 UTC 2013 root.37.174:~ ] # gluster volume info
 
Volume Name: distrep
Type: Distributed-Replicate
Volume ID: 60a190f9-1b87-41f6-a8ed-3f05bf242151
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.37.174:/rhs/brick1/dir1
Brick2: 10.70.37.185:/rhs/brick1/dir1
Brick3: 10.70.37.174:/rhs/brick2/dir2
Brick4: 10.70.37.185:/rhs/brick2/dir2
 
[Tue Aug 27 10:05:54 UTC 2013 root.37.174:~ ] # gluster volume start distrep
volume start: distrep: success

[Tue Aug 27 10:06:13 UTC 2013 root.37.174:~ ] # gluster v status
Status of volume: distrep
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.174:/rhs/brick1/dir1                     49155   Y       17970
Brick 10.70.37.185:/rhs/brick1/dir1                     49155   Y       17717
Brick 10.70.37.174:/rhs/brick2/dir2                     49156   Y       17981
Brick 10.70.37.185:/rhs/brick2/dir2                     49156   Y       17728
NFS Server on localhost                                 2049    Y       17993
Self-heal Daemon on localhost                           N/A     Y       17999
NFS Server on 10.70.37.118                              2049    Y       17344
Self-heal Daemon on 10.70.37.118                        N/A     Y       17351
NFS Server on 10.70.37.95                               2049    Y       17292
Self-heal Daemon on 10.70.37.95                         N/A     Y       17299
NFS Server on 10.70.37.185                              2049    Y       17740
Self-heal Daemon on 10.70.37.185                        N/A     Y       17746
 
There are no active volume tasks

[Tue Aug 27 10:06:16 UTC 2013 root.37.174:~ ] # ps aux | grep quotad
root     18049  0.0  0.0 103244   812 pts/1    S+   15:36   0:00 grep quotad

[Tue Aug 27 10:06:23 UTC 2013 root.37.174:~ ] # gluster volume quota distrep enable
volume quota : success

[Tue Aug 27 10:06:42 UTC 2013 root.37.174:~ ] # ps aux | grep quotad
root     18091  1.0  1.5 205448 31660 ?        Ssl  15:36   0:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p /var/lib/glusterd/quotad/run/quotad.pid -l /var/log/glusterfs/quotad.log -S /var/run/7e63030677df5afe2fa7a9f790189502.socket --xlator-option *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off --xlator-option *replicate*.entry-self-heal=off
root     18103  0.0  0.0 103244   812 pts/1    S+   15:36   0:00 grep quotad

[Tue Aug 27 10:06:46 UTC 2013 root.37.174:~ ] # gluster volume quota distrep disable
Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
volume quota : success

[Tue Aug 27 10:07:05 UTC 2013 root.37.174:~ ] # ps aux | grep quotad
root     18119  0.0  0.0 103244   812 pts/1    S+   15:37   0:00 grep quotad

[Tue Aug 27 10:07:12 UTC 2013 root.37.174:~ ] # gluster volume info distrep
 
Volume Name: distrep
Type: Distributed-Replicate
Volume ID: 60a190f9-1b87-41f6-a8ed-3f05bf242151
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.37.174:/rhs/brick1/dir1
Brick2: 10.70.37.185:/rhs/brick1/dir1
Brick3: 10.70.37.174:/rhs/brick2/dir2
Brick4: 10.70.37.185:/rhs/brick2/dir2
Options Reconfigured:
features.quota: off

[Tue Aug 27 10:07:35 UTC 2013 root.37.174:~ ] # gluster volume set distrep quota on
volume set: success

[Tue Aug 27 10:07:59 UTC 2013 root.37.174:~ ] # ps aux | grep quotad
root     18166  0.0  0.0 103244   804 pts/1    R+   15:38   0:00 grep quotad

[Tue Aug 27 10:08:38 UTC 2013 root.37.174:~ ] # gluster volume stop distrep
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: distrep: success

[Tue Aug 27 10:08:57 UTC 2013 root.37.174:~ ] # gluster volume start distrep
volume start: distrep: success

[Tue Aug 27 10:09:06 UTC 2013 root.37.174:~ ] # ps aux | grep quotad
root     18242  0.5  1.4 205448 30348 ?        Ssl  15:39   0:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p /var/lib/glusterd/quotad/run/quotad.pid -l /var/log/glusterfs/quotad.log -S /var/run/7e63030677df5afe2fa7a9f790189502.socket --xlator-option *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off --xlator-option *replicate*.entry-self-heal=off
root     18272  0.0  0.0 103244   808 pts/1    R+   15:39   0:00 grep quotad

[Tue Aug 27 10:09:13 UTC 2013 root.37.174:~ ] # gluster volume set distrep quota off
volume set: success

[Tue Aug 27 10:09:57 UTC 2013 root.37.174:~ ] # ps aux | grep quotad
root     18242  0.0  1.4 205448 30356 ?        Ssl  15:39   0:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p /var/lib/glusterd/quotad/run/quotad.pid -l /var/log/glusterfs/quotad.log -S /var/run/7e63030677df5afe2fa7a9f790189502.socket --xlator-option *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off --xlator-option *replicate*.entry-self-heal=off
root     18311  0.0  0.0 103244   808 pts/1    S+   15:40   0:00 grep quotad

Comment 1 krishnan parthasarathi 2013-09-02 06:08:09 UTC
In the current design, the volume-set interface is *not* the way to enable quota.
# gluster volume quota VOLNAME enable
is the way to enable quota on VOLNAME.

Quotad runs if and only if there exists at least one started volume that has quota enabled.
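
For example (a sketch; the 'pgrep' check is shorthand, its pattern matches
the '--volfile-id gluster/quotad' argument visible in the logs above):

# gluster volume quota VOLNAME enable
# pgrep -f 'gluster/quotad'    <-- a PID: one started volume has quota enabled
# gluster volume quota VOLNAME disable
# pgrep -f 'gluster/quotad'    <-- nothing: no started volume has quota enabled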

Closing this as not a bug.

