Bug 1238071 - Quota: Quota Daemon doesn't start after node reboot
Summary: Quota: Quota Daemon doesn't start after node reboot
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: quota
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: low
Target Milestone: ---
Target Release: RHGS 3.1.1
Assignee: Gaurav Kumar Garg
QA Contact: Byreddy
URL:
Whiteboard:
Depends On:
Blocks: 1216951 1223636 1242875 1242882 1251815
 
Reported: 2015-07-01 06:59 UTC by Anil Shah
Modified: 2016-09-17 12:42 UTC
CC List: 13 users

Fixed In Version: glusterfs-3.7.1-12
Doc Type: Bug Fix
Doc Text:
Previously, upon restarting glusterd, the quota daemon was not started when more than one volume was configured and quota was enabled only on the second volume. With this fix, the quota daemon starts correctly after glusterd restarts or the node reboots.
Clone Of:
Clones: 1242875
Environment:
Last Closed: 2015-10-05 07:16:52 UTC
Embargoed:


Attachments: none


Links:
System: Red Hat Product Errata RHSA-2015:1845
Priority: normal
Status: SHIPPED_LIVE
Summary: Moderate: Red Hat Gluster Storage 3.1 update
Last Updated: 2015-10-05 11:06:22 UTC

Description Anil Shah 2015-07-01 06:59:35 UTC
Description of problem:

After a node reboot, the quota daemon does not start

Version-Release number of selected component (if applicable):


glusterfs-api-3.7.1-6.el6rhs

How reproducible:

100%

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume
2. Mount the volume as a FUSE mount
3. Enable quota and set limit-usage
4. Reboot the storage node (a command sketch follows)
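
For reference, a rough command-line sketch of these steps, reusing the vol0 bricks listed under Additional info below; the mount point and the 10GB limit-usage value are illustrative assumptions, not values taken from this report:

# Create and start the 2x2 distributed-replicate volume (bricks as in "Additional info").
gluster volume create vol0 replica 2 \
    10.70.33.214:/rhs/brick1/b001 10.70.33.219:/rhs/brick1/b002 \
    10.70.33.225:/rhs/brick1/b003 10.70.44.13:/rhs/brick1/b004
gluster volume start vol0
# FUSE-mount the volume (mount point assumed).
mount -t glusterfs 10.70.33.214:/vol0 /mnt/vol0
# Enable quota and set a usage limit on the volume root (10GB is an assumed value).
gluster volume quota vol0 enable
gluster volume quota vol0 limit-usage / 10GB
# Reboot one storage node and check the quota daemon afterwards.
reboot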

Actual results:

After the node reboot, the quota daemon did not start

Expected results:

The quota daemon should start after a node reboot

Additional info:

Volume Name: vol0
Type: Distributed-Replicate
Volume ID: f617bb10-06dc-40ac-a0e6-775e3a619184
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.33.214:/rhs/brick1/b001
Brick2: 10.70.33.219:/rhs/brick1/b002
Brick3: 10.70.33.225:/rhs/brick1/b003
Brick4: 10.70.44.13:/rhs/brick1/b004
Options Reconfigured:
features.barrier: disable
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
features.show-snapshot-directory: enable
features.uss: enable
performance.readdir-ahead: on
server.allow-insecure: on
cluster.enable-shared-storage: enable

=============================================

Status of volume: vol0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.33.214:/rhs/brick1/b001         49152     0          Y       4644 
Brick 10.70.33.219:/rhs/brick1/b002         49152     0          Y       4838 
Brick 10.70.33.225:/rhs/brick1/b003         49152     0          Y       3566 
Brick 10.70.44.13:/rhs/brick1/b004          49152     0          Y       3293 
Snapshot Daemon on localhost                49156     0          Y       15584
NFS Server on localhost                     2049      0          Y       15692
Self-heal Daemon on localhost               N/A       N/A        Y       15700
Quota Daemon on localhost                   N/A       N/A        Y       15710
Snapshot Daemon on 10.70.44.13              49156     0          Y       3963 
NFS Server on 10.70.44.13                   2049      0          Y       4052 
Self-heal Daemon on 10.70.44.13             N/A       N/A        Y       4064 
Quota Daemon on 10.70.44.13                 N/A       N/A        Y       4069 
Snapshot Daemon on 10.70.33.225             49155     0          Y       3582 
NFS Server on 10.70.33.225                  2049      0          Y       3572 
Self-heal Daemon on 10.70.33.225            N/A       N/A        Y       3571 
Quota Daemon on 10.70.33.225                N/A       N/A        N       N/A  
Snapshot Daemon on 10.70.33.219             49156     0          Y       5970 
NFS Server on 10.70.33.219                  2049      0          Y       6069 
Self-heal Daemon on 10.70.33.219            N/A       N/A        Y       6077 
Quota Daemon on 10.70.33.219                N/A       N/A        Y       6087
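
The status output above shows the Quota Daemon offline (N) on 10.70.33.225 while it is online on the other nodes. A quick way to cross-check on the affected node; the process pattern and log path are the usual glusterfs defaults, assumed here rather than taken from this report:

# quotad normally runs as a glusterfs process with volfile-id gluster/quotad.
ps -ef | grep '[q]uotad'
# Its log (default location) should show why it did not start.
less /var/log/glusterfs/quotad.log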

Comment 4 Anil Shah 2015-07-03 11:21:44 UTC
Editing the "How reproducible" field:

1/1 occurrence

Comment 9 Atin Mukherjee 2015-07-07 08:48:24 UTC
Can we close this bug and re-open if it happens again?

Comment 10 Anil Shah 2015-07-07 09:01:41 UTC
Sure, works for me.

Comment 11 Anil Shah 2015-07-14 09:32:48 UTC
Hit the issue again.

Steps to reproduce this issue (a concrete command sequence follows the list):

volume-create Vol0
volume-start Vol0
volume-create Vol1
volume-start Vol1
Enable quota on Vol1
pkill gluster
service glusterd start
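
The shorthand above corresponds roughly to the following gluster CLI sequence; the single-brick layouts and brick paths are illustrative assumptions, only the "two volumes, quota enabled on the second volume only" pattern comes from the comment:

# Two volumes on one node; quota enabled only on the second volume.
gluster volume create Vol0 10.70.33.214:/rhs/brick1/v0b1
gluster volume start Vol0
gluster volume create Vol1 10.70.33.214:/rhs/brick1/v1b1
gluster volume start Vol1
gluster volume quota Vol1 enable
# Kill all gluster processes, then restart glusterd.
pkill gluster
service glusterd start
# Expected: the Quota Daemon entry for Vol1 comes back Online (Y).
gluster volume status Vol1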

Comment 14 Anil Shah 2015-07-14 09:36:45 UTC
Automated quota tests are failing on build glusterfs-3.7.1-9.el6rhs.x86_64

https://bugzilla.redhat.com/show_bug.cgi?id=1238071

Comment 15 Atin Mukherjee 2015-07-14 10:25:54 UTC
Upstream patch: http://review.gluster.org/#/c/11658/

Comment 19 Atin Mukherjee 2015-08-08 13:32:05 UTC
Upstream patch is already merged.

Comment 21 Gaurav Kumar Garg 2015-08-12 09:07:55 UTC
Downstream patch URL: https://code.engineering.redhat.com/gerrit/#/c/54970/

Comment 23 Byreddy 2015-08-25 08:23:56 UTC
Verified this bug using version glusterfs-3.7.1-12.

Steps used to verify:
~~~~~~~~~~~~~~~~~~~~~
1. Created a 2x2 volume on a cluster of two nodes.
2. Mounted the volume as a FUSE mount.
3. Enabled quota and set the quota limit-usage.
4. Rebooted a cluster node.
5. Verified whether the quota daemon was running after the reboot; it was running.

Extra:
~~~~~~
Also verified with multiple volumes in a single cluster: enabled quota on only one of the volumes, killed the gluster processes (pkill gluster), and restarted glusterd. The quota daemon came back as expected (a rough sketch of the check follows).
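
A minimal sketch of that extra check; the grep pattern is illustrative, and gluster volume status with no volume name lists all volumes:

pkill gluster
service glusterd start
# After glusterd is back up, the quota daemon for the quota-enabled volume
# should be listed as Online (Y).
gluster volume status | grep -i 'Quota Daemon'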

Moving this bug to the Verified state.

Comment 25 errata-xmlrpc 2015-10-05 07:16:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html

