Bug 1238071 - Quota: Quota Daemon doesn't start after node reboot
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: quota
Version: 3.1
Hardware: x86_64 Linux
Priority: medium    Severity: low
Target Milestone: ---
Target Release: RHGS 3.1.1
Assigned To: Gaurav Kumar Garg
QA Contact: Byreddy
Keywords: Regression, ZStream
Depends On:
Blocks: 1216951 1223636 1242875 1242882 1251815
Reported: 2015-07-01 02:59 EDT by Anil Shah
Modified: 2016-09-17 08:42 EDT
CC List: 13 users

See Also:
Fixed In Version: glusterfs-3.7.1-12
Doc Type: Bug Fix
Doc Text:
Previously, when glusterd was restarted, the quota daemon was not started if more than one volume was configured and quota was enabled only on the second volume. With this fix, the quota daemon starts on node reboot.
Story Points: ---
Clone Of:
Clones: 1242875
Environment:
Last Closed: 2015-10-05 03:16:52 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1845 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.1 update 2015-10-05 07:06:22 EDT

Description Anil Shah 2015-07-01 02:59:35 EDT
Description of problem:

After node reboot, Quota Daemon doesn't start

Version-Release number of selected component (if applicable):


glusterfs-api-3.7.1-6.el6rhs

How reproducible:

100%

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume
2. Mount the volume as a FUSE mount
3. Enable quota and set limit-usage
4. Reboot the storage node (a command-level sketch of these steps follows)
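
A rough command-level sketch of the steps above, assuming placeholder host names, brick paths, and a quota limit that are not taken from this report:

# Create and start a 2x2 distributed-replicate volume
gluster volume create vol0 replica 2 \
    node1:/rhs/brick1/b001 node2:/rhs/brick1/b002 \
    node3:/rhs/brick1/b003 node4:/rhs/brick1/b004
gluster volume start vol0
# FUSE-mount the volume on a client
mount -t glusterfs node1:/vol0 /mnt/vol0
# Enable quota and set a usage limit on the volume root
gluster volume quota vol0 enable
gluster volume quota vol0 limit-usage / 10GB
# Reboot one storage node, then check whether quotad comes back
reboot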

Actual results:

After node reboot, quota daemon didn't start

Expected results:

Quota daemon should start after node reboot

Additional info:

Volume Name: vol0
Type: Distributed-Replicate
Volume ID: f617bb10-06dc-40ac-a0e6-775e3a619184
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.33.214:/rhs/brick1/b001
Brick2: 10.70.33.219:/rhs/brick1/b002
Brick3: 10.70.33.225:/rhs/brick1/b003
Brick4: 10.70.44.13:/rhs/brick1/b004
Options Reconfigured:
features.barrier: disable
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
features.show-snapshot-directory: enable
features.uss: enable
performance.readdir-ahead: on
server.allow-insecure: on
cluster.enable-shared-storage: enable

=============================================

Status of volume: vol0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.33.214:/rhs/brick1/b001         49152     0          Y       4644 
Brick 10.70.33.219:/rhs/brick1/b002         49152     0          Y       4838 
Brick 10.70.33.225:/rhs/brick1/b003         49152     0          Y       3566 
Brick 10.70.44.13:/rhs/brick1/b004          49152     0          Y       3293 
Snapshot Daemon on localhost                49156     0          Y       15584
NFS Server on localhost                     2049      0          Y       15692
Self-heal Daemon on localhost               N/A       N/A        Y       15700
Quota Daemon on localhost                   N/A       N/A        Y       15710
Snapshot Daemon on 10.70.44.13              49156     0          Y       3963 
NFS Server on 10.70.44.13                   2049      0          Y       4052 
Self-heal Daemon on 10.70.44.13             N/A       N/A        Y       4064 
Quota Daemon on 10.70.44.13                 N/A       N/A        Y       4069 
Snapshot Daemon on 10.70.33.225             49155     0          Y       3582 
NFS Server on 10.70.33.225                  2049      0          Y       3572 
Self-heal Daemon on 10.70.33.225            N/A       N/A        Y       3571 
Quota Daemon on 10.70.33.225                N/A       N/A        N       N/A  
Snapshot Daemon on 10.70.33.219             49156     0          Y       5970 
NFS Server on 10.70.33.219                  2049      0          Y       6069 
Self-heal Daemon on 10.70.33.219            N/A       N/A        Y       6077 
Quota Daemon on 10.70.33.219                N/A       N/A        Y       6087
Comment 4 Anil Shah 2015-07-03 07:21:44 EDT
Editing the "How reproducible" field:

1/1 occurrence
Comment 9 Atin Mukherjee 2015-07-07 04:48:24 EDT
Can we close this bug and re-open if it happens again?
Comment 10 Anil Shah 2015-07-07 05:01:41 EDT
Sure, works for me.
Comment 11 Anil Shah 2015-07-14 05:32:48 EDT
Hit the issue again.

Steps to reproduce this issue (a shell sketch follows the steps):

volume-create Vol0
volume-start Vol0
volume-create Vol1
volume-start Vol1
Enable quota on vol1 
pkill gluster
service glusterd start
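
A plain-shell reading of those steps, assuming a two-node layout with placeholder host names and brick paths (quota is enabled only on the second volume):

gluster volume create Vol0 replica 2 nodeA:/bricks/v0a nodeB:/bricks/v0b
gluster volume start Vol0
gluster volume create Vol1 replica 2 nodeA:/bricks/v1a nodeB:/bricks/v1b
gluster volume start Vol1
# Enable quota only on the second volume
gluster volume quota Vol1 enable
# Kill all gluster processes, then restart only glusterd
pkill gluster
service glusterd start
# Before the fix, quotad was not restarted at this point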
Comment 14 Anil Shah 2015-07-14 05:36:45 EDT
Automated quota tests are failing on build glusterfs-3.7.1-9.el6rhs.x86_64

https://bugzilla.redhat.com/show_bug.cgi?id=1238071
Comment 15 Atin Mukherjee 2015-07-14 06:25:54 EDT
upstream patch : http://review.gluster.org/#/c/11658/
Comment 19 Atin Mukherjee 2015-08-08 09:32:05 EDT
Upstream patch is already merged.
Comment 21 Gaurav Kumar Garg 2015-08-12 05:07:55 EDT
downstream patch url: https://code.engineering.redhat.com/gerrit/#/c/54970/
Comment 23 Byreddy 2015-08-25 04:23:56 EDT
Verified this bug using version glusterfs-3.7.1-12.

Steps used to verify:
~~~~~~~~~~~~~~~~~~~~~
1. Created a 2x2 volume using a cluster of two nodes.
2. Mounted the volume as a FUSE mount.
3. Enabled quota and set the quota limit-usage.
4. Rebooted the cluster node.
5. Verified whether the quota daemon was running after the reboot; it was running (an example check follows this list).
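
The check in step 5 can be done, for example, as follows (the volume name is a placeholder):

# Per-node view of the volume's daemons, including the Quota Daemon lines
gluster volume status vol0
# Or look for the quotad process directly on the rebooted node
ps -ef | grep '[q]uotad'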

Extra:
~~~~~~
Also verified this bug with multiple volumes in a single cluster: enabled quota on only one of the volumes, killed the gluster processes (pkill gluster), and finally restarted glusterd. It worked as expected.

Moving this bug to the VERIFIED state.
Comment 25 errata-xmlrpc 2015-10-05 03:16:52 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html
