Description of problem:
After node reboot, the Quota Daemon doesn't start.

Version-Release number of selected component (if applicable):
glusterfs-api-3.7.1-6.el6rhs

How reproducible:
100%

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume
2. Mount the volume as a FUSE mount
3. Enable quota and set limit-usage
4. Reboot the storage node

Actual results:
After node reboot, the quota daemon didn't start.

Expected results:
The quota daemon should start after node reboot.

Additional info:

Volume Name: vol0
Type: Distributed-Replicate
Volume ID: f617bb10-06dc-40ac-a0e6-775e3a619184
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.33.214:/rhs/brick1/b001
Brick2: 10.70.33.219:/rhs/brick1/b002
Brick3: 10.70.33.225:/rhs/brick1/b003
Brick4: 10.70.44.13:/rhs/brick1/b004
Options Reconfigured:
features.barrier: disable
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
features.show-snapshot-directory: enable
features.uss: enable
performance.readdir-ahead: on
server.allow-insecure: on
cluster.enable-shared-storage: enable

=============================================

Status of volume: vol0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.33.214:/rhs/brick1/b001         49152     0          Y       4644
Brick 10.70.33.219:/rhs/brick1/b002         49152     0          Y       4838
Brick 10.70.33.225:/rhs/brick1/b003         49152     0          Y       3566
Brick 10.70.44.13:/rhs/brick1/b004          49152     0          Y       3293
Snapshot Daemon on localhost                49156     0          Y       15584
NFS Server on localhost                     2049      0          Y       15692
Self-heal Daemon on localhost               N/A       N/A        Y       15700
Quota Daemon on localhost                   N/A       N/A        Y       15710
Snapshot Daemon on 10.70.44.13              49156     0          Y       3963
NFS Server on 10.70.44.13                   2049      0          Y       4052
Self-heal Daemon on 10.70.44.13             N/A       N/A        Y       4064
Quota Daemon on 10.70.44.13                 N/A       N/A        Y       4069
Snapshot Daemon on 10.70.33.225             49155     0          Y       3582
NFS Server on 10.70.33.225                  2049      0          Y       3572
Self-heal Daemon on 10.70.33.225            N/A       N/A        Y       3571
Quota Daemon on 10.70.33.225                N/A       N/A        N       N/A
Snapshot Daemon on 10.70.33.219             49156     0          Y       5970
NFS Server on 10.70.33.219                  2049      0          Y       6069
Self-heal Daemon on 10.70.33.219            N/A       N/A        Y       6077
Quota Daemon on 10.70.33.219                N/A       N/A        Y       6087
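For reference, the reproduce steps above correspond roughly to the commands below. The brick hosts and paths are taken from the volume info; the mount point and the 10GB limit value are illustrative assumptions, not taken from the report:

    # 1. Create a 2x2 distributed-replicate volume (bricks as listed above)
    gluster volume create vol0 replica 2 \
        10.70.33.214:/rhs/brick1/b001 10.70.33.219:/rhs/brick1/b002 \
        10.70.33.225:/rhs/brick1/b003 10.70.44.13:/rhs/brick1/b004
    gluster volume start vol0

    # 2. FUSE-mount the volume on a client (mount point is an assumption)
    mount -t glusterfs 10.70.33.214:/vol0 /mnt/vol0

    # 3. Enable quota and set limit-usage (limit value is an assumption)
    gluster volume quota vol0 enable
    gluster volume quota vol0 limit-usage / 10GB

    # 4. Reboot one storage node, then check "gluster volume status vol0"
    reboot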
Editing "How reproducible" to: 1/1 occurrence.
Can we close this bug and re-open if it happens again?
Sure, works for me.
Hit this issue again. Steps to reproduce:
1. volume-create Vol0
2. volume-start Vol0
3. volume-create Vol1
4. volume-start Vol1
5. Enable quota on Vol1
6. pkill gluster
7. service glusterd start
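In shell form, that sequence is roughly the following; the comment above omits the brick arguments, so the host/path shown here is an assumption:

    # Rough shell equivalent of the reproducer above.
    # Brick host/path arguments are assumptions; the original comment omits them.
    gluster volume create Vol0 10.70.33.214:/rhs/brick1/v0
    gluster volume start Vol0
    gluster volume create Vol1 10.70.33.214:/rhs/brick1/v1
    gluster volume start Vol1
    gluster volume quota Vol1 enable
    # Kill all gluster processes, then restart only glusterd
    pkill gluster
    service glusterd start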
Automated quota tests are failing on build glusterfs-3.7.1-9.el6rhs.x86_64 https://bugzilla.redhat.com/show_bug.cgi?id=1238071
Upstream patch: http://review.gluster.org/#/c/11658/
Upstream patch is already merged.
Downstream patch URL: https://code.engineering.redhat.com/gerrit/#/c/54970/
Verified this bug using version glusterfs-3.7.1-12.

Steps used to verify:
~~~~~~~~~~~~~~~~~~~~~
1. Created a 2x2 volume using a cluster of two nodes.
2. Mounted the volume as a FUSE mount.
3. Enabled quota and set the quota limit-usage.
4. Rebooted a cluster node.
5. Verified whether the quota daemon was running after the reboot // it was running.

Extra:
~~~~~~
Also verified this bug with multiple volumes in a single cluster: enabled quota for one of the volumes, killed gluster (pkill gluster), and finally restarted glusterd. It worked as expected.

Moving this bug to VERIFIED state.
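For step 5, a quick way to check whether the quota daemon came back after the reboot is, for example ("vol0" stands in for the volume under test):

    # Check quotad via volume status
    gluster volume status vol0 | grep -i "Quota Daemon"
    # Or look for the quotad process directly
    ps aux | grep '[q]uotad'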
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1845.html