Description of problem:
When the quota limit is reached on a sub-directory and new bricks are then added to the volume, quota does not allow the directory tree to be created (healed) on the newly added bricks.

How reproducible:
100%

Steps to Reproduce (a command sketch is given below):
1. Create a 2x2 distributed-replicate volume
2. Do a fuse mount
3. Enable quota and set a limit on a sub-directory
4. Create data in the sub-directory until the quota limit is hit
5. Add bricks to the volume (add-brick)
6. Do a lookup on the client mount

Actual results:
[root@dhcp46-239 fuse]# ll
ls: cannot access newnfs: No such file or directory
total 4
drwxr-xr-x. 2 root root 4096 Jul 24 16:21 newfuse
d?????????? ? ? ? ? ? newnfs

Expected results:
The lookup on the client mount should succeed and list the directories correctly.
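Roughly, the steps above translate to the following CLI sequence; the volume name (testvol), hostnames (server1-4), brick paths, mount point, directory name and 100MB limit are all illustrative, not taken from the original report:

    # 1. create a 2x2 distributed-replicate volume and start it
    gluster volume create testvol replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 server4:/bricks/b1
    gluster volume start testvol

    # 2. fuse mount on the client
    mount -t glusterfs server1:/testvol /mnt/fuse

    # 3. enable quota and set a limit on a sub-directory
    gluster volume quota testvol enable
    mkdir /mnt/fuse/dir
    gluster volume quota testvol limit-usage /dir 100MB

    # 4. fill the sub-directory until the limit is hit
    dd if=/dev/zero of=/mnt/fuse/dir/file bs=1M count=110

    # 5. expand the volume with a new replica pair
    gluster volume add-brick testvol replica 2 server1:/bricks/b2 server2:/bricks/b2

    # 6. trigger a lookup from the client
    ls -l /mnt/fuse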
REVIEW: https://review.gluster.org/18554 (Quota: heal directory on newly added bricks when quota limit is reached) posted (#2) for review on master by sanoj-unnikrishnan (sunnikri)
REVIEW: https://review.gluster.org/18554 (Quota: heal directory on newly added bricks when quota limit is reached) posted (#3) for review on master by sanoj-unnikrishnan (sunnikri)
REVIEW: https://review.gluster.org/19650 (Quota: heal directory on newly added bricks when quota limit is reached) posted (#1) for review on master by sanoj-unnikrishnan
REVIEW: https://review.gluster.org/18554 (Quota: heal directory on newly added bricks when quota limit is reached) posted (#9) for review on master by sanoj-unnikrishnan
COMMIT: https://review.gluster.org/18554 committed in master by "Raghavendra G" <rgowdapp> with a commit message:

Quota: heal directory on newly added bricks when quota limit is reached

Problem: if a lookup is done on a newly added brick for a path on which the limit has been reached, the lookup fails to heal the directory tree because quota rejects it.

Solution: Tag the lookup as an internal fop and ignore it in quota. Since marking an internal fop does not usually give enough contextual information, new flags are introduced to pass the contextual info. dict_check_flag and dict_set_flag are added to aid flag operations. A flag is a single bit in a bit array (currently limited to 256 bits).

Change-Id: Ifb6a68bcaffedd425dd0f01f7db24edd5394c095
fixes: bz#1505355
BUG: 1505355
Signed-off-by: Sanoj Unnikrishnan <sunnikri>
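The commit describes each flag as a single bit in a bit array capped at 256 bits, set with dict_set_flag and tested with dict_check_flag. The standalone C sketch below only illustrates that bit-array scheme; the type, helper names and the FLAG_QUOTA_HEAL value are made up for this example and are not the actual libglusterfs API:

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define MAX_FLAGS  256                 /* bit array limited to 256 bits */
    #define FLAG_BYTES (MAX_FLAGS / 8)     /* 32 bytes back the whole array */

    /* illustrative flag number; the real flag values live in the gluster sources */
    #define FLAG_QUOTA_HEAL 5

    typedef struct {
            uint8_t bits[FLAG_BYTES];
    } flag_array_t;

    /* set a single bit: byte index is flag/8, bit position is flag%8 */
    static int
    flag_set (flag_array_t *fa, unsigned int flag)
    {
            if (flag >= MAX_FLAGS)
                    return -1;
            fa->bits[flag / 8] |= (uint8_t)(1u << (flag % 8));
            return 0;
    }

    /* check a single bit; returns 1 if set, 0 if clear, -1 on bad input */
    static int
    flag_check (const flag_array_t *fa, unsigned int flag)
    {
            if (flag >= MAX_FLAGS)
                    return -1;
            return (fa->bits[flag / 8] >> (flag % 8)) & 1;
    }

    int
    main (void)
    {
            flag_array_t fa;
            memset (&fa, 0, sizeof (fa));

            /* the healing lookup would set the flag in its request dict ... */
            flag_set (&fa, FLAG_QUOTA_HEAL);

            /* ... and quota would skip enforcement when it sees the flag */
            printf ("heal flag set: %d\n", flag_check (&fa, FLAG_QUOTA_HEAL));
            return 0;
    }

In the actual patch the flag is presumably carried in the lookup's dict, so quota can recognize the internal heal lookup and allow the directory to be created on the new brick even though the limit is reached.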
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/