Bug 1243798 - quota/marker: dir count in inode quota is not atomic
Summary: quota/marker: dir count in inode quota is not atomic
Alias: None
Product: GlusterFS
Classification: Community
Component: quota
Version: mainline
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Manikandan
QA Contact:
Depends On: 1243797
Blocks: 1270769
Reported: 2015-07-16 10:47 UTC by Vijaikumar Mallikarjuna
Modified: 2016-06-16 13:24 UTC (History)
3 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of: 1243797
: 1270769 (view as bug list)
Last Closed: 2016-06-16 13:24:08 UTC
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:


Comment 1 Anand Avati 2015-07-16 10:56:55 UTC
REVIEW: http://review.gluster.org/11694 (quota/marker: dir_count accounting is not atomic) posted (#1) for review on master by Vijaikumar Mallikarjuna (vmallika@redhat.com)

Comment 2 Vijay Bellur 2015-10-12 10:14:19 UTC
COMMIT: http://review.gluster.org/11694 committed in master by Raghavendra G (rgowdapp@redhat.com) 
commit d4bd690adae7ce69594c3322d0d7a8e3cb4f7303
Author: vmallika <vmallika@redhat.com>
Date:   Wed Oct 7 15:24:46 2015 +0530

    quota/marker: dir_count accounting is not atomic
    Consider the following scenario:
    Quota is enabled on pre-existing data, so the quota-crawl process
    starts healing xattrs. If a write is performed on a directory whose
    healing is not yet complete, the 'update txn' may start before the
    'create xattr txn'. In that case the dir count can be missed on a
    directory where the quota size xattr has not yet been created.
    One solution is to fetch the size xattr and, if it is missing, add 1
    to dir_count; but doing this in marker would require one additional
    fop on every update iteration.
    A better solution is to use the xattrop
    GF_XATTROP_ADD_ARRAY64_WITH_DEFAULT.
    Change-Id: Idc8978860a3914e70c98f96effeff52e9a24e6ba
    BUG: 1243798
    Signed-off-by: vmallika <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/11694
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

Comment 5 Niels de Vos 2016-06-16 13:24:08 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
