Bug 1332994 - Self Heal fails on a replica3 volume with 'disk quota exceeded'
Summary: Self Heal fails on a replica3 volume with 'disk quota exceeded'
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: quota
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1332199
Blocks: 1335283 1335686
 
Reported: 2016-05-04 13:33 UTC by Pranith Kumar K
Modified: 2017-03-27 18:22 UTC
CC List: 6 users

Fixed In Version: glusterfs-3.9.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 1332199
Clones: 1335283
Environment:
Last Closed: 2017-03-27 18:22:53 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Comment 1 Vijay Bellur 2016-05-04 13:53:18 UTC
REVIEW: http://review.gluster.org/14211 (cluster/afr: Do heals with shd pid) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 2 Vijay Bellur 2016-05-06 03:50:58 UTC
COMMIT: http://review.gluster.org/14211 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit e66add8a304ca610b74ecbbe48cec72dba582340
Author: Pranith Kumar K <pkarampu>
Date:   Wed May 4 19:05:28 2016 +0530

    cluster/afr: Do heals with shd pid
    
    Multi-threaded healing doesn't create its synctasks with the shd
    pid, which leads to healing failures when the quota is exceeded.
    
    BUG: 1332994
    Change-Id: I80f57c1923756f3298730b8820498127024e1209
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/14211
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Ravishankar N <ravishankar>
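
For readers unfamiliar with the quota/shd interaction, below is a minimal, self-contained C sketch of the mechanism the commit message describes. It is an illustration, not the GlusterFS sources: the real quota translator uses different structures, and only the name GF_CLIENT_PID_SELF_HEALD mirrors an actual constant (internal clients are identified by negative frame pids; the value used here is illustrative). The premise is that quota enforcement exempts internal clients, while the buggy multi-threaded heal path spawned synctasks whose frames carried an ordinary pid, so heal writes were throttled like client I/O.

    /* Illustrative model only -- not the GlusterFS sources. */
    #include <errno.h>
    #include <stdio.h>

    /* Internal clients use negative pids; the constant's name mirrors
     * the real GF_CLIENT_PID_SELF_HEALD, the value is illustrative. */
    #define GF_CLIENT_PID_SELF_HEALD (-6)

    struct frame {
        int pid; /* pid stamped on every fop issued from this frame */
    };

    /* Simplified quota enforcement: ordinary clients (pid >= 0) are
     * rejected once a write would cross the limit; internal daemons
     * (pid < 0) are allowed through so maintenance I/O can proceed. */
    static int
    quota_check_limit(struct frame *frame, long usage, long limit, long size)
    {
        if (frame->pid >= 0 && usage + size > limit)
            return -EDQUOT; /* the "Disk quota exceeded" the heal hit */
        return 0;
    }

    int
    main(void)
    {
        long usage = 95, limit = 100, size = 10;

        /* Buggy path: the multi-threaded heal synctask ran with a
         * frame that did not carry the shd pid. */
        struct frame before = { .pid = 1234 };

        /* Fixed path: heal synctasks run with the shd pid. */
        struct frame after = { .pid = GF_CLIENT_PID_SELF_HEALD };

        printf("heal write before fix: %d\n",
               quota_check_limit(&before, usage, limit, size)); /* -EDQUOT */
        printf("heal write after fix:  %d\n",
               quota_check_limit(&after, usage, limit, size));  /* 0 */
        return 0;
    }

The actual change (review 14211, summarized above) makes the multi-threaded heal path create its synctasks from a frame stamped with the self-heal daemon pid, so quota classifies those writes as internal and healing can complete even when the volume is over its quota.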

Comment 3 Shyamsundar 2017-03-27 18:22:53 UTC
This bug is being closed because a release that should address the reported issue is now available. If the problem persists with glusterfs-3.9.0, please open a new bug report.

glusterfs-3.9.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2016-November/029281.html
[2] https://www.gluster.org/pipermail/gluster-users/

