Bug 1179050 - gluster vol clear-locks vol-name path kind all inode returns IO error in a disperse volume
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Xavi Hernandez
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1181977
 
Reported: 2015-01-06 03:02 UTC by jiademing.dd
Modified: 2015-05-14 17:45 UTC
CC List: 4 users

Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned To: 1181977
Environment:
Last Closed: 2015-05-14 17:28:49 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description jiademing.dd 2015-01-06 03:02:03 UTC
Description of problem:
   I created a disperse volume with 3 bricks and redundancy 1, started it, and mounted it. Then I ran "gluster vol clear-locks vol-name path kind all inode"; the command returned an IO error and all bricks went down (core dump).

   I tested the same command on a dht volume and it worked fine.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create a disperse volume with 3 bricks and redundancy 1, start it, and mount it.
2. Run "gluster vol clear-locks vol-name path kind all inode" for a path on the volume.

Actual results:
The command returns an IO error and all brick processes go down (core dump).

Expected results:
The command completes without error and the brick processes keep running.

Additional info:

Comment 1 Anand Avati 2015-01-13 10:00:57 UTC
REVIEW: http://review.gluster.org/9440 (ec: Don't use inodelk on getxattr when clearing locks) posted (#1) for review on master by Xavier Hernandez (xhernandez)

Comment 2 Xavi Hernandez 2015-01-13 10:03:06 UTC
This patch should solve the problem. However, the error shouldn't have caused a crash in the glusterfsd processes. I'll take a look at that.

Comment 3 Anand Avati 2015-01-13 12:56:21 UTC
REVIEW: http://review.gluster.org/9440 (ec: Don't use inodelk on getxattr when clearing locks) posted (#2) for review on master by Xavier Hernandez (xhernandez)

Comment 4 Anand Avati 2015-01-19 06:04:45 UTC
COMMIT: http://review.gluster.org/9440 committed in master by Vijay Bellur (vbellur) 
------
commit 4f734b04694feabe047d758c2a0a6cd8ce5fc450
Author: Xavier Hernandez <xhernandez>
Date:   Tue Jan 13 10:50:06 2015 +0100

    ec: Don't use inodelk on getxattr when clearing locks
    
    When command 'clear-locks' from cli is executed, a getxattr request
    is received by ec. This request was handled as usual, first locking
    the inode. Once this request was processed by the bricks, all locks
    were removed, including the lock used by ec.
    
    When ec then tried to unlock the previously acquired lock (which
    had already been released), glusterfsd crashed.
    
    This fix executes the getxattr request without any lock acquired
    for the clear-locks command.
    
    Change-Id: I77e550d13c4673d2468a1e13fe6e2fed20e233c6
    BUG: 1179050
    Signed-off-by: Xavier Hernandez <xhernandez>
    Reviewed-on: http://review.gluster.org/9440
    Reviewed-by: Dan Lambright <dlambrig>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>

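To make the mechanism described in the commit message concrete, the following is a minimal standalone C sketch of the idea behind the patch; it is not the actual ec xlator code. When the getxattr key is the virtual xattr used by the clear-locks command, the request is dispatched without first taking ec's own inodelk, so clearing the locks on the bricks can no longer invalidate a lock that ec still believes it holds. The key name "glusterfs.clrlk" and the helper name getxattr_needs_inodelk are assumptions used only for illustration.

    /* Standalone sketch of the fix idea; not GlusterFS source code. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Assumed name of the virtual xattr that carries the clear-locks command. */
    #define CLRLK_XATTR_PREFIX "glusterfs.clrlk"

    /* Decide whether a getxattr request still needs ec's inodelk. */
    static bool getxattr_needs_inodelk(const char *name)
    {
        /* For a clear-locks request, skip the lock: the bricks are about to
         * drop every lock on the inode, including any lock ec just took, so
         * a later unlock of that lock would target an already-released lock. */
        if (name != NULL && strncmp(name, CLRLK_XATTR_PREFIX,
                                    strlen(CLRLK_XATTR_PREFIX)) == 0)
            return false;
        return true;
    }

    int main(void)
    {
        printf("user.foo        -> take inodelk? %d\n",
               getxattr_needs_inodelk("user.foo"));
        printf("glusterfs.clrlk -> take inodelk? %d\n",
               getxattr_needs_inodelk("glusterfs.clrlk"));
        return 0;
    }

Skipping the lock is reasonable here because this getxattr does not read or modify file data; it only asks the bricks to clean up their lock tables, so the consistency that ec's inodelk normally provides is not needed.
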
Comment 5 Xavi Hernandez 2015-01-19 08:57:37 UTC
The crash in glusterfsd is caused by a bug in the locks xlator. This will be addressed in another bug.

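As a purely hypothetical illustration of the failure mode described above (an unlock arriving for a lock that clear-locks has already removed), the toy C sketch below shows how code that assumes the lock entry still exists can crash, while a defensive lookup simply reports the missing lock. This is not the locks xlator code and makes no claim about where the actual bug is; all names and the list-based lock table are invented for the example.

    /* Toy model of a per-inode lock table; not GlusterFS source code. */
    #include <stdio.h>
    #include <string.h>

    struct toy_lock {
        char owner[32];
        struct toy_lock *next;
    };

    /* Empty table: clear-locks has already removed every granted lock. */
    static struct toy_lock *lock_table = NULL;

    static struct toy_lock *find_lock(const char *owner)
    {
        for (struct toy_lock *l = lock_table; l != NULL; l = l->next)
            if (strcmp(l->owner, owner) == 0)
                return l;
        return NULL;
    }

    /* Unsafe unlock: assumes the lock is still present. */
    static void unlock_unsafe(const char *owner)
    {
        struct toy_lock *l = find_lock(owner);
        l->owner[0] = '\0';      /* NULL dereference if the lock is gone */
    }

    /* Defensive unlock: tolerates a lock that was already cleared. */
    static int unlock_safe(const char *owner)
    {
        struct toy_lock *l = find_lock(owner);
        if (l == NULL)
            return -1;           /* report "no such lock" instead of crashing */
        l->owner[0] = '\0';      /* placeholder for unlink/free/notify */
        return 0;
    }

    int main(int argc, char **argv)
    {
        printf("defensive unlock result: %d\n", unlock_safe("ec-client-1"));
        if (argc > 1)            /* pass any argument to trigger the crash */
            unlock_unsafe(argv[1]);
        return 0;
    }
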
Comment 6 Niels de Vos 2015-05-14 17:28:49 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

