Description of problem: After creating, starting, and mounting a disperse volume (disperse 3, redundancy 1), running "gluster vol clear-locks <vol-name> <path> kind all inode" returns an I/O error and all brick processes go down with a core dump. The same command on a DHT volume works fine.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create a disperse 3 redundancy 1 volume, start it, and mount it.
2. Run "gluster vol clear-locks <vol-name> <path> kind all inode".

Actual results: The command returns an I/O error and all bricks go down (core dump).

Expected results: The locks are cleared without errors and the brick processes stay up.

Additional info:
REVIEW: http://review.gluster.org/9440 (ec: Don't use inodelk on getxattr when clearing locks) posted (#1) for review on master by Xavier Hernandez (xhernandez)
This patch should solve the problem. However, the error shouldn't have caused a crash in the glusterfsd processes. I'll take a look at that.
REVIEW: http://review.gluster.org/9440 (ec: Don't use inodelk on getxattr when clearing locks) posted (#2) for review on master by Xavier Hernandez (xhernandez)
COMMIT: http://review.gluster.org/9440 committed in master by Vijay Bellur (vbellur)
------
commit 4f734b04694feabe047d758c2a0a6cd8ce5fc450
Author: Xavier Hernandez <xhernandez>
Date: Tue Jan 13 10:50:06 2015 +0100

    ec: Don't use inodelk on getxattr when clearing locks

    When the 'clear-locks' command is executed from the cli, a getxattr
    request is received by ec. This request was handled as usual, first
    locking the inode. Once the bricks had processed the request, all
    locks on the inode were removed, including the lock held by ec
    itself. When ec then tried to unlock the previously acquired lock
    (which had already been released), glusterfsd crashed.

    This fix executes the getxattr request without acquiring any lock
    when it belongs to the clear-locks command.

    Change-Id: I77e550d13c4673d2468a1e13fe6e2fed20e233c6
    BUG: 1179050
    Signed-off-by: Xavier Hernandez <xhernandez>
    Reviewed-on: http://review.gluster.org/9440
    Reviewed-by: Dan Lambright <dlambrig>
    Tested-by: Gluster Build System <jenkins.com>
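For illustration, here is a minimal standalone sketch of the idea behind the fix: before wrapping a getxattr in its own inodelk, ec checks whether the request carries the clear-locks virtual xattr key and, if so, runs it lockless. The helper name ec_getxattr_needs_lock() is hypothetical and not the actual patch code; GF_XATTR_CLRLK_CMD ("glusterfs.clrlk") is the virtual xattr key the CLI's clear-locks command is translated into. The real change lives in ec's getxattr handling (see the review link above).

    /* Hypothetical sketch; only GF_XATTR_CLRLK_CMD comes from GlusterFS. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define GF_XATTR_CLRLK_CMD "glusterfs.clrlk"

    /* Decide whether ec should wrap this getxattr in its own inodelk.
     * For a clear-locks request the answer must be "no": the bricks
     * will drop every lock on the inode, including the one ec just
     * acquired, so ec's later unlock would act on an already-released
     * lock and crash glusterfsd. */
    static bool
    ec_getxattr_needs_lock(const char *name)
    {
        if ((name != NULL) &&
            (strncmp(name, GF_XATTR_CLRLK_CMD,
                     strlen(GF_XATTR_CLRLK_CMD)) == 0)) {
            return false; /* clear-locks request: run it lockless */
        }
        return true; /* any other getxattr keeps the usual inodelk */
    }

    int
    main(void)
    {
        printf("user.foo        needs lock: %d\n",
               ec_getxattr_needs_lock("user.foo"));
        printf("glusterfs.clrlk needs lock: %d\n",
               ec_getxattr_needs_lock(GF_XATTR_CLRLK_CMD));
        return 0;
    }

The point of the design is simply that the lock/unlock cycle is bypassed for this one request, so ec never ends up trying to release a lock that clear-locks has already destroyed.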
The crash in glusterfsd is caused by a bug in the locks xlator. That will be addressed in a separate bug report.
This bug is getting closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report. glusterfs-3.7.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939 [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user