Description of problem: When a fop starts a subfop as part of its own execution, the subfop can finish before the manager has completed processing the current state. In that case ec_resume() is called and a new instance of the same state machine is executed by another thread, which can cause multiple problems.

Version-Release number of selected component (if applicable): mainline
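For context, the general pattern for avoiding this kind of race looks roughly like the sketch below. This is a minimal, self-contained illustration, not the actual cluster/ec code; struct fop, fop_resume() and fop_process_state() are hypothetical names. The idea is that a completion arriving while another thread is still driving the state machine only records a pending resume, and the driving thread loops until nothing is pending, so at most one thread executes the machine at a time:

/*
 * Hypothetical sketch of a single-driver state machine guard.
 * Not the actual cluster/ec implementation.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct fop {
    pthread_mutex_t lock;
    bool running;  /* a thread is currently driving the state machine */
    int pending;   /* completions reported while 'running' was true */
    int state;
};

/* Process one state transition. Real code would dispatch on fop->state. */
static void fop_process_state(struct fop *fop) {
    printf("processing state %d\n", fop->state);
    fop->state++;
}

/*
 * Called whenever a subfop completes. A caller that finds the machine
 * already running just records that another resume is owed; the thread
 * that is running picks it up before releasing ownership.
 */
static void fop_resume(struct fop *fop) {
    pthread_mutex_lock(&fop->lock);
    fop->pending++;
    if (fop->running) {
        /* Another thread is already executing: let it pick this up. */
        pthread_mutex_unlock(&fop->lock);
        return;
    }
    fop->running = true;
    while (fop->pending > 0) {
        fop->pending--;
        pthread_mutex_unlock(&fop->lock);
        fop_process_state(fop);  /* run the state outside the lock */
        pthread_mutex_lock(&fop->lock);
    }
    fop->running = false;
    pthread_mutex_unlock(&fop->lock);
}

int main(void) {
    struct fop fop = { .lock = PTHREAD_MUTEX_INITIALIZER };
    fop_resume(&fop);  /* e.g. invoked from a subfop completion callback */
    fop_resume(&fop);
    return 0;
}

With this shape, a fast subfop completion cannot start a second, parallel execution of the same state machine; it can only extend the work queue of the thread that already owns it.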
REVIEW: http://review.gluster.org/11317 (cluster/ec: Avoid parallel executions of the same state machine) posted (#1) for review on master by Xavier Hernandez (xhernandez)
REVIEW: http://review.gluster.org/11317 (cluster/ec: Avoid parallel executions of the same state machine) posted (#2) for review on master by Xavier Hernandez (xhernandez)
REVIEW: http://review.gluster.org/11317 (cluster/ec: Avoid parallel executions of the same state machine) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/11317 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit 4442449f1436e47c84c55c3f0d8f1a8b248db4b6
Author: Xavier Hernandez <xhernandez>
Date:   Thu Jun 18 16:44:55 2015 +0200

    cluster/ec: Avoid parallel executions of the same state machine

    In very rare circumstances it was possible that a subfop started by
    another fop could finish fast enough to cause two or more instances
    of the same state machine to be executing at the same time.

    Change-Id: I319924a18bd3f88115e751a66f8f4560435e0e0e
    BUG: 1233258
    Signed-off-by: Xavier Hernandez <xhernandez>
    Reviewed-on: http://review.gluster.org/11317
    Tested-by: Pranith Kumar Karampuri <pkarampu>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ was fixed in a GlusterFS release and closed, so this mainline BZ is being closed as well.
This bug is being closed because a release is now available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user