Description of problem:
Avoid using spinlocks on single-core machines.

Version-Release number of selected component (if applicable):
mainline

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
REVIEW: http://review.gluster.org/13432 (lock: use spinlock only on multicore systems) posted (#1) for review on master by Prasanna Kumar Kalever (pkalever)
REVIEW: http://review.gluster.org/13432 (lock: use spinlock only on multicore systems) posted (#2) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/13432 (lock: use spinlock only on multicore systems) posted (#3) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/13432 (lock: use spinlock only on multicore systems) posted (#4) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/13432 (lock: use spinlock only on multicore systems) posted (#5) for review on master by Prasanna Kumar Kalever (pkalever)
REVIEW: http://review.gluster.org/13432 (lock: use spinlock only on multicore systems) posted (#6) for review on master by Prasanna Kumar Kalever (pkalever)
REVIEW: http://review.gluster.org/13432 (lock: use spinlock only on multicore systems) posted (#7) for review on master by Prasanna Kumar Kalever (pkalever)
COMMIT: http://review.gluster.org/13432 committed in master by Jeff Darcy (jdarcy)
------
commit 7e44c783ad731856956929f6614bbe045c26ea3a
Author: Prasanna Kumar Kalever <prasanna.kalever>
Date:   Thu Feb 11 23:45:37 2016 +0530

    lock: use spinlock only on multicore systems

    Using spinlocks on a single-core system usually makes no sense: as long
    as the spinlock polling occupies the only available CPU core, no other
    thread can run, and since no other thread can run, the lock will not be
    released until the spinning thread's time quantum expires and it is
    de-scheduled. In other words, on such systems a spinlock wastes CPU time
    for no real benefit. If the thread were put to sleep instead, another
    thread could run at once, possibly releasing the lock and allowing the
    first thread to continue once it wakes up again.

    Change-Id: I0ffc14e26c2e150b564bcb682a576859ab1d1872
    BUG: 1306807
    Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever>
    Reviewed-on: http://review.gluster.org/13432
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Jeff Darcy <jdarcy>
This bug was accidentally moved from POST to MODIFIED by an error in automation; please contact mmccune with any questions.
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/ [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user