The primary goal is to support more bricks/volumes per server by reducing port and memory consumption. Secondarily, this could improve performance in the long term, though so far performance is slightly degraded. The feature page is here: https://github.com/gluster/glusterfs-specs/blob/master/under_review/multiplexing.md (note that this might move to a different directory as the feature moves through the upstream feature approval/tracking process).
REVIEW: http://review.gluster.org/15645 (libglusterfs: make memory pools more thread-friendly) posted (#4-#6) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/15643 (io-threads: do global scaling instead of per-instance) posted (#4-#8) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/15645 (libglusterfs: make memory pools more thread-friendly) posted (#7) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/14763 (core: run many bricks within one glusterfsd process) posted (#22) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/15645 (libglusterfs: make memory pools more thread-friendly) posted (#8-#9) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/14763 (core: run many bricks within one glusterfsd process) posted (#23-#43) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/15645 (libglusterfs: make memory pools more thread-friendly) posted (#10) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/14763 (core: run many bricks within one glusterfsd process) posted (#44-#46) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/16363 (multiple: combo patch with multiplexing enabled) posted (#5-#8) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/14763 (core: run many bricks within one glusterfsd process) posted (#47-#48) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/16363 (multiple: combo patch with multiplexing enabled) posted (#9) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/14763 (core: run many bricks within one glusterfsd process) posted (#49) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/16363 (multiple: combo patch with multiplexing enabled) posted (#10) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/14763 (core: run many bricks within one glusterfsd process) posted (#50) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/16363 (multiple: combo patch with multiplexing enabled) posted (#11) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/14763 (core: run many bricks within one glusterfsd process) posted (#51) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/16363 (multiple: combo patch with multiplexing enabled) posted (#12) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/14763 (core: run many bricks within one glusterfsd process) posted (#52) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/16363 (multiple: combo patch with multiplexing enabled) posted (#13) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/14763 (core: run many bricks within one glusterfsd process) posted (#53) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/16363 (multiple: combo patch with multiplexing enabled) posted (#14-#15) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/14763 (core: run many bricks within one glusterfsd process) posted (#54-#56) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/16363 (multiple: combo patch with multiplexing enabled) posted (#16) for review on master by Jeff Darcy (jdarcy)
REVIEW: https://review.gluster.org/14763 (core: run many bricks within one glusterfsd process) posted (#59-#71) for review on master by Jeff Darcy (jdarcy)
REVIEW: https://review.gluster.org/16363 (multiple: combo patch with multiplexing enabled) posted (#18) for review on master by Jeff Darcy (jdarcy)
REVIEW: https://review.gluster.org/14763 (core: run many bricks within one glusterfsd process) posted (#72) for review on master by Jeff Darcy (jdarcy)
REVIEW: https://review.gluster.org/16363 (multiple: combo patch with multiplexing enabled) posted (#19) for review on master by Jeff Darcy (jdarcy)
REVIEW: https://review.gluster.org/14763 (core: run many bricks within one glusterfsd process) posted (#73-#74) for review on master by Jeff Darcy (jdarcy)
REVIEW: https://review.gluster.org/15645 (libglusterfs: make memory pools more thread-friendly) posted (#11) for review on master by Jeff Darcy (jdarcy)
REVIEW: https://review.gluster.org/14763 (core: run many bricks within one glusterfsd process) posted (#75) for review on master by Jeff Darcy (jdarcy)
REVIEW: https://review.gluster.org/15645 (libglusterfs: make memory pools more thread-friendly) posted (#12) for review on master by Jeff Darcy (jdarcy)
COMMIT: https://review.gluster.org/14763 committed in master by Vijay Bellur (vbellur)
------
commit 1a95fc3036db51b82b6a80952f0908bc2019d24a
Author: Jeff Darcy <jdarcy>
Date: Thu Dec 8 16:24:15 2016 -0500

core: run many bricks within one glusterfsd process

This patch adds support for multiple brick translator stacks running in a single brick server process. This reduces our per-brick memory usage by approximately 3x, and our appetite for TCP ports even more. It also creates potential to avoid process/thread thrashing, and to improve QoS by scheduling more carefully across the bricks, but realizing that potential will require further work.

Multiplexing is controlled by the "cluster.brick-multiplex" global option. By default it's off, and bricks are started in separate processes as before. If multiplexing is enabled, then *compatible* bricks (mostly those with the same transport options) will be started in the same process.

Change-Id: I45059454e51d6f4cbb29a4953359c09a408695cb
BUG: 1385758
Signed-off-by: Jeff Darcy <jdarcy>
Reviewed-on: https://review.gluster.org/14763
Smoke: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
Reviewed-by: Vijay Bellur <vbellur>
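The compatibility rule in the commit above (bricks are multiplexed into one process only when their transport options match) can be sketched as a simple grouping by transport key. This is an illustrative Python model, not Gluster's C implementation; the `bricks` structure, its field names, and `group_compatible_bricks` are all invented for the example.

```python
from collections import defaultdict

def group_compatible_bricks(bricks):
    """Group brick descriptors so that bricks sharing the same
    transport options land in the same (multiplexed) process.
    `bricks` is a list of dicts with hypothetical 'name' and
    'transport' keys; not Gluster's real data structures."""
    groups = defaultdict(list)
    for brick in bricks:
        # The transport options act as the compatibility key.
        key = tuple(sorted(brick["transport"].items()))
        groups[key].append(brick["name"])
    return list(groups.values())

bricks = [
    {"name": "b1", "transport": {"type": "tcp", "ssl": "off"}},
    {"name": "b2", "transport": {"type": "tcp", "ssl": "off"}},
    {"name": "b3", "transport": {"type": "tcp", "ssl": "on"}},
]
print(group_compatible_bricks(bricks))  # → [['b1', 'b2'], ['b3']]
```

With multiplexing off, every brick would instead get its own singleton group (one process per brick), which is the pre-existing behavior the option preserves by default.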
REVIEW: https://review.gluster.org/15645 (libglusterfs: make memory pools more thread-friendly) posted (#13) for review on master by Jeff Darcy (jdarcy)
REVIEW: https://review.gluster.org/16509 (glusterd: double-check whether brick is alive for stats) posted (#2) for review on master by Jeff Darcy (jdarcy)
REVIEW: https://review.gluster.org/15645 (libglusterfs: make memory pools more thread-friendly) posted (#14) for review on master by Jeff Darcy (jdarcy)
REVIEW: https://review.gluster.org/16510 (socket: retry connect immediately if it fails) posted (#2) for review on master by Jeff Darcy (jdarcy)
REVIEW: https://review.gluster.org/16527 (tests: fix online_brick_count for multiplexing) posted (#1-#2) for review on master by Jeff Darcy (jdarcy)
REVIEW: https://review.gluster.org/16528 (tests: use kill_brick instead of kill -9) posted (#2) for review on master by Jeff Darcy (jdarcy)
COMMIT: https://review.gluster.org/16509 committed in master by Atin Mukherjee (amukherj)
------
commit 72452e07fcf91627afb6d45b921dfefd2610686f
Author: Jeff Darcy <jdarcy>
Date: Wed Feb 1 21:54:30 2017 -0500

glusterd: double-check whether brick is alive for stats

With multiplexing, our tests detach bricks from their host processes without glusterd being involved. Thus, when we ask glusterd to fetch profile info, it will try to fetch from a brick that's actually not present any more. While it can handle the process being dead and its RPC connection being closed, it barfs if it gets a negative response from a live brick process. This is not a problem in normal use, because the brick can't disappear without glusterd seeing it. The fix is to double-check that the brick is actually running, by looking for its pidfile, which the tests *do* clean up as part of killing a brick.

Change-Id: I098465b175ecf23538bd7207357c752a2bba8f4e
BUG: 1385758
Signed-off-by: Jeff Darcy <jdarcy>
Reviewed-on: https://review.gluster.org/16509
Smoke: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
Reviewed-by: Atin Mukherjee <amukherj>
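The double-check described above amounts to: read the PID from the brick's pidfile, then probe that PID. A hedged Python sketch of the idea (glusterd's real check is C; `brick_is_alive` and its argument are invented names, and signal 0 is the standard POSIX existence probe):

```python
import os

def brick_is_alive(pidfile_path):
    """Return True only if the pidfile exists *and* the recorded
    PID refers to a live process. Sketch of the liveness
    double-check; not glusterd's actual code."""
    try:
        with open(pidfile_path) as f:
            pid = int(f.read().strip())
    except (OSError, ValueError):
        # pidfile missing or unreadable: the brick was detached/killed
        return False
    try:
        os.kill(pid, 0)  # signal 0: existence check, sends no signal
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # process exists but belongs to another user
    return True
```

Because the tests remove the pidfile when they kill a brick, the pidfile check catches exactly the case where glusterd's internal state has gone stale.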
REVIEW: https://review.gluster.org/16529 (glusterd: double-check brick liveness for remove-brick validation) posted (#1) for review on master by Jeff Darcy (jdarcy)
COMMIT: https://review.gluster.org/15645 committed in master by Shyamsundar Ranganathan (srangana)
------
commit ae47befebeda2de5fd2d706090cbacf4ef60c785
Author: Jeff Darcy <jdarcy>
Date: Fri Oct 14 10:04:07 2016 -0400

libglusterfs: make memory pools more thread-friendly

Early multiplexing tests revealed *massive* contention on certain pools' global locks - especially for dictionaries and secondarily for call stubs. For the thread counts that multiplexing can create, a more lock-free solution is clearly needed. Also, the current mem-pool implementation does a poor job releasing memory back to the system, artificially inflating memory usage to match whatever the worst case was since the process started. This is bad in general, but especially so for multiplexing, where there are more pools and a major point of the whole exercise is to reduce memory consumption.

The basic ideas for the new design are these:

- There is one pool, globally, for each power-of-two size range. Every attempt to create a new pool within this range will instead add a reference to the existing pool.
- Instead of adding pools for each translator within each multiplexed brick (potentially infinite and quite possibly thousands), we allocate one set of size-based pools per *thread* (hundreds at worst).
- Each per-thread pool is divided into hot and cold lists. Every allocation first attempts to use the hot list, then the cold list. When objects are freed, they always go on the hot list.
- There is one global "pool sweeper" thread, which periodically reclaims everything in each pool's cold list and then "demotes" the current hot list to be the new cold list.

For normal allocation activity, only a per-thread lock need be taken, and even that only to guard against very rare contention from the pool sweeper. When threads start and stop, a global lock must be taken to add them to the pool sweeper's list. Lock contention is therefore extremely low, and the hot/cold lists also provide good locality.

A more complete explanation (of a similar earlier design) can be found here: http://www.gluster.org/pipermail/gluster-devel/2016-October/051160.html

Change-Id: I5bc8a1ba57cfb553998f979a498886e0d006e665
BUG: 1385758
Signed-off-by: Jeff Darcy <jdarcy>
Reviewed-on: https://review.gluster.org/15645
Reviewed-by: Xavier Hernandez <xhernandez>
Smoke: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
Reviewed-by: Shyamsundar Ranganathan <srangana>
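The hot/cold list scheme above can be modeled in a few lines. This is an illustrative Python sketch, not the libglusterfs C code: `PerThreadPool`, its 64-byte objects, and the explicit `sweep()` call stand in for the real per-thread pools and the global pool-sweeper thread. Note the key property: a freed object survives one sweep (hot → cold) and is reclaimed on the next, so memory tracks recent demand rather than the all-time peak.

```python
import threading

class PerThreadPool:
    """Toy model of the hot/cold pool design described above.
    Frees always go on the hot list; allocations drain hot first,
    then cold; the sweeper demotes hot -> cold and drops the old
    cold list, returning that memory to the system."""

    def __init__(self):
        # In the real design this per-thread lock is contended only
        # (and rarely) by the pool sweeper, so it stays cheap.
        self.lock = threading.Lock()
        self.hot = []
        self.cold = []

    def get(self):
        with self.lock:
            if self.hot:
                return self.hot.pop()
            if self.cold:
                return self.cold.pop()
        return bytearray(64)  # miss: fall back to a fresh allocation

    def put(self, obj):
        with self.lock:
            self.hot.append(obj)  # freed objects always go hot

    def sweep(self):
        """What the global pool-sweeper thread does periodically."""
        with self.lock:
            self.cold = self.hot  # demote hot to cold
            self.hot = []
        # the previous cold list is dropped here (reclaimed)

pool = PerThreadPool()
obj = pool.get()
pool.put(obj)
pool.sweep()
assert pool.get() is obj  # still cached: survived one sweep
```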
REVIEW: https://review.gluster.org/16527 (tests: fix online_brick_count for multiplexing) posted (#3) for review on master by Jeff Darcy (jdarcy)
COMMIT: https://review.gluster.org/16510 committed in master by Shyamsundar Ranganathan (srangana)
------
commit 5a57c1592a34ee6632ca1fb38e076dde381d1ae2
Author: Jeff Darcy <jdarcy>
Date: Wed Feb 1 22:00:32 2017 -0500

socket: retry connect immediately if it fails

Previously we relied on a complex dance of setting flags, shutting down the socket, tearing stuff down, getting an event, tearing more stuff down, and waiting for a higher-level retry. What we really need, in the case where we're just trying to connect prematurely, e.g. to a brick that hasn't fully come up yet, is a simple retry of the connect(2) call. This was discovered by observing failures in ec-new-entry.t with multiplexing enabled, but probably fixes other random failures as well.

Change-Id: Ibedb8942060bccc96b02272a333c3002c9b77d4c
BUG: 1385758
Signed-off-by: Jeff Darcy <jdarcy>
Reviewed-on: https://review.gluster.org/16510
Smoke: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
Reviewed-by: Shyamsundar Ranganathan <srangana>
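The idea, retrying connect(2) itself rather than unwinding the whole connection state machine, can be sketched as follows. This is a hedged Python illustration; the real fix lives in Gluster's C socket transport, and `connect_with_retry`, `attempts`, and `delay` are invented names for the example.

```python
import errno
import socket
import time

def connect_with_retry(addr, attempts=5, delay=0.1):
    """Retry connect(2) in place when the peer isn't up yet,
    instead of tearing down state and waiting for a higher-level
    retry. Sketch only; not Gluster's actual transport code."""
    last_err = None
    for _ in range(attempts):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.connect(addr)
            return sock  # caller owns the connected socket
        except OSError as e:
            sock.close()
            last_err = e
            # Only retry "peer not ready yet" style failures.
            if e.errno not in (errno.ECONNREFUSED, errno.ETIMEDOUT):
                raise
            time.sleep(delay)  # brick may still be coming up
    raise last_err
```

The point of the commit is exactly this locality: a premature connect to a brick that is still starting needs one more `connect()` call, not a full teardown/event/reconnect cycle.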
COMMIT: https://review.gluster.org/16528 committed in master by Jeff Darcy (jdarcy)
------
commit 1e849f616f6ae2f97ab307abaff69182c87571ca
Author: Jeff Darcy <jdarcy>
Date: Thu Feb 2 11:41:06 2017 -0500

tests: use kill_brick instead of kill -9

The system actually handles this OK, but with multiplexing the result of killing the whole process is not what some tests assumed.

Change-Id: I89ebf0039ab1369f25b0bfec3710ec4c13725915
BUG: 1385758
Signed-off-by: Jeff Darcy <jdarcy>
Reviewed-on: https://review.gluster.org/16528
Smoke: Gluster Build System <jenkins.org>
Reviewed-by: Vijay Bellur <vbellur>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
COMMIT: https://review.gluster.org/16529 committed in master by Atin Mukherjee (amukherj)
------
commit b4aac5d334d5102ef3b935a200502cfcd63bd12c
Author: Jeff Darcy <jdarcy>
Date: Thu Feb 2 13:08:04 2017 -0500

glusterd: double-check brick liveness for remove-brick validation

Same problem as https://review.gluster.org/#/c/16509/ in a different place. Tests detach bricks without glusterd's knowledge, so glusterd's internal brick state is out of date and we have to re-check (via the brick's pidfile) as well.

BUG: 1385758
Change-Id: I169538c1c62d72a685a49d57ef65fb6c3db6eab2
Signed-off-by: Jeff Darcy <jdarcy>
Reviewed-on: https://review.gluster.org/16529
Smoke: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
Reviewed-by: Atin Mukherjee <amukherj>
REVIEW: https://review.gluster.org/16544 (tests: keep snapshot bricks separate from regular ones) posted (#2) for review on master by Jeff Darcy (jdarcy)
COMMIT: https://review.gluster.org/16527 committed in master by Vijay Bellur (vbellur)
------
commit 4a7fd196d4a141f2b693d5b49995733f6ad1776f
Author: Jeff Darcy <jdarcy>
Date: Thu Feb 2 10:22:00 2017 -0500

tests: fix online_brick_count for multiplexing

The number of brick processes no longer matches the number of bricks, therefore counting processes doesn't work. Counting *pidfiles* does. Ironically, the fix broke multiplex.t, which used this function, so it now uses a different function with the old process-counting behavior. Also had to fix online_brick_count and kill_node in cluster.rc to be consistent with the new reality.

Change-Id: I4e81a6633b93227e10604f53e18a0b802c75cbcc
BUG: 1385758
Signed-off-by: Jeff Darcy <jdarcy>
Reviewed-on: https://review.gluster.org/16527
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
Smoke: Gluster Build System <jenkins.org>
Reviewed-by: Vijay Bellur <vbellur>
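The pidfile-counting idea above can be sketched like this. The real helper is a shell function in the test harness; this Python version is illustrative only, and `run_dir` is a hypothetical stand-in for glusterd's pidfile directory.

```python
import glob
import os

def online_brick_count(run_dir):
    """Count bricks by pidfile rather than by process: with
    multiplexing, one glusterfsd process can host many bricks,
    so counting processes undercounts. Sketch, not the real
    (shell) test helper."""
    return len(glob.glob(os.path.join(run_dir, "*.pid")))
```

Since killing a brick in the tests also removes its pidfile, the pidfile count tracks live bricks correctly whether or not they share a process.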
REVIEW: https://review.gluster.org/16544 (glusterd: keep snapshot bricks separate from regular ones) posted (#3-#6) for review on master by Jeff Darcy (jdarcy)
COMMIT: https://review.gluster.org/16544 committed in master by Jeff Darcy (jdarcy)
------
commit f1c6ae24361b1bf39794a34ea35a0202a6b49fa6
Author: Jeff Darcy <jdarcy>
Date: Fri Feb 3 10:51:21 2017 -0500

glusterd: keep snapshot bricks separate from regular ones

The problem here is that a volume's transport options can change, but any snapshots' bricks don't follow along even though they're now incompatible (with respect to multiplexing). This was causing the USS+SSL test to fail. By keeping the snapshot bricks separate (though still potentially multiplexed with other snapshot bricks, including those for other volumes) we can ensure that they remain unaffected by changes to their parent volumes. Also fixed various issues with how the test waits (or more precisely didn't) for various events to complete before it continues.

Change-Id: Iab4a8a44fac5760373fac36956a3bcc27cf969da
BUG: 1385758
Signed-off-by: Jeff Darcy <jdarcy>
Reviewed-on: https://review.gluster.org/16544
Smoke: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
Reviewed-by: Avra Sengupta <asengupt>
Tested-by: Avra Sengupta <asengupt>
*** Bug 1360401 has been marked as a duplicate of this bug. ***
Any future work on this can be done as separate bugs.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.11.0, please open a new bug report. glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/