+++ This bug was initially created as a clone of Bug #1451248 +++
+++ This bug was initially created as a clone of Bug #1450889 +++

Description of problem:
========================
When a node is rebooted with brick multiplexing enabled on a multi-volume setup, many glusterfsd processes are spawned (one per brick), so the benefit of the brick mux feature is lost.

Version-Release number of selected component (if applicable):
========
3.8.4-25

How reproducible:
========
Always

Steps to Reproduce:
1. Have a 3-node setup with brick mux enabled and volumes, say, v1..v10, each volume being a 1x3 with one brick per node (all on independent LVs).
2. Observe that only one glusterfsd per node exists (a small standalone checker sketch for this follows the process listing below).
3. Reboot node1.
4. After the node comes back up, the status is as follows:

Last login: Mon May 15 15:56:55 2017 from dhcp35-77.lab.eng.blr.redhat.com
[root@dhcp35-45 ~]# ps -ef|grep glusterfsd
root 4693 1 42 15:56 ? 00:02:07 /usr/sbin/glusterfsd -s 10.70.35.45 --volfile-id 1.10.70.35.45.rhs-brick1-1 -p /var/lib/glusterd/vols/1/run/10.70.35.45-rhs-brick1-1.pid -S /var/run/gluster/a19832cf9844ad10112aba39eba569a6.socket --brick-name /rhs/brick1/1 -l /var/log/glusterfs/bricks/rhs-brick1-1.log --xlator-option *-posix.glusterd-uuid=e4f737cd-59a2-4392-aa3d-4230f698f128 --brick-port 49152 --xlator-option 1-server.listen-port=49152
root 4701 1 0 15:56 ? 00:00:00 /usr/sbin/glusterfsd -s 10.70.35.45 --volfile-id 10.10.70.35.45.rhs-brick10-10 -p /var/lib/glusterd/vols/10/run/10.70.35.45-rhs-brick10-10.pid -S /var/run/gluster/fd40f022ab677d36e57793a60cc16166.socket --brick-name /rhs/brick10/10 -l /var/log/glusterfs/bricks/rhs-brick10-10.log --xlator-option *-posix.glusterd-uuid=e4f737cd-59a2-4392-aa3d-4230f698f128 --brick-port 49153 --xlator-option 10-server.listen-port=49153
root 4709 1 0 15:56 ? 00:00:00 /usr/sbin/glusterfsd -s 10.70.35.45 --volfile-id 2.10.70.35.45.rhs-brick2-2 -p /var/lib/glusterd/vols/2/run/10.70.35.45-rhs-brick2-2.pid -S /var/run/gluster/898f4e556d871cfb1613d6ff121bd5e6.socket --brick-name /rhs/brick2/2 -l /var/log/glusterfs/bricks/rhs-brick2-2.log --xlator-option *-posix.glusterd-uuid=e4f737cd-59a2-4392-aa3d-4230f698f128 --brick-port 49154 --xlator-option 2-server.listen-port=49154
root 4719 1 0 15:56 ? 00:00:00 /usr/sbin/glusterfsd -s 10.70.35.45 --volfile-id 3.10.70.35.45.rhs-brick3-3 -p /var/lib/glusterd/vols/3/run/10.70.35.45-rhs-brick3-3.pid -S /var/run/gluster/af3354d92921146c0e8d3bebdcbec907.socket --brick-name /rhs/brick3/3 -l /var/log/glusterfs/bricks/rhs-brick3-3.log --xlator-option *-posix.glusterd-uuid=e4f737cd-59a2-4392-aa3d-4230f698f128 --brick-port 49155 --xlator-option 3-server.listen-port=49155
root 4728 1 44 15:56 ? 00:02:13 /usr/sbin/glusterfsd -s 10.70.35.45 --volfile-id 4.10.70.35.45.rhs-brick4-4 -p /var/lib/glusterd/vols/4/run/10.70.35.45-rhs-brick4-4.pid -S /var/run/gluster/cafb15e7ed1d462ddf513e7cf80ca718.socket --brick-name /rhs/brick4/4 -l /var/log/glusterfs/bricks/rhs-brick4-4.log --xlator-option *-posix.glusterd-uuid=e4f737cd-59a2-4392-aa3d-4230f698f128 --brick-port 49156 --xlator-option 4-server.listen-port=49156
root 4734 1 0 15:56 ? 00:00:00 /usr/sbin/glusterfsd -s 10.70.35.45 --volfile-id 5.10.70.35.45.rhs-brick5-5 -p /var/lib/glusterd/vols/5/run/10.70.35.45-rhs-brick5-5.pid -S /var/run/gluster/5a92ed518f554fe96a3c3f4a1ecf5cb3.socket --brick-name /rhs/brick5/5 -l /var/log/glusterfs/bricks/rhs-brick5-5.log --xlator-option *-posix.glusterd-uuid=e4f737cd-59a2-4392-aa3d-4230f698f128 --brick-port 49157 --xlator-option 5-server.listen-port=49157
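The listing above shows a separate glusterfsd per brick, where brick multiplexing would normally keep all bricks of a node in a single process. As a quick way to confirm either state from the pidfiles alone, here is a small standalone checker (illustrative only, not part of gluster) that reads the brick pidfiles named on the command line and reports how many distinct PIDs they contain:

    /* Illustrative only -- not part of gluster. Reads the brick pidfiles
     * named on the command line (for example the
     * /var/lib/glusterd/vols/<vol>/run/*.pid files shown above) and reports
     * how many distinct PIDs they contain. With brick multiplexing working,
     * every brick on a node should report the same glusterfsd PID. */
    #include <stdio.h>

    int
    main (int argc, char *argv[])
    {
            long pids[256];
            int  npids = 0;
            int  i, j;

            for (i = 1; i < argc && npids < 256; i++) {
                    FILE *fp  = fopen (argv[i], "r");
                    long  pid = -1;

                    if (!fp || fscanf (fp, "%ld", &pid) != 1) {
                            fprintf (stderr, "could not read a pid from %s\n",
                                     argv[i]);
                            if (fp)
                                    fclose (fp);
                            continue;
                    }
                    fclose (fp);

                    /* record the pid only if we haven't seen it yet */
                    for (j = 0; j < npids; j++)
                            if (pids[j] == pid)
                                    break;
                    if (j == npids)
                            pids[npids++] = pid;
            }

            printf ("%d distinct brick process(es) across %d pidfile(s)\n",
                    npids, argc - 1);
            return (npids == 1) ? 0 : 1;
    }

Compiled as, say, check-mux and run against /var/lib/glusterd/vols/*/run/*.pid on a node, it should report exactly one distinct process while multiplexing is intact; on the rebooted node above it would instead report one process per brick.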
--- Additional comment from Worker Ant on 2017-05-16 07:05:13 EDT ---

REVIEW: https://review.gluster.org/17307 (glusterd: Don't spawn new glusterfsds on node reboot with brick-mux) posted (#1) for review on master by Samikshan Bairagya (samikshan)

--- Additional comment from Worker Ant on 2017-05-17 16:37:34 EDT ---

REVIEW: https://review.gluster.org/17307 (glusterd: Don't spawn new glusterfsds on node reboot with brick-mux) posted (#2) for review on master by Samikshan Bairagya (samikshan)

--- Additional comment from Worker Ant on 2017-05-18 07:56:46 EDT ---

REVIEW: https://review.gluster.org/17307 (glusterd: Don't spawn new glusterfsds on node reboot with brick-mux) posted (#3) for review on master by Samikshan Bairagya (samikshan)

--- Additional comment from Worker Ant on 2017-05-18 12:45:32 EDT ---

COMMIT: https://review.gluster.org/17307 committed in master by Jeff Darcy (jeff.us)

------

commit 13e7b3b354a252ad4065f7b2f0f805c40a3c5d18
Author: Samikshan Bairagya <samikshan>
Date:   Tue May 16 15:07:21 2017 +0530

    glusterd: Don't spawn new glusterfsds on node reboot with brick-mux

    With brick multiplexing enabled, upon a node reboot new bricks were
    not being attached to the first spawned brick process even though
    there weren't any compatibility issues.

    The reason for this is that upon glusterd restart after a node reboot,
    since brick services aren't running, glusterd starts the bricks in a
    "no-wait" mode. So after a brick process is spawned for the first
    brick, there isn't enough time for the corresponding pid file to get
    populated with a value before the compatibility check is made for the
    next brick.

    This commit solves this by iteratively waiting for the pidfile to be
    populated in the brick compatibility comparison stage before checking
    if the brick process is alive.

    Change-Id: Ibd1f8e54c63e4bb04162143c9d70f09918a44aa4
    BUG: 1451248
    Signed-off-by: Samikshan Bairagya <samikshan>
    Reviewed-on: https://review.gluster.org/17307
    Reviewed-by: Atin Mukherjee <amukherj>
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
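The change described in this commit message boils down to tolerating the window between spawning a brick process in "no-wait" mode and that process writing its pidfile. Below is a minimal sketch of that idea with hypothetical names (wait_for_pidfile, PIDFILE_RETRIES); it is not the actual glusterd change from the patch:

    /* Sketch only: retry for a bounded time while the pidfile of the first
     * brick process has not appeared yet, instead of immediately concluding
     * that no compatible process is running. */
    #include <stdio.h>
    #include <unistd.h>

    #define PIDFILE_RETRIES 10   /* hypothetical retry budget */

    static int
    wait_for_pidfile (const char *pidfile)
    {
            int tries;

            for (tries = 0; tries < PIDFILE_RETRIES; tries++) {
                    if (access (pidfile, F_OK) == 0)
                            return 0;       /* pidfile exists; safe to read */
                    usleep (100000);        /* give the brick process time to
                                               write its pidfile */
            }
            return -1;                      /* give up; the caller falls back
                                               to spawning a new process */
    }

    int
    main (int argc, char *argv[])
    {
            if (argc != 2) {
                    fprintf (stderr, "usage: %s <pidfile>\n", argv[0]);
                    return 2;
            }
            if (wait_for_pidfile (argv[1]) != 0) {
                    fprintf (stderr, "pidfile %s never appeared\n", argv[1]);
                    return 1;
            }
            printf ("pidfile %s is present\n", argv[1]);
            return 0;
    }

In the actual fix this wait happens inside glusterd's brick-compatibility comparison stage, so that subsequent bricks get attached to the first brick process instead of each spawning their own glusterfsd.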
REVIEW: https://review.gluster.org/17351 (glusterd: Don't spawn new glusterfsds on node reboot with brick-mux) posted (#1) for review on release-3.11 by Samikshan Bairagya (samikshan)
COMMIT: https://review.gluster.org/17351 committed in release-3.11 by Shyamsundar Ranganathan (srangana)

------

commit 671dfcd82f6a7c56fbcbfde33cba22c0b585a046
Author: Samikshan Bairagya <samikshan>
Date:   Tue May 16 15:07:21 2017 +0530

    glusterd: Don't spawn new glusterfsds on node reboot with brick-mux

    With brick multiplexing enabled, upon a node reboot new bricks were
    not being attached to the first spawned brick process even though
    there weren't any compatibility issues.

    The reason for this is that upon glusterd restart after a node reboot,
    since brick services aren't running, glusterd starts the bricks in a
    "no-wait" mode. So after a brick process is spawned for the first
    brick, there isn't enough time for the corresponding pid file to get
    populated with a value before the compatibility check is made for the
    next brick.

    This commit solves this by iteratively waiting for the pidfile to be
    populated in the brick compatibility comparison stage before checking
    if the brick process is alive.

    > Reviewed-on: https://review.gluster.org/17307
    > Reviewed-by: Atin Mukherjee <amukherj>
    > Smoke: Gluster Build System <jenkins.org>
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    > CentOS-regression: Gluster Build System <jenkins.org>

    (cherry picked from commit 13e7b3b354a252ad4065f7b2f0f805c40a3c5d18)

    Change-Id: Ibd1f8e54c63e4bb04162143c9d70f09918a44aa4
    BUG: 1453086
    Signed-off-by: Samikshan Bairagya <samikshan>
    Reviewed-on: https://review.gluster.org/17351
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>
REVIEW: https://review.gluster.org/17383 (glusterd: Eliminate race in brick compatibility checking stage) posted (#1) for review on release-3.11 by Samikshan Bairagya (samikshan)
COMMIT: https://review.gluster.org/17383 committed in release-3.11 by Shyamsundar Ranganathan (srangana)

------

commit 71752f1bde9e8ad7a71670c885279f91ff951a1b
Author: Samikshan Bairagya <samikshan>
Date:   Tue May 23 19:32:24 2017 +0530

    glusterd: Eliminate race in brick compatibility checking stage

    In https://review.gluster.org/17307/, while looking for compatible
    bricks for multiplexing, it is checked if the brick pidfile exists
    before checking if the corresponding brick process is running.

    However, checking if the brick process is running just after checking
    if the pidfile exists isn't enough, since there might be race
    conditions where the pidfile has been created but hasn't been updated
    with a pid value yet.

    This commit solves that by making sure that we wait iteratively till
    the pid value is updated as well.

    > Reviewed-on: https://review.gluster.org/17375
    > Smoke: Gluster Build System <jenkins.org>
    > Reviewed-by: Atin Mukherjee <amukherj>
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    > CentOS-regression: Gluster Build System <jenkins.org>

    (cherry picked from commit a8624b8b13a1f4222e4d3e33fa5836d7b45369bc)

    Change-Id: Ib7a158f95566486f7c1f84b6357c9b89e4c797ae
    BUG: 1453086
    Signed-off-by: Samikshan Bairagya <samikshan>
    Reviewed-on: https://review.gluster.org/17383
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
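The race fixed here is subtler than the one above: the pidfile can exist while still being empty. Below is a minimal sketch of the stronger check, again with hypothetical names (wait_for_pid_value, PIDFILE_RETRIES) and not the code from the patch: keep re-reading the file until it yields a positive pid, and only then probe the process.

    /* Sketch only: existence of the pidfile is not enough -- the file may
     * have been created but not yet written -- so re-read it until it
     * actually contains a pid, then do the liveness check. */
    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    #define PIDFILE_RETRIES 10   /* hypothetical retry budget */

    static int
    wait_for_pid_value (const char *pidfile, pid_t *out)
    {
            int tries;

            for (tries = 0; tries < PIDFILE_RETRIES; tries++) {
                    FILE *fp  = fopen (pidfile, "r");
                    long  pid = -1;

                    if (fp) {
                            int n = fscanf (fp, "%ld", &pid);

                            fclose (fp);
                            if (n == 1 && pid > 0) {
                                    *out = (pid_t) pid;
                                    return 0;   /* pid value is populated */
                            }
                    }
                    usleep (100000);            /* file missing or still empty */
            }
            return -1;
    }

    int
    main (int argc, char *argv[])
    {
            pid_t pid;

            if (argc != 2) {
                    fprintf (stderr, "usage: %s <pidfile>\n", argv[0]);
                    return 2;
            }
            if (wait_for_pid_value (argv[1], &pid) != 0) {
                    fprintf (stderr, "no pid value in %s\n", argv[1]);
                    return 1;
            }
            /* Signal 0 probes the process without delivering anything. */
            if (kill (pid, 0) == 0 || errno == EPERM)
                    printf ("brick process %ld is running\n", (long) pid);
            else
                    printf ("brick process %ld is not running\n", (long) pid);
            return 0;
    }

kill() with signal 0 is the usual way to test for process existence without affecting it; the retry loop simply guarantees there is a pid to test.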
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/