+++ This bug was initially created as a clone of Bug #1238135 +++

Description of problem:

As of now all the daemon services are initialized in the glusterd init path. Since the socket file path of a per-node daemon requires the UUID of the node, the MY_UUID macro is invoked as part of that initialization. This flow breaks use cases where a gluster image is built from a template (it could be a Dockerfile, Vagrantfile or any kind of virtualization environment): every instance brought up from such an image ends up with the same node UUID, which makes peer probe fail.

Version-Release number of selected component (if applicable):
Mainline

How reproducible:
Always

Steps to Reproduce:
1. Build a docker image using https://registry.hub.docker.com/u/gluster/gluster-fedora/dockerfile/
2. Bring up multiple containers using the above image
3. Check the UUIDs on all these containers: cat /var/lib/glusterd/glusterd.info | grep UUID

Actual results:
The UUIDs are the same on every container.

Expected results:
UUIDs should be unique per node.

Additional info:

--- Additional comment from Anand Avati on 2015-07-01 05:41:44 EDT ---

REVIEW: http://review.gluster.org/11488 (glusterd: initialize the daemon services on demand) posted (#2) for review on master by Atin Mukherjee (amukherj)

--- Additional comment from Anand Avati on 2015-07-01 06:03:48 EDT ---

REVIEW: http://review.gluster.org/11488 (glusterd: initialize the daemon services on demand) posted (#3) for review on master by Atin Mukherjee (amukherj)

--- Additional comment from Anand Avati on 2015-07-01 08:05:32 EDT ---

REVIEW: http://review.gluster.org/11488 (glusterd: initialize the daemon services on demand) posted (#4) for review on master by Atin Mukherjee (amukherj)

--- Additional comment from Anand Avati on 2015-07-02 06:39:48 EDT ---

REVIEW: http://review.gluster.org/11488 (glusterd: initialize the daemon services on demand) posted (#5) for review on master by Atin Mukherjee (amukherj)

--- Additional comment from Anand Avati on 2015-07-06 06:52:15 EDT ---

REVIEW: http://review.gluster.org/11488 (glusterd: initialize the daemon services on demand) posted (#6) for review on master by Atin Mukherjee (amukherj)

--- Additional comment from Anand Avati on 2015-07-07 23:57:16 EDT ---

REVIEW: http://review.gluster.org/11488 (glusterd: initialize the daemon services on demand) posted (#7) for review on master by Atin Mukherjee (amukherj)

--- Additional comment from Anand Avati on 2015-07-13 00:31:47 EDT ---

REVIEW: http://review.gluster.org/11488 (glusterd: initialize the daemon services on demand) posted (#8) for review on master by Atin Mukherjee (amukherj)

--- Additional comment from Anand Avati on 2015-07-23 00:10:01 EDT ---

REVIEW: http://review.gluster.org/11488 (glusterd: initialize the daemon services on demand) posted (#9) for review on master by Atin Mukherjee (amukherj)
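To make the failure mode concrete, here is a minimal C sketch of the eager flow (the function names, the socket-path format and the MY_UUID stand-in below are illustrative only, not the actual glusterd sources): because the per-node daemon's socket path is derived from the node UUID, resolving the UUID at init time forces it to be generated and persisted the first time glusterd runs, e.g. during an image build, and every instance cloned from that image then starts with the same identity.

/* build with: cc uuid_eager_sketch.c -luuid */
#include <stdio.h>
#include <uuid/uuid.h>

static uuid_t node_uuid;            /* the real daemon persists this in glusterd.info */
static int    node_uuid_present;

/* Stand-in for the MY_UUID idea: generate (and, in the real daemon, persist)
 * the node UUID the first time anything asks for it. */
static const unsigned char *
resolve_node_uuid (void)
{
        if (!node_uuid_present) {
                uuid_generate (node_uuid);
                node_uuid_present = 1;
        }
        return node_uuid;
}

/* Eager flow: the per-node daemon's socket path is built while glusterd is
 * initializing, which forces the UUID into existence even if the service
 * never runs. */
static void
svc_init_eager (char *sockpath, size_t len)
{
        char uuid_str[37];

        uuid_unparse (resolve_node_uuid (), uuid_str);
        snprintf (sockpath, len, "/var/run/gluster/shd-%s.socket", uuid_str);
}

int
main (void)
{
        char sockpath[256];

        /* If this runs once while the image is being built, the generated
         * UUID ends up baked into the image; every container cloned from it
         * comes up with the same identity and peer probe fails. */
        svc_init_eager (sockpath, sizeof (sockpath));
        printf ("socket path fixed at init time: %s\n", sockpath);
        return 0;
}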
REVIEW: http://review.gluster.org/11766 (glusterd: initialize the daemon services on demand) posted (#1) for review on release-3.7 by Atin Mukherjee (amukherj)
COMMIT: http://review.gluster.org/11766 committed in release-3.7 by Kaushal M (kaushal)

------

commit c5a19652c80162e670d29a7bd8c910d0acdfacb9
Author: Atin Mukherjee <amukherj>
Date:   Wed Jul 1 14:47:48 2015 +0530

    glusterd: initialize the daemon services on demand

    Backport of http://review.gluster.org/#/c/11488/

    As of now all the daemon services are initialized in the glusterd init
    path. Since the socket file path of a per-node daemon requires the
    UUID of the node, the MY_UUID macro is invoked as part of that
    initialization. The above flow breaks use cases where a gluster image
    is built from a template (it could be a Dockerfile, Vagrantfile or any
    kind of virtualization environment): every instance brought up from
    such an image ends up with the same node UUID, which makes peer probe
    fail.

    The solution is to lazily initialize the services on demand.

    Change-Id: If7caa533026c83e98c7c7678bded67085d0bbc1e
    BUG: 1247012
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/11488
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Gaurav Kumar Garg <ggarg>
    Reviewed-by: Kaushal M <kaushal>
    Reviewed-on: http://review.gluster.org/11766
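For comparison, here is a rough sketch of what "initialize the daemon services on demand" means in practice (again with illustrative names and paths, not the actual patch at http://review.gluster.org/11488): the UUID-dependent pieces are resolved only when a service is actually started, so an image build that merely installs or briefly runs glusterd no longer pins the node UUID into the image.

/* build with: cc uuid_lazy_sketch.c -luuid */
#include <stdio.h>
#include <uuid/uuid.h>

struct svc {
        const char *name;
        char        sockpath[256];
        int         initialized;
};

static uuid_t node_uuid;            /* persisted in glusterd.info by the real daemon */
static int    node_uuid_present;

static const unsigned char *
resolve_node_uuid (void)
{
        if (!node_uuid_present) {
                uuid_generate (node_uuid);   /* generated only when first needed */
                node_uuid_present = 1;
        }
        return node_uuid;
}

/* Lazy flow: the UUID-dependent socket path is computed the first time the
 * service is actually managed, not while glusterd itself is initializing. */
static void
svc_init_on_demand (struct svc *svc)
{
        char uuid_str[37];

        if (svc->initialized)
                return;
        uuid_unparse (resolve_node_uuid (), uuid_str);
        snprintf (svc->sockpath, sizeof (svc->sockpath),
                  "/var/run/gluster/%s-%s.socket", svc->name, uuid_str);
        svc->initialized = 1;
}

static void
svc_start (struct svc *svc)
{
        svc_init_on_demand (svc);
        printf ("starting %s with socket %s\n", svc->name, svc->sockpath);
}

int
main (void)
{
        struct svc shd = { .name = "shd" };

        /* Nothing UUID-related happens at daemon init, so an image build that
         * only installs (or briefly starts and stops) glusterd does not pin
         * the node UUID; each container resolves its own UUID the first time
         * a service is actually brought up. */
        svc_start (&shd);
        return 0;
}

With this shape, each container generates its own UUID the first time a daemon service is actually brought up on that node, which is what restores unique UUIDs across instances cloned from the same image.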
This bug is getting closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.3, please open a new bug report.

glusterfs-3.7.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12078
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user