Description of problem:
The client is not notified when add-brick/remove-brick is performed after the node that was used as the volfile server at mount time goes down.

Version-Release number of selected component (if applicable): mainline

How reproducible:

Steps to Reproduce:
1. Create a volume spanning 2 nodes
2. Mount the volume using the first node's IP
3. Kill the first node
4. Add a new brick to the volume
5. The client is not notified about the changes made to the volume
(A command-level sketch of these steps follows this report.)

Actual results:
Files cannot be stored on the newly added brick.

Expected results:
The client should switch to the next available remote host and continue communicating with glusterd.
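A minimal command-level sketch of the reproduction steps, assuming two hypothetical hosts N1 and N2, a volume named "vol" and brick paths under /bricks (hostnames, volume name and paths are illustrative, not taken from the report):

On N1 (or any peer), create and start a 2-node distribute volume:
$ gluster volume create vol N1:/bricks/b1 N2:/bricks/b2
$ gluster volume start vol

On the client, mount using only N1 as the volfile server:
$ mount -t glusterfs N1:/vol /mnt

On N1, simulate the node going down:
$ pkill glusterd

On N2, expand the volume while N1 is down:
$ gluster volume add-brick vol N2:/bricks/b3

On the client, the updated volfile is never fetched, so newly created files are
never placed on N2:/bricks/b3:
$ for i in $(seq 1 20); do touch /mnt/file$i; done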
REVIEW: http://review.gluster.org/13002 (glusterd-client: switch volfile server incase existing connection breaks) posted (#1) for review on master by Prasanna Kumar Kalever (pkalever)
REVIEW: http://review.gluster.org/13002 (glusterd-client: switch volfile server incase existing connection breaks) posted (#2) for review on master by Prasanna Kumar Kalever (pkalever)
REVIEW: http://review.gluster.org/13002 (glusterd-client: switch volfile server incase existing connection breaks) posted (#3) for review on master by Prasanna Kumar Kalever (pkalever)
This bug was accidentally moved from POST to MODIFIED due to an error in automation; please see mmccune with any questions.
COMMIT: http://review.gluster.org/13002 committed in master by Jeff Darcy (jdarcy)
------
commit 05bc8bfd2a11d280fe0aaac6c7ae86ea5ff08164
Author: Prasanna Kumar Kalever <prasanna.kalever>
Date: Thu Mar 17 13:50:31 2016 +0530

    glusterd-client: switch volfile server incase existing connection breaks

    Problem:
    Currently, say we have a 10-node gluster volume, mounted using Node 1 (N1)
    as the volfile server and the rest as backup volfile servers:
    $ mount -t glusterfs -obackup-volfile-servers=<N2>:<N3>:...:<N10> <N1>:/vol /mnt
    If N1 goes down we can still access the same mount point, but the problem
    is that if we add or remove bricks of the volume whose volfile server is
    down (in our case N1), that information is not passed to the client,
    because the connection between glusterfs and glusterd (of N1) is broken.
    As a result we cannot store files on the newly added bricks until N1
    comes back.

    Solution:
    If N1 goes down, iterate through the nodes specified in the
    backup-volfile-servers list and try to establish a connection between
    glusterfs and glusterd, so we do not have to wait until N1 comes back to
    store files on bricks that were successfully added while N1 was down.

Change-Id: I653c9f081a84667630608091bc243ffc3859d5cd
BUG: 1289916
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever>
Reviewed-on: http://review.gluster.org/13002
Tested-by: Prasanna Kumar Kalever <pkalever>
Smoke: Gluster Build System <jenkins.com>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.com>
Reviewed-by: Poornima G <pgurusid>
Reviewed-by: Jeff Darcy <jdarcy>
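A minimal sketch of the behaviour the patch is meant to enable, reusing the hypothetical hosts N1, N2, N3 and the illustrative volume/brick names from the sketch above:

On the client, mount with backup volfile servers:
$ mount -t glusterfs -o backup-volfile-servers=N2:N3 N1:/vol /mnt

On N1, take the original volfile server down:
$ pkill glusterd

On N2, expand the volume while N1 is down:
$ gluster volume add-brick vol N2:/bricks/b3

On the client, with the fix the glusterfs client reconnects to one of the
backup volfile servers, fetches the updated volfile, and newly created files
can now be distributed onto N2:/bricks/b3:
$ for i in $(seq 1 20); do touch /mnt/file$i; done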
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user