Created attachment 1190594 [details]
glustershd.log - VolumeB offline and no PID

Description of problem:
When two volumes are configured, only the first one comes back online and gets a PID after a restart of the glusterfs daemon or a server reboot. Tested with replicated volumes only.

Version-Release number of selected component (if applicable):
Debian Jessie, GlusterFS 3.8.2

How reproducible:
Every time.

Steps to Reproduce:
1. Create replicated volumes VolumeA and VolumeB, with bricks on Node1 and Node2.
2. Start both volumes.
3. Restart glusterfs-server.service on Node2, or reboot Node2.
(A command sketch of these steps is included at the end of this report.)

Actual results:
VolumeA is fine, but VolumeB is offline and does not get a PID on Node2.

Expected results:
VolumeA and VolumeB are online with a PID.

Additional info:
A "gluster volume start VolumeB force" fixes it. When VolumeA is stopped and the test is repeated by rebooting Node2 again, VolumeB works as expected (online and with a PID). Log files are attached.

Status output of Node2 after the reboot:

Status of volume: VolumeA
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/glusterfs/VolumeA              49155     0          Y       1859
Brick node2:/glusterfs/VolumeA              49153     0          Y       1747
Self-heal Daemon on localhost               N/A       N/A        Y       26188
Self-heal Daemon on node1                   N/A       N/A        Y       21770

Task Status of Volume VolumeA
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: VolumeB
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/glusterfs/VolumeB              49154     0          Y       1973
Brick node2:/glusterfs/VolumeB              N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       26188
Self-heal Daemon on node1                   N/A       N/A        Y       21770

Task Status of Volume VolumeB
------------------------------------------------------------------------------
There are no active volume tasks
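For reference, a minimal command sketch of the reproduction steps above, assuming two peers named node1 and node2 with existing brick directories under /glusterfs (adjust hostnames and paths to your own environment):

# 1. Create two replicated volumes with one brick per node (run on either node).
gluster volume create VolumeA replica 2 node1:/glusterfs/VolumeA node2:/glusterfs/VolumeA
gluster volume create VolumeB replica 2 node1:/glusterfs/VolumeB node2:/glusterfs/VolumeB

# 2. Start both volumes and confirm all bricks are online.
gluster volume start VolumeA
gluster volume start VolumeB
gluster volume status

# 3. On node2: restart the GlusterFS service (or reboot the node).
systemctl restart glusterfs-server.service

# Afterwards, 'gluster volume status' on node2 shows the VolumeB brick offline with no PID.
gluster volume status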
Created attachment 1190595 [details] glustershd.log with stopped VolumeA and working VolumeB
Thank you for reporting this issue. It's a regression caused by http://review.gluster.org/14758, which was backported into 3.8.2. We will work on a fix for 3.8.3. Keep testing :)
REVIEW: http://review.gluster.org/15186 (glusterd: Fix volume restart issue upon glusterd restart) posted (#1) for review on release-3.8 by Samikshan Bairagya (samikshan)
COMMIT: http://review.gluster.org/15186 committed in release-3.8 by Atin Mukherjee (amukherj)
------
commit 24b499447a69c5e2979e15a99b16d5112be237d0
Author: Samikshan Bairagya <samikshan>
Date:   Tue Aug 16 16:46:41 2016 +0530

    glusterd: Fix volume restart issue upon glusterd restart

    http://review.gluster.org/#/c/14758/ introduces a check in
    glusterd_restart_bricks that makes sure that if server quorum is
    enabled and the glusterd instance has been restarted, the bricks do
    not get started. This prevents bricks which have been brought down
    purposely, say for maintenance, from getting started upon a glusterd
    restart. However, this change introduced a regression for situations
    involving multiple volumes: the bricks from the first volume get
    started, but the bricks of the subsequent volumes do not. This patch
    fixes that by setting the value of conf->restart_done to _gf_true
    only after bricks are started correctly for all volumes.

    > Reviewed-on: http://review.gluster.org/15183
    > Smoke: Gluster Build System <jenkins.org>
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    > CentOS-regression: Gluster Build System <jenkins.org>
    > Reviewed-by: Atin Mukherjee <amukherj>
    (cherry picked from commit dd8d93f24a320805f1f67760b2d3266555acf674)

    Change-Id: I2c685b43207df2a583ca890ec54dcccf109d22c3
    BUG: 1366813
    Signed-off-by: Samikshan Bairagya <samikshan>
    Reviewed-on: http://review.gluster.org/15186
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>
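For anyone wanting to verify the fixed behaviour once the release carrying this commit is installed, a quick check along these lines should show every volume's local brick back online after a restart of the management daemon (a sketch assuming the Debian glusterfs-server.service unit mentioned in the report; other distributions may name the unit differently):

# Restart the management daemon on the node under test.
systemctl restart glusterfs-server.service

# Each volume's local brick should report Online "Y" with a PID;
# with the regression, only the first volume's brick came back.
for vol in $(gluster volume list); do
    gluster volume status "$vol"
done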
*** Bug 1368347 has been marked as a duplicate of this bug. ***
(Adding a hopefully friendlier description of the problem)

On restarting GlusterD or rebooting a GlusterFS server, only the bricks of the first volume get started; the bricks of the remaining volumes are not started. This is a regression caused by a change in GlusterFS-3.8.2.

Because of this regression, GlusterFS volumes will be left in an inoperable state after upgrading to 3.8.2, as upgrading involves restarting GlusterD. Users can forcefully start the remaining volumes by running the `gluster volume start <name> force` command (a command sketch follows below). This also breaks the automatic start of volumes on rebooting servers, leaving the volumes inoperable.
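As a stopgap until the fix is released, something along these lines brings the affected bricks back up (a sketch; the volume names are the ones used in this report, so substitute your own):

# Force-start the volume whose bricks stayed offline.
gluster volume start VolumeB force

# Or cover every volume on the cluster. Note: this also starts volumes that
# were deliberately stopped, so review the list first.
for vol in $(gluster volume list); do
    gluster volume start "$vol" force
done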
I just retested the issue with GlusterFS-3.8.3 and it seems to be solved. After a reboot or a manual daemon restart all volumes are online again. Thanks a lot for the fix! :)
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.3, please open a new bug report.

glusterfs-3.8.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/announce/2016-August/000059.html
[2] https://www.gluster.org/pipermail/gluster-users/