Bug 1367478 - Second gluster volume is offline after daemon restart or server reboot
Summary: Second gluster volume is offline after daemon restart or server reboot
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: x86_64
OS: Linux
Target Milestone: ---
Assignee: Samikshan Bairagya
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1366813
 
Reported: 2016-08-16 13:56 UTC by Samikshan Bairagya
Modified: 2017-03-27 18:19 UTC
4 users

Fixed In Version: glusterfs-3.9.0
Clone Of: 1366813
Environment:
Last Closed: 2017-03-27 18:19:04 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments

Description Samikshan Bairagya 2016-08-16 13:56:33 UTC
+++ This bug was initially created as a clone of Bug #1366813 +++

Description of problem:

When two volumes are in use, only the first one comes online and gets a PID after the GlusterFS daemon is restarted or the server is rebooted. Tested with replicated volumes only.

Version-Release number of selected component (if applicable): 

Debian Jessie, GlusterFS 3.8.2

How reproducible:

Every time.

Steps to Reproduce:

1. Create replicated volumes VolumeA and VolumeB, whose bricks are on Node1 and Node2.
2. Start both volumes.
3. Restart glusterfs-server.service on Node2 or reboot Node2.
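
For reference, a rough scripted version of the steps above (a sketch only: it reuses the node names, brick paths and the Debian glusterfs-server.service unit from this report, assumes the peers are already probed, and uses "force" just to skip the replica-2/root-partition confirmation prompts):

    # On Node1: create and start two replica-2 volumes
    gluster volume create VolumeA replica 2 node1:/glusterfs/VolumeA node2:/glusterfs/VolumeA force
    gluster volume create VolumeB replica 2 node1:/glusterfs/VolumeB node2:/glusterfs/VolumeB force
    gluster volume start VolumeA
    gluster volume start VolumeB

    # On Node2: restart the daemon (or reboot) and check the brick status
    systemctl restart glusterfs-server.service
    gluster volume status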

Actual results:

VolumeA is fine, but VolumeB is offline and does not get a PID on Node2.

Expected results:

Both VolumeA and VolumeB are online, each with a PID.

Additional info:

A "gluster volume start VolumeB force" fixes it.

When VolumeA is stopped and Node2 is rebooted again, VolumeB comes up as expected (online and with a PID).

Logfiles are attached.
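
For reference, the workaround and a follow-up check might look like this on Node2 (a sketch reusing the volume name from this report):

    # On Node2, after the daemon restart/reboot:
    gluster volume status VolumeB       # the local brick shows Online=N and Pid=N/A
    gluster volume start VolumeB force  # force-start spawns the missing brick process
    gluster volume status VolumeB       # the local brick should now be online with a PID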


Status output of node2 after the reboot:

Status of volume: VolumeA
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/glusterfs/VolumeA              49155     0          Y       1859 
Brick node2:/glusterfs/VolumeA              49153     0          Y       1747 
Self-heal Daemon on localhost               N/A       N/A        Y       26188
Self-heal Daemon on node1                   N/A       N/A        Y       21770
 
Task Status of Volume VolumeA
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: VolumeB
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/glusterfs/VolumeB              49154     0          Y       1973 
Brick node2:/glusterfs/VolumeB              N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       26188
Self-heal Daemon on node1                   N/A       N/A        Y       21770
 
Task Status of Volume VolumeB
------------------------------------------------------------------------------
There are no active volume tasks

--- Additional comment from Daniel on 2016-08-12 20:52 EDT ---



--- Additional comment from Atin Mukherjee on 2016-08-16 00:35:23 EDT ---

Thank you for reporting this issue. It's a regression caused by http://review.gluster.org/14758, which was backported into 3.8.2. We will work on fixing this in 3.8.3. Keep testing :)

Comment 1 Vijay Bellur 2016-08-17 03:57:09 UTC
REVIEW: http://review.gluster.org/15183 (glusterd: Fix volume restart issue upon glusterd restart) posted (#1) for review on master by Samikshan Bairagya (samikshan)

Comment 2 Vijay Bellur 2016-08-17 09:53:23 UTC
COMMIT: http://review.gluster.org/15183 committed in master by Atin Mukherjee (amukherj) 
------
commit dd8d93f24a320805f1f67760b2d3266555acf674
Author: Samikshan Bairagya <samikshan>
Date:   Tue Aug 16 16:46:41 2016 +0530

    glusterd: Fix volume restart issue upon glusterd restart
    
    http://review.gluster.org/#/c/14758/ introduces a check in
    glusterd_restart_bricks that makes sure that if server quorum is
    enabled and if the glusterd instance has been restarted, the bricks
    do not get started. This prevents bricks which have been brought
    down purposely, say for maintenance, from getting started
    upon a glusterd restart. However, this change introduced a regression
    for a situation that involves multiple volumes. The bricks from
    the first volume get started, but then for the subsequent volumes
    the bricks do not get started. This patch fixes that by setting
    the value of conf->restart_done to _gf_true only after bricks are
    started correctly for all volumes.
    
    Change-Id: I2c685b43207df2a583ca890ec54dcccf109d22c3
    BUG: 1367478
    Signed-off-by: Samikshan Bairagya <samikshan>
    Reviewed-on: http://review.gluster.org/15183
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>
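
The change defers setting conf->restart_done to _gf_true until bricks have been started for all volumes, so after a glusterd restart every started volume should come back online, not only the first. A rough way to confirm that behaviour on a build carrying this fix (a sketch: it assumes the Debian glusterfs-server.service unit named in this report, and the sleep is just an arbitrary grace period):

    # On the restarted node, running a build that includes this fix:
    systemctl restart glusterfs-server.service
    sleep 10                               # arbitrary grace period for glusterd to respawn brick processes
    gluster volume status | grep '^Brick'  # every brick line should show Online=Y with a PID, not just the first volume's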

Comment 3 Shyamsundar 2017-03-27 18:19:04 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed in glusterfs-3.9.0, please open a new bug report.

glusterfs-3.9.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2016-November/029281.html
[2] https://www.gluster.org/pipermail/gluster-users/

