Bug 1289916 - Client will not get notified about changes to volume if node used while mounting goes down
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: protocol
Version: mainline
Hardware: All
OS: All
Priority: medium
Severity: medium
Assigned To: Prasanna Kumar Kalever
Keywords: Triaged
Depends On:
Blocks: 1351949
Reported: 2015-12-09 05:49 EST by Prasanna Kumar Kalever
Modified: 2016-07-01 04:43 EDT (History)
2 users

See Also:
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 1351949
Environment:
Last Closed: 2016-06-16 09:49:41 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Prasanna Kumar Kalever 2015-12-09 05:49:15 EST
Description of problem:
The client is not notified about add-brick/remove-brick operations performed after the node used at mount time goes down.

Version-Release number of selected component (if applicable):
mainline

How reproducible:

Steps to Reproduce:
1. Create a volume spanning 2 nodes
2. Mount the volume using the first node's IP
3. Kill the first node
4. Add a new brick to the volume
5. The client is not notified about the change to the volume
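The steps above can be sketched as gluster CLI commands. This is a cluster-dependent fragment, not a standalone script; the hostnames n1/n2/n3 and the volume name testvol are hypothetical.

```
# On a two-node cluster (n1, n2); replica 2 assumed for illustration
gluster volume create testvol replica 2 n1:/bricks/b1 n2:/bricks/b2
gluster volume start testvol

# Mount from a client using the first node's address
mount -t glusterfs n1:/testvol /mnt

# Simulate n1 going down (on n1)
systemctl stop glusterd
pkill glusterfsd

# Expand the volume from a surviving node (on n2); new brick on a third node
gluster volume add-brick testvol n2:/bricks/b3 n3:/bricks/b4

# The client mounted via n1 never receives the updated volfile,
# so the new bricks are not used until n1 returns
```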


Actual results:
As a result, files cannot be stored on the newly added brick.

Expected results:
The client should switch to the next available remote host and continue communicating with glusterd.
Comment 1 Vijay Bellur 2015-12-18 07:39:01 EST
REVIEW: http://review.gluster.org/13002 (glusterd-client: switch volfile server incase existing connection breaks) posted (#1) for review on master by Prasanna Kumar Kalever (pkalever@redhat.com)
Comment 2 Vijay Bellur 2016-03-17 04:40:14 EDT
REVIEW: http://review.gluster.org/13002 (glusterd-client: switch volfile server incase existing connection breaks) posted (#2) for review on master by Prasanna Kumar Kalever (pkalever@redhat.com)
Comment 3 Vijay Bellur 2016-03-17 09:23:29 EDT
REVIEW: http://review.gluster.org/13002 (glusterd-client: switch volfile server incase existing connection breaks) posted (#3) for review on master by Prasanna Kumar Kalever (pkalever@redhat.com)
Comment 4 Mike McCune 2016-03-28 19:22:56 EDT
This bug was accidentally moved from POST to MODIFIED via an error in automation, please see mmccune@redhat.com with any questions
Comment 5 Vijay Bellur 2016-04-12 08:14:27 EDT
COMMIT: http://review.gluster.org/13002 committed in master by Jeff Darcy (jdarcy@redhat.com) 
------
commit 05bc8bfd2a11d280fe0aaac6c7ae86ea5ff08164
Author: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Date:   Thu Mar 17 13:50:31 2016 +0530

    glusterd-client: switch volfile server incase existing connection breaks
    
    Problem:
    Currently, say we have a 10-node gluster volume mounted using
    Node 1 (N1) as the volfile server and the rest as backup volfile
    servers:
    
    $ mount -t glusterfs -obackup-volfile-servers=<N2>:<N3>:...:<N10> <N1>:/vol /mnt
    
    If N1 goes down we are still able to access the same mount point,
    but if bricks are added to or removed from the volume whose volfile
    server is down (N1 in our case), that information is not passed on
    to the client, because the connection between glusterfs and glusterd
    (of N1) is broken. As a result, files cannot be stored on the newly
    added bricks until N1 comes back.
    
    Solution:
    If N1 goes down, iterate through the nodes specified in the
    backup-volfile-servers list and try to establish a connection
    between glusterfs and glusterd, so we do not have to wait for N1 to
    come back before storing files on bricks that were successfully
    added while N1 was down.
    
    Change-Id: I653c9f081a84667630608091bc243ffc3859d5cd
    BUG: 1289916
    Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
    Reviewed-on: http://review.gluster.org/13002
    Tested-by: Prasanna Kumar Kalever <pkalever@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Poornima G <pgurusid@redhat.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
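The failover idea in the commit above can be illustrated with a generic sketch. This is plain Python using sockets, not GlusterFS code; `connect_first_available` is a hypothetical helper that mirrors the pattern of trying each volfile server in order instead of waiting for the primary to return.

```python
import socket

def connect_first_available(servers, timeout=2.0):
    """Return a socket connected to the first reachable (host, port).

    Mirrors the volfile-server failover idea: if the primary server is
    down, fall through to the backup servers rather than blocking until
    the primary comes back.
    """
    last_err = None
    for host, port in servers:
        try:
            # create_connection resolves, connects, and returns a socket
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as err:
            last_err = err  # remember why this server failed, try the next
    raise ConnectionError(f"no server reachable: {last_err}")
```

A caller would pass the primary first and the backups after it, exactly as the `backup-volfile-servers` mount option orders them.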
Comment 6 Niels de Vos 2016-06-16 09:49:41 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
