Bug 1313628 - Brick ports get changed after GlusterD restart
Product: GlusterFS
Classification: Community
Component: glusterd
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assigned To: Kaushal
Depends On:
Blocks: 1306656 1316391
Reported: 2016-03-02 01:11 EST by Kaushal
Modified: 2016-06-16 09:59 EDT
CC List: 2 users

See Also:
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1316391
Last Closed: 2016-06-16 09:59:06 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Kaushal 2016-03-02 01:11:30 EST
The following sequence of steps can lead to the brick ports changing, which can break firewall rules and lead to a brick being inaccessible from the client.

1. Stop the volume
2. Stop glusterd on one node.
3a. Start the volume from some other node, or
3b. Do a volume set operation.
4. Start glusterd on the downed node again.
5. If 3b was done, start volume now.
Result: Brick ports on the downed node change
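For reference, the steps above map to the following commands. This is an illustrative sketch for a test pool, not production; VOLNAME and the choice of `features.shard` as the set option are placeholders, and it assumes glusterd is managed by systemd:

```shell
# Step 1, on any node: stop the volume
gluster volume stop VOLNAME

# Step 2, on node B: take glusterd down
systemctl stop glusterd

# Step 3a, on node A: start the volume ...
gluster volume start VOLNAME
# ... or step 3b: perform a volume set operation instead, e.g.
# gluster volume set VOLNAME features.shard on

# Step 4, on node B: bring glusterd back up
systemctl start glusterd

# Step 5: if 3b was used, start the volume now
# gluster volume start VOLNAME

# Compare the brick ports on node B before and after
gluster volume status VOLNAME
```

With the bug present, the ports reported for node B's bricks in the final `volume status` differ from the ones used before glusterd was stopped.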
Comment 1 Vijay Bellur 2016-03-02 05:52:33 EST
REVIEW: http://review.gluster.org/13578 (glusterd: Always copy old brick ports when importing) posted (#1) for review on master by Kaushal M (kaushal@redhat.com)
Comment 2 Vijay Bellur 2016-03-07 02:36:34 EST
REVIEW: http://review.gluster.org/13578 (glusterd: Always copy old brick ports when importing) posted (#2) for review on master by Kaushal M (kaushal@redhat.com)
Comment 3 Vijay Bellur 2016-03-10 02:22:11 EST
COMMIT: http://review.gluster.org/13578 committed in master by Atin Mukherjee (amukherj@redhat.com) 
commit ecf6243bc435a00f3dd2495524cd6e48e2d56f72
Author: Kaushal M <kaushal@redhat.com>
Date:   Wed Mar 2 15:19:30 2016 +0530

    glusterd: Always copy old brick ports when importing
    When an updated volinfo is imported in, the brick ports from the old
    volinfo should be always copied.
    Earlier, this was being done only if the old volinfo was stopped and
    the new volinfo was started. This could lead to brick ports changing
    when the following sequence of steps happened.
    - A volume is stopped
    - GlusterD is stopped on a peer
    - The stopped volume is started
    - The stopped GlusterD is started
    This sequence would cause the bricks on the peer with the restarted
    GlusterD to get new ports, which could break firewall rules and could
    prevent client access. This sequence can be hit when enabling management
    encryption in a Gluster trusted storage pool.
    Change-Id: I808ad478038d12ed2b19752511bdd7aa6f663bfc
    BUG: 1313628
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/13578
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Comment 4 Niels de Vos 2016-06-16 09:59:06 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
