Bug 1470938 - Regression: non-disruptive(in-service) upgrade on EC volume fails
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: disperse
Version: 3.11
Hardware/OS: Unspecified / Unspecified
Priority: unspecified
Severity: urgent
Assigned To: bugs@gluster.org
Keywords: Regression
Depends On: 1468261
Blocks: 1465289

Reported: 2017-07-14 01:24 EDT by Sunil Kumar Acharya
Modified: 2017-08-12 09:07 EDT
Fixed In Version: glusterfs-3.11.2
Clone Of: 1468261
Last Closed: 2017-08-12 09:07:33 EDT
Type: Bug


Comment 1 Worker Ant 2017-07-14 03:36:34 EDT
REVIEW: https://review.gluster.org/17773 (cluster/ec: Non-disruptive upgrade on EC volume fails) posted (#1) for review on release-3.11 by Sunil Kumar Acharya (sheggodu@redhat.com)
Comment 2 Worker Ant 2017-07-19 07:26:25 EDT
COMMIT: https://review.gluster.org/17773 committed in release-3.11 by Shyamsundar Ranganathan (srangana@redhat.com) 
------
commit 425c5acca90bd8c00b94cdcd5082ccc7c1ba078b
Author: Sunil Kumar Acharya <sheggodu@redhat.com>
Date:   Wed Jul 5 16:41:38 2017 +0530

    cluster/ec: Non-disruptive upgrade on EC volume fails
    
    Problem:
    Enabling the optimistic changelog on an EC volume did not
    handle node-down scenarios correctly, leaving volume data
    inaccessible.
    
    Solution:
    Update the dirty xattr appropriately on the good bricks
    whenever nodes are down. This fixes the metadata as part of
    heal and thus keeps the data accessible.
    
    >BUG: 1468261
    >Change-Id: I08b0d28df386d9b2b49c3de84b4aac1c729ac057
    >Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
    >Reviewed-on: https://review.gluster.org/17703
    >Smoke: Gluster Build System <jenkins@build.gluster.org>
    >CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    >Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    
    BUG: 1470938
    Change-Id: I08b0d28df386d9b2b49c3de84b4aac1c729ac057
    Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
    Reviewed-on: https://review.gluster.org/17773
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Ashish Pandey <aspandey@redhat.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
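To make the fix above concrete: a minimal sketch of how the option involved and the EC dirty marker can be inspected on a cluster. The volume name `testvol` and brick path `/bricks/b1` are hypothetical; the option and xattr names are the real ones referenced by this fix.

```shell
# Sketch only: "testvol" and /bricks/b1 are hypothetical placeholders.

# The option implicated in this bug (real GlusterFS option name):
gluster volume get testvol disperse.optimistic-change-log

# During an in-service upgrade, while some bricks are down, EC records
# pending changes via the dirty xattr on the surviving (good) bricks.
# On a brick backend, a file's dirty counter can be inspected with:
getfattr -n trusted.ec.dirty -e hex /bricks/b1/dir/file

# A non-zero value indicates a pending heal; self-heal clears it once
# the downed bricks return and are healed.
```

This is a configuration/inspection fragment for a live cluster, not a standalone script.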
Comment 3 Shyamsundar 2017-08-12 09:07:33 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.11.2, please open a new bug report.

glusterfs-3.11.2 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-July/031908.html
[2] https://www.gluster.org/pipermail/gluster-users/
