Bug 1412069 - No rollback of renames on succeeded subvols during failure
Summary: No rollback of renames on succeeded subvols during failure
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Raghavendra G
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1550896
 
Reported: 2017-01-11 07:00 UTC by Raghavendra G
Modified: 2018-03-02 08:27 UTC
CC List: 1 user

Fixed In Version: glusterfs-3.10.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1550896
Environment:
Last Closed: 2017-03-06 17:43:33 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Raghavendra G 2017-01-11 07:00:04 UTC
Description of problem:
Because DHT keeps directories on all subvolumes, renaming a directory is a compound operation, so a partial-success/partial-failure outcome is possible, leaving the volume in an inconsistent state. Such a scenario is easy to reproduce: stop the volume, edit the volfile of one subvolume to add an "option read-only on" setting, and restart the volume. Any operation that would modify the affected subvolume then fails with EROFS.
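For illustration, the read-only setting could be spliced into the edited volfile roughly as follows. This is a sketch only: the translator name test-readonly and the subvolume name test-client-0 are hypothetical, though "type features/read-only" and "option read-only" are real GlusterFS volfile syntax.

    # hypothetical volfile fragment: wrap one subvolume in a read-only xlator
    volume test-readonly
        type features/read-only
        option read-only on
        subvolumes test-client-0
    end-volume

With this in place, write fops routed to test-client-0 are rejected with EROFS, producing the partial rename failure described above.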

Version-Release number of selected component (if applicable):


How reproducible:
always

Steps to Reproduce:
1. Stop the volume.
2. Edit the volfile of one subvolume to add an "option read-only on" setting, then restart the volume.
3. From a client mount, rename a directory.

Actual results:
The rename succeeds on the writable subvolumes but fails with EROFS on the read-only one, and the successful renames are not rolled back, leaving the directory in an inconsistent, partially renamed state.

Expected results:
On a partial failure, the renames that already succeeded should be rolled back so the directory remains consistent across all subvolumes.

Additional info:

Comment 1 Worker Ant 2017-01-11 07:01:30 UTC
REVIEW: http://review.gluster.org/15739 (feature/dht: undo partially successful dir rename) posted (#7) for review on master by Raghavendra G (rgowdapp@redhat.com)

Comment 2 Worker Ant 2017-01-11 15:40:29 UTC
COMMIT: http://review.gluster.org/15739 committed in master by Raghavendra G (rgowdapp@redhat.com) 
------
commit bb438d849a4a3941c1a9b525213f695f0a2c961b
Author: Csaba Henk <csaba@redhat.com>
Date:   Thu Oct 27 07:30:48 2016 +0200

    feature/dht: undo partially successful dir rename
    
    Because DHT keeps directories on all subvolumes,
    renaming a directory is a compound operation, so
    a partial success + partial failure outcome is
    possible, resulting in an inconsistent state.
    
    Such a scenario is easy to reproduce: stop the
    volume, edit the volfile of one subvolume to add
    an "option read-only on" setting, and restart
    the volume. Any operation that would modify the
    affected subvolume then fails with EROFS.
    
    To handle such scenarios, we introduce an in-memory cache
    where we record the return values obtained from the
    subvolumes. At the final stage of the dir rename operation
    we check if it's a partial success/fail situation. If yes,
    then we perform a reverse rename op on those subvolumes
    where the operation succeeded.
    
    Change-Id: I3d05f74f53932cb984a918d252a7309c1009a51d
    BUG: 1412069
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/15739
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: N Balachandran <nbalacha@redhat.com>
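
To make the fix concrete, here is a minimal sketch in plain C of the rollback decision described in the commit message above. It is not the actual dht code: the types, function names, and fixed subvolume count are all hypothetical; the real translator records the per-subvolume return values in an in-memory cache and winds real rename fops for the rollback.

    /* Sketch only -- illustrates the partial-failure check and reverse
     * rename from the commit message; all identifiers are hypothetical. */
    #include <errno.h>
    #include <stdio.h>
    
    struct subvol_result {
        int op_ret;   /* 0 if the rename succeeded on this subvolume */
        int op_errno; /* errno on failure, e.g. EROFS */
    };
    
    /* Stand-in for winding a rename back down to one subvolume. */
    static void reverse_rename(int subvol, const char *from, const char *to)
    {
        printf("subvol %d: rolling back, rename %s -> %s\n", subvol, from, to);
    }
    
    /* Called once every subvolume has replied. On a mixed outcome,
     * undo the successful renames by renaming dst back to src. */
    static void dir_rename_finalize(const struct subvol_result *res,
                                    int nsubvols, const char *src,
                                    const char *dst)
    {
        int ok = 0, failed = 0;
    
        for (int i = 0; i < nsubvols; i++) {
            if (res[i].op_ret == 0)
                ok++;
            else
                failed++;
        }
    
        if (ok == 0 || failed == 0)
            return; /* uniform success or failure: nothing to undo */
    
        for (int i = 0; i < nsubvols; i++)
            if (res[i].op_ret == 0)
                reverse_rename(i, dst, src);
    }
    
    int main(void)
    {
        /* Two subvolumes renamed the directory; the read-only one failed. */
        struct subvol_result res[3] = { {0, 0}, {-1, EROFS}, {0, 0} };
    
        dir_rename_finalize(res, 3, "dir-old", "dir-new");
        return 0;
    }

Running it prints rollback renames for subvolumes 0 and 2, mirroring the reverse rename op the commit performs on exactly those subvolumes where the original rename succeeded.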

Comment 3 Shyamsundar 2017-03-06 17:43:33 UTC
This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/

