Bug 1101903 - Backward compatibility during volume import is broken
Summary: Backward compatibility during volume import is broken
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Kaushal
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1094716
 
Reported: 2014-05-28 06:40 UTC by Kaushal
Modified: 2014-11-11 08:33 UTC
4 users

Fixed In Version: glusterfs-3.6.0beta1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-11-11 08:33:31 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kaushal 2014-05-28 06:40:35 UTC
Several new members were introduced in the glusterd_volinfo_t type to support snapshots. The volume import, which happens during volume syncing when glusterd establishes a connection with a peer, now expects these fields to be present in the dictionary. If they are not, the import fails.

This causes problems in a mixed cluster containing peers running older versions, which is exactly the situation that arises during a rolling upgrade.
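
To illustrate the failure mode, here is a minimal, self-contained C sketch; the dictionary keys, types and helper names are hypothetical stand-ins, not the actual glusterd symbols. An importer that treats every snapshot-related key as mandatory rejects a volinfo dictionary sent by an older peer that does not know about those keys.

/* Simplified illustration of the broken import path. All names here are
 * hypothetical stand-ins for the real glusterd dict helpers. */
#include <stdio.h>
#include <string.h>

struct kv { const char *key; const char *val; };

/* Toy dictionary lookup: returns 0 on success, -1 if the key is absent. */
static int dict_lookup(const struct kv *dict, size_t n, const char *key,
                       const char **val)
{
    for (size_t i = 0; i < n; i++) {
        if (strcmp(dict[i].key, key) == 0) {
            *val = dict[i].val;
            return 0;
        }
    }
    return -1;
}

/* Broken importer: every snapshot field is mandatory, so a dictionary sent
 * by an older peer (which carries no snapshot fields) fails the import. */
static int import_volinfo(const struct kv *dict, size_t n)
{
    const char *val = NULL;

    if (dict_lookup(dict, n, "volume1.name", &val) != 0)
        return -1;
    /* Illustrative snapshot-related keys, absent in dictionaries coming
     * from older peers. */
    if (dict_lookup(dict, n, "volume1.snap-max-hard-limit", &val) != 0)
        return -1;                  /* import fails for old peers */
    if (dict_lookup(dict, n, "volume1.restored-from-snap", &val) != 0)
        return -1;
    return 0;
}

int main(void)
{
    /* Dictionary as an older (pre-snapshot) peer would send it. */
    struct kv old_peer_dict[] = { { "volume1.name", "testvol" } };

    if (import_volinfo(old_peer_dict, 1) != 0)
        printf("import failed: snapshot keys missing from old peer\n");
    return 0;
}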

Comment 1 Anand Avati 2014-06-02 05:44:46 UTC
REVIEW: http://review.gluster.org/7944 (glusterd: Preserve backward compatibility during sync and store) posted (#1) for review on master by Kaushal M (kaushal)

Comment 2 Anand Avati 2014-06-04 09:12:18 UTC
REVIEW: http://review.gluster.org/7944 (glusterd: Preserve backward compatibility during sync and store) posted (#2) for review on master by Kaushal M (kaushal)

Comment 3 Anand Avati 2014-06-06 06:15:17 UTC
COMMIT: http://review.gluster.org/7944 committed in master by Krishnan Parthasarathi (kparthas) 
------
commit f2b42887c1f9780980abe491ed34a13a7b3d4583
Author: Kaushal M <kaushal>
Date:   Wed May 28 16:57:14 2014 +0530

    glusterd: Preserve backward compatibility during sync and store
    
    The glusterd volinfo struct gained several new members to support the
    volume snapshot feature. These members are also being exported/imported
    during volume sync and being stored/restored. This export/import and
    save/restore explicitly required these members to be present, and would
    fail if they were not. This led to the failure of backward
    compatibility, preventing new peers from correctly interacting with
    older peers (especially during a rolling upgrade).
    
    This patch contains changes needed to preserve the backward
    compatibility in the places specified. The snapshot members of the
    volinfo will now be exported/imported and stored only when the cluster
    op-version is >= 4, ie. all peers in the cluster support snapshot.
    No change is required for the restore code, as the new members will be
    left at the default zero values if corresponding entries are absent in
    the stored volinfo.
    
    Change-Id: I79e4bc5780c991ec305b7b5e7d71c16afb6a4c40
    BUG: 1101903
    Reviewed-on: http://review.gluster.org/7944
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Atin Mukherjee <amukherj>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Krishnan Parthasarathi <kparthas>
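
A hedged sketch of the op-version gating described above: the snapshot members are serialized (and expected on import) only when the cluster op-version shows that every peer understands them. The constant, variable and function names below are illustrative, not the exact glusterd code.

/* Illustrative only: op-version gating of the new snapshot members. The
 * names cluster_op_version, OP_VERSION_SNAPSHOT and the export helpers
 * are stand-ins for the real glusterd symbols. */
#include <stdio.h>

#define OP_VERSION_SNAPSHOT 4   /* snapshot support requires op-version >= 4 */

static int cluster_op_version = 3;   /* mixed cluster during rolling upgrade */

static void export_snapshot_fields(void)
{
    printf("exporting snapshot fields\n");
}

static int export_volinfo(void)
{
    printf("exporting base volinfo fields\n");

    /* Only add the new members when every peer in the cluster can parse
     * them; otherwise older peers would reject the dictionary. */
    if (cluster_op_version >= OP_VERSION_SNAPSHOT)
        export_snapshot_fields();

    return 0;
}

int main(void)
{
    export_volinfo();                       /* op-version 3: fields skipped */
    cluster_op_version = OP_VERSION_SNAPSHOT;
    export_volinfo();                       /* op-version 4: fields included */
    return 0;
}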

Comment 4 Anand Avati 2014-06-12 11:07:53 UTC
REVIEW: http://review.gluster.org/8046 (glusterd: More snapshot backward compatibility fixes) posted (#1) for review on master by Kaushal M (kaushal)

Comment 5 Anand Avati 2014-06-13 11:41:30 UTC
COMMIT: http://review.gluster.org/8046 committed in master by Krishnan Parthasarathi (kparthas) 
------
commit a6585d9c5e536818e01f05df8e58c18bbe59e231
Author: Kaushal M <kaushal>
Date:   Thu Jun 12 15:02:06 2014 +0530

    glusterd: More snapshot backward compatibility fixes
    
    Several volume operations (start, add-brick and replace-brick) expected
    the presence of a brick's mount directory, which is required for the
    snapshot feature. This should be expected only when snapshots are
    supported in the cluster.
    
    Change-Id: I92017bb5e069392352f9800cef1ddc80045fda35
    BUG: 1101903
    Signed-off-by: Kaushal M <kaushal>
    Reviewed-on: http://review.gluster.org/8046
    Reviewed-by: Atin Mukherjee <amukherj>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Krishnan Parthasarathi <kparthas>
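
The same op-version gate can be applied to the mount-directory requirement fixed here. A minimal sketch, again with hypothetical names, of enforcing the check only on clusters that support snapshots:

/* Hypothetical sketch: require the brick mount directory only when the
 * cluster op-version says snapshots are supported. Names are illustrative. */
#include <stdbool.h>
#include <stdio.h>

#define OP_VERSION_SNAPSHOT 4

static bool brick_has_mount_dir(const char *brick)
{
    (void)brick;
    return false;   /* pretend the brick came from an older peer */
}

static int validate_brick(const char *brick, int cluster_op_version)
{
    if (cluster_op_version >= OP_VERSION_SNAPSHOT &&
        !brick_has_mount_dir(brick)) {
        fprintf(stderr, "brick %s: missing mount directory\n", brick);
        return -1;
    }
    /* Pre-snapshot clusters never set a mount directory, so it is not
     * demanded there; start/add-brick/replace-brick proceed as before. */
    return 0;
}

int main(void)
{
    printf("op-version 3: %s\n",
           validate_brick("host:/bricks/b1", 3) == 0 ? "ok" : "rejected");
    printf("op-version 4: %s\n",
           validate_brick("host:/bricks/b1", 4) == 0 ? "ok" : "rejected");
    return 0;
}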

Comment 6 Niels de Vos 2014-09-22 12:41:18 UTC
A beta release for GlusterFS 3.6.0 has been made available [1]. Please verify whether this release resolves the issue reported in this bug. If the glusterfs-3.6.0beta1 release does not resolve the issue, leave a comment in this bug and move the status to ASSIGNED. If the release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution (possibly an "updates-testing" repository).

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/

Comment 7 Niels de Vos 2014-11-11 08:33:31 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still present with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users

