Bug 1004685 - Volume cksums differ after one of the nodes in a 2.0 cluster is upgraded to 2.1.
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Assigned To: Krutika Dhananjay
QA Contact: Sudhir D
Depends On:
Reported: 2013-09-05 04:41 EDT by Krutika Dhananjay
Modified: 2015-08-10 03:44 EDT
4 users

See Also:
Fixed In Version: glusterfs-v3.4.0.33rhs
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2015-08-10 03:44:21 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Krutika Dhananjay 2013-09-05 04:41:42 EDT
Description of problem:
After upgrading one node of a cluster in which all nodes run 2.0 to 2.1, the upgraded peer gets rejected with a volume checksum mismatch.

Version-Release number of selected component (if applicable):
RHS-2.1 built from source.

How reproducible:

Steps to Reproduce:

1. Create a cluster of 2 nodes, both operating in version 2.0.
2. Create a volume with bricks on both the nodes.
3. Start the volume.
4. Mount the volume from a 2.0 client.
5. Upgrade one of the servers to 2.1.
6. Restart glusterd on this 2.1 node.
7. Observe that the upgraded node gets rejected with a volume cksum mismatch.

Actual results:
The volume cksums computed on the two nodes differ.

Expected results:
The volume cksum must be the same on both nodes.

Additional info:
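One quick way to confirm the mismatch described above is to compare the `cksum` file that glusterd keeps under /var/lib/glusterd/vols/<VOLNAME>/ on each peer. The helper below is a minimal sketch, not part of this report: it assumes the file carries an `info=<checksum>` line, and the volume name `testvol` and the `/tmp/peer2.cksum` path are placeholders for this illustration.

```shell
#!/bin/sh
# cksums_match FILE1 FILE2
# Succeeds (exit 0) when the two glusterd cksum files record the same
# "info=<checksum>" value, and fails otherwise.
cksums_match() {
    c1=$(awk -F= '/^info=/ { print $2 }' "$1")
    c2=$(awk -F= '/^info=/ { print $2 }' "$2")
    [ -n "$c1" ] && [ "$c1" = "$c2" ]
}

# Example (placeholder paths): compare the local copy against one fetched
# from the other peer, e.g. via
#   scp peer2:/var/lib/glusterd/vols/testvol/cksum /tmp/peer2.cksum
# cksums_match /var/lib/glusterd/vols/testvol/cksum /tmp/peer2.cksum \
#     && echo "cksums match" || echo "cksum mismatch"
```

When the values disagree, `gluster peer status` on the surviving node should show the upgraded peer in the rejected state, matching step 7 above.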
Comment 4 Krutika Dhananjay 2013-09-19 07:25:49 EDT
The fix is available in glusterfs-v3.4.0.33rhs; hence, moving the bug to ON_QA.
Comment 5 Sachidananda Urs 2013-12-16 00:43:39 EST
This is a sanity-only bug.
Ref: https://mojo.redhat.com/docs/DOC-20570 - Upgrade tests.
