Description of problem:
If a tiered volume is created in a storage pool containing more than one node, the cksum of the volume will differ across the nodes.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Always

Steps to Reproduce:
1. Create a storage pool with two or more nodes.
2. Create a tiered volume.
3. Restart glusterd.
4. Check the peer status.

Actual results:
The peer is not in the "Peer in Cluster" state.

Expected results:
The peer should be in the "Peer in Cluster" state.

Additional info:
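For illustration, the steps above roughly correspond to the CLI sequence below. This is a minimal sketch: the host names, brick paths, and volume name are placeholders, and the attach-tier syntax may vary between GlusterFS releases.

# On node1: form a two-node pool and create a plain volume (paths are examples)
gluster peer probe node2
gluster volume create testvol node1:/bricks/cold1 node2:/bricks/cold2
gluster volume start testvol

# Attach a hot tier, turning testvol into a tiered volume
gluster volume attach-tier testvol node1:/bricks/hot1 node2:/bricks/hot2

# Restart glusterd on one node, then check whether the peers stay in the cluster
systemctl restart glusterd
gluster peer status

# Compare the stored volume checksum on each node; they should match
cat /var/lib/glusterd/vols/testvol/cksum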
REVIEW: http://review.gluster.org/10406 (glusterd/tiering : cksum mismatch for tiered volume) posted (#1) for review on master by mohammed rafi kc (rkavunga)
REVIEW: http://review.gluster.org/10406 (glusterd/tiering : cksum mismatch for tiered volume) posted (#2) for review on master by mohammed rafi kc (rkavunga)
REVIEW: http://review.gluster.org/10406 (glusterd/tiering : cksum mismatch for tiered volume) posted (#3) for review on master by mohammed rafi kc (rkavunga)
REVIEW: http://review.gluster.org/10406 (glusterd/tiering : cksum mismatch for tiered volume) posted (#4) for review on master by mohammed rafi kc (rkavunga)
REVIEW: http://review.gluster.org/10406 (glusterd/tiering : cksum mismatch for tiered volume) posted (#6) for review on master by mohammed rafi kc (rkavunga)
The fix is not yet merged in master; moving the bug back to the POST state.
REVIEW: http://review.gluster.org/10406 (glusterd/tiering : cksum mismatch for tiered volume) posted (#7) for review on master by mohammed rafi kc (rkavunga)
REVIEW: http://review.gluster.org/10406 (glusterd/tiering : cksum mismatch for tiered volume) posted (#8) for review on master by Vijay Bellur (vbellur)
REVIEW: http://review.gluster.org/10406 (glusterd/tiering : cksum mismatch for tiered volume) posted (#9) for review on master by mohammed rafi kc (rkavunga)
This change should not be in "ON_QA"; the patch posted for this bug is only available on the master branch and not in a release yet. Moving back to MODIFIED until there is a beta release for the next GlusterFS version.
The fix for this BZ is already present in a GlusterFS release. You can find a clone of this BZ that was fixed in a GlusterFS release and closed. Hence, this mainline BZ is being closed as well.
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user