+++ This bug was initially created as a clone of Bug #1223213 +++

Description of problem:
When a node in a multi-node cluster is upgraded from 3.5 to 3.7, gluster volume status fails with a "locking failed" error message.

Version-Release number of selected component (if applicable):
Mainline

How reproducible:
Always

Steps to Reproduce:
1. Create a multi-node cluster running version 3.5
2. Upgrade any one node to 3.7
3. Execute gluster volume status

Actual results:
gluster volume status fails with a "locking failed" error message.

Expected results:
gluster volume status should succeed.

Additional info:
REVIEW: http://review.gluster.org/10959 (glusterd : allocate peerid to store in frame->cookie) posted (#3) for review on release-3.7 by Atin Mukherjee (amukherj)
COMMIT: http://review.gluster.org/10959 committed in release-3.7 by Krishnan Parthasarathi (kparthas)
------
commit 823da7104b4725469a597920d0171a21ff9ff707
Author: Atin Mukherjee <amukherj>
Date:   Wed May 20 14:33:41 2015 +0530

    glusterd : allocate peerid to store in frame->cookie

    Backport of http://review.gluster.org/10842

    Commit a1de3b05 took the peerid from the stack, stored it in
    frame->cookie, and referred to it in the subsequent callback. The
    existence of this variable is not guaranteed in the cbk since it is
    not dynamically allocated. The fix is to dynamically allocate and
    manage the peerid in the frame cookie.

    This patch also fixes a problem in gd_sync_task_begin () where the
    unlock is not triggered if the cluster is running with an op-version
    lower than 3.6, resulting in commands failing with "another
    transaction is in progress".

    Change-Id: I0d22cf663df53ef3769585703944577461061312
    BUG: 1223215
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/10842
    Tested-by: NetBSD Build System
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Kaushal M <kaushal>
    (cherry picked from commit 37f365843bed87728048da1f56de22290f5cb70f)
    Reviewed-on: http://review.gluster.org/10959
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Krishnan Parthasarathi <kparthas>
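For readers unfamiliar with the bug class described in the commit message, below is a minimal, self-contained C sketch of the pattern: storing the address of a stack variable in an asynchronous frame cookie and dereferencing it in the callback after the caller has returned. The names used here (call_frame_t, peer_id_t, submit_status_request, status_cbk, run_callbacks) are illustrative stand-ins, not the actual glusterd/RPC API; the real patch allocates and frees the peerid around the actual RPC submission and callback.

#include <stdio.h>
#include <stdlib.h>

typedef char peer_id_t[37];               /* UUID string, 36 chars + NUL */

typedef struct call_frame {
    void *cookie;                         /* opaque data carried to the cbk */
    void (*cbk)(struct call_frame *frame);
} call_frame_t;

static call_frame_t pending;              /* stands in for a queued RPC */

/* Callback runs long after the submitting function has returned. */
static void status_cbk(call_frame_t *frame)
{
    peer_id_t *peerid = frame->cookie;
    printf("reply tagged with peer %s\n", *peerid);
    free(peerid);                         /* cookie owns its allocation */
    frame->cookie = NULL;
}

static void submit_status_request(const char *peer_uuid)
{
    /* Broken pattern (what the original change effectively did):
     *
     *     peer_id_t local;
     *     snprintf(local, sizeof(local), "%s", peer_uuid);
     *     pending.cookie = &local;   // dangles once this function returns
     *
     * Fixed pattern: the cookie gets its own heap allocation, which the
     * callback frees after use.
     */
    peer_id_t *peerid = malloc(sizeof(*peerid));
    if (peerid == NULL)
        return;
    snprintf(*peerid, sizeof(*peerid), "%s", peer_uuid);

    pending.cookie = peerid;
    pending.cbk = status_cbk;
}

static void run_callbacks(void)
{
    /* Stands in for the RPC layer invoking the callback much later. */
    if (pending.cbk != NULL)
        pending.cbk(&pending);
}

int main(void)
{
    submit_status_request("6f7a3c2e-9d7b-4e21-8c11-2a10fe33aaaa");
    /* The stack frame of submit_status_request is gone by the time the
     * callback runs; only the heap-allocated cookie is still valid. */
    run_callbacks();
    return 0;
}

The broken variant is kept only as a comment because dereferencing the dangling pointer is undefined behavior; the safe variant hands ownership of the heap allocation to the callback, which frees it once the reply has been processed.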
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.1, please reopen this bug report.

glusterfs-3.7.1 has been announced on the Gluster Packaging mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.packaging/1
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user