Red Hat Bugzilla – Bug 1276587
[GlusterD]: After updating one of rhgs 2.1.6 node to 3.1.2 in two node cluster, volume status is failing
Last modified: 2016-03-01 00:49:15 EST
Description of problem:
After updating one node of a two-node cluster from RHGS 2.1.6 to 3.1.2, `gluster volume status` fails on both nodes (updated and not updated) with the error message:
[root@ ~]# gluster volume status
Staging failed on host_name. Please check log file for details. //host_name = other node IP
GlusterD Log error:
[2015-10-30 08:32:13.663409] E [MSGID: 106524] [glusterd-op-sm.c:1808:glusterd_op_stage_stats_volume] 0-glusterd: Volume name get failed
[2015-10-30 08:32:13.663521] E [MSGID: 106301] [glusterd-op-sm.c:5214:glusterd_op_ac_stage_op] 0-management: Stage failed on operation 'Volume Profile', Status : -2
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Have a two-node cluster running RHGS 2.1.6.
2. Create distributed and replica volumes using both nodes.
3. Update one of the nodes to 3.1.2 (glusterfs-3.7.5-5).
4. Start glusterd and check the volume status.
Actual results:
Volume status fails with "Staging failed on host_name. Please check log file for details." //host_name = IP of the other node in the cluster
Expected results:
Volume status should work and show the details of all volumes in the cluster.
Additional info for Debug:
After updating both nodes, volume status worked fine.
Upstream patch for this bug: http://review.gluster.org/#/c/12473/
The new tiering feature added the GD_OP_DETACH_TIER and GD_OP_TIER_MIGRATE
values in the middle of the glusterd_op_ enum array. When the cluster has
two nodes and one node is upgraded from a lower version to a higher one,
executing any command whose glusterd operation enum value is higher than
GD_OP_DETACH_TIER and GD_OP_TIER_MIGRATE causes the first (upgraded) node
to send an enum value that is shifted up by the two slots occupied by
GD_OP_DETACH_TIER and GD_OP_TIER_MIGRATE. When this enum value reaches the
second node of the cluster (which has not been upgraded yet and does not
have the GD_OP_DETACH_TIER and GD_OP_TIER_MIGRATE entries), the second
node picks up the wrong command from the array based on the first node's
enum value, and command execution fails.
The fix is to place the enum value for every new-feature glusterd
operation at the end of the enum array.
*** Bug 1277791 has been marked as a duplicate of this bug. ***
Downstream patch for this bug: https://code.engineering.redhat.com/gerrit/#/c/60757/
Verified this bug with the build glusterfs-3.7.5-6.
The fix is working well and the issue described is no longer seen.
Moving the bug to the verified state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.