Bug 1248298 - [upgrade] After upgrade from 3.5 to 3.6 onwards version, bumping up op-version failed
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Atin Mukherjee
QA Contact:
URL:
Whiteboard: glusterd
Depends On:
Blocks: 1249921 1250836
 
Reported: 2015-07-30 04:17 UTC by Atin Mukherjee
Modified: 2016-06-16 13:27 UTC
CC: 6 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of: 1247947
Environment:
Last Closed: 2016-06-16 13:27:16 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Atin Mukherjee 2015-07-30 04:17:28 UTC
+++ This bug was initially created as a clone of Bug #1247947 +++

Description of problem:
------------------------
Upgraded 3.5 nodes to 3.6/3.7.
After the upgrade, bumping up the op-version to 30703 failed.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
mainline

How reproducible:
------------------
Always

Steps to Reproduce:
--------------------
1. Upgrade 3.5 nodes to 3.6/3.7.
2. After the upgrade, bump up the op-version to 30703 (gluster volume set all cluster.op-version 30703).

Actual results:
---------------
Bumping up the op-version failed.

Expected results:
-----------------
Bumping up the op-version should succeed.

Additional info:
----------------
[2015-07-29 11:50:31.860731]  : volume set all cluster.op-version 30703 : FAILED :

[root@ ~]# gluster volume get drvol op-version
Option                                  Value
------                                  -----
cluster.op-version                      30703

The following are the logs from the two nodes:

NODE-1
----------
[2015-07-29 11:50:31.860355] E [MSGID: 106116] [glusterd-mgmt.c:134:gd_mgmt_v3_collate_errors] 0-management: Unlocking failed on dhcp37-126.lab.eng.blr.redhat.com. Please check log file for details.
[2015-07-29 11:50:31.860493] E [MSGID: 106152] [glusterd-syncop.c:1562:gd_unlock_op_phase] 0-management: Failed to unlock on some peer(s)
[2015-07-29 11:50:31.860587] E [MSGID: 106025] [glusterd-locks.c:641:glusterd_mgmt_v3_unlock] 0-management: name is null. [Invalid argument]
[2015-07-29 11:50:31.860666] E [MSGID: 106118] [glusterd-syncop.c:1588:gd_unlock_op_phase] 0-management: Unable to release lock for (null)
[2015-07-29 11:50:31.875251] I [run.c:190:runner_log] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7fdcd220c5e0] (--> /usr/lib64/libglusterfs.so.0(runner_log+0x105)[0x7fdcd225ff95] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x4cc)[0x7fdcc6cac10c] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(+0xed422)[0x7fdcc6cac422] (--> /lib64/libpthread.so.0(+0x3429c07a51)[0x7fdcd12f3a51] ))))) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=all -o cluster.op-version=30703 --gd-workdir=/var/lib/glusterd
[2015-07-29 11:50:31.893561] I [run.c:190:runner_log] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7fdcd220c5e0] (--> /usr/lib64/libglusterfs.so.0(runner_log+0x105)[0x7fdcd225ff95] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x4cc)[0x7fdcc6cac10c] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(+0xed422)[0x7fdcc6cac422] (--> /lib64/libpthread.so.0(+0x3429c07a51)[0x7fdcd12f3a51] ))))) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=all -o cluster.op-version=30703 --gd-workdir=/var/lib/glusterd


NODE-2
-----------

[2015-07-29 11:50:31.622533] E [MSGID: 106118] [glusterd-op-sm.c:3619:glusterd_op_ac_unlock] 0-management: Unable to release lock for all
[2015-07-29 11:50:31.622788] E [MSGID: 106376] [glusterd-op-sm.c:7286:glusterd_op_sm] 0-management: handler returned: -1

--- Additional comment from SATHEESARAN on 2015-07-29 07:32:24 EDT ---

The volume set command fails, but the op-version actually got bumped up.
Functionally, there are no problems.
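
For context on why the CLI reports FAILED while the setting still takes effect: glusterd runs a transaction as lock -> stage -> commit -> unlock, and the new op-version is persisted in the commit phase, so a failure in the later unlock phase cannot roll it back. The sketch below is a minimal, illustrative C model of that ordering; the phase functions are hypothetical stand-ins, not the real glusterd API (the actual flow lives in glusterd-syncop.c, per the logs above).

#include <stdio.h>

/* Hypothetical stand-ins for the glusterd transaction phases; each
 * returns 0 on success, -1 on failure. Illustrative only. */
static int run_lock_phase (void)   { return 0; }
static int run_stage_phase (void)  { return 0; }
static int run_commit_phase (void) { puts ("op-version persisted"); return 0; }
static int run_unlock_phase (void) { puts ("unlock failed (this bug)"); return -1; }

static int
transaction_sketch (void)
{
        int ret = -1;

        if (run_lock_phase () != 0)
                goto out;
        if (run_stage_phase () != 0)
                goto unlock;
        if (run_commit_phase () != 0)
                goto unlock;
        ret = 0;
unlock:
        /* Unlock runs after commit, so its failure cannot undo the
         * already-persisted op-version; it only flips the CLI result
         * to FAILED, which is exactly the symptom reported above. */
        if (run_unlock_phase () != 0)
                ret = -1;
out:
        return ret;
}

int
main (void)
{
        printf ("volume set all cluster.op-version 30703 : %s\n",
                transaction_sketch () == 0 ? "SUCCESS" : "FAILED");
        return 0;
}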

Comment 1 Anand Avati 2015-07-30 04:20:01 UTC
REVIEW: http://review.gluster.org/11798 (glusterd: fix op-version bump up flow) posted (#1) for review on master by Atin Mukherjee (amukherj)

Comment 2 Anand Avati 2015-08-04 04:26:01 UTC
COMMIT: http://review.gluster.org/11798 committed in master by Kaushal M (kaushal) 
------
commit b467b97e4c4546b7f870a3ac624d56c62bfa5cf9
Author: Atin Mukherjee <amukherj>
Date:   Thu Jul 30 09:40:24 2015 +0530

    glusterd: fix op-version bump up flow
    
    If a cluster is upgraded from 3.5 to the latest version, gluster volume set all
    cluster.op-version <VERSION> will throw an error message back to the user saying
    that unlocking failed. This happens because the unlock phase tries to release a
    volume-wise lock while the lock was taken cluster-wide. The problem surfaces
    because the op-version is updated in the commit phase, after which the unlock
    goes through the mgmt v3 framework when it should have used the cluster unlock.
    
    The fix is to decide which lock/unlock model to follow before invoking the lock
    phase.
    
    Change-Id: Iefb271a058431fe336a493c24d240ed833f279c5
    BUG: 1248298
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/11798
    Reviewed-by: Avra Sengupta <asengupt>
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Anand Nekkunti <anekkunt>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Kaushal M <kaushal>
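
In code terms, the fix described above amounts to capturing the locking decision once, before the lock phase, and reusing that same decision in the unlock phase, instead of re-reading the cluster op-version after the commit phase has already bumped it. Below is a minimal, runnable C sketch of the idea; every name in it is a hypothetical stand-in (the actual change landed in the glusterd syncop framework, see glusterd-syncop.c in the logs), and 30600 is assumed as the op-version threshold at which the volume-wise mgmt v3 locks replace the cluster-wide lock.

#include <stdbool.h>
#include <stdio.h>

/* Assumed threshold: mgmt v3 (volume-wise) locking replaces the
 * cluster-wide lock as of op-version 3.6.0 (30600). */
#define OP_VERSION_3_6_0 30600

/* Hypothetical stand-ins for glusterd internals; illustrative only. */
static int cluster_op_version = 30500;   /* cluster still at the 3.5 level */

static void take_cluster_lock (void)    { puts ("cluster-wide lock"); }
static void take_mgmt_v3_lock (void)    { puts ("mgmt v3 (volume-wise) lock"); }
static void release_cluster_lock (void) { puts ("cluster-wide unlock"); }
static void release_mgmt_v3_lock (void) { puts ("mgmt v3 unlock"); }

static void
commit_phase (void)
{
        /* The bump itself is committed here, mid-transaction. */
        cluster_op_version = 30703;
}

int
main (void)
{
        /* The fix: decide the locking model once, before the lock
         * phase, and remember it. The buggy flow re-checked
         * cluster_op_version in the unlock path; after commit_phase()
         * it is already >= 30600, so unlock wrongly went down the
         * mgmt v3 path with no volume name and failed ("name is null"
         * in the NODE-1 log above). */
        bool use_mgmt_v3 = (cluster_op_version >= OP_VERSION_3_6_0);

        if (use_mgmt_v3)
                take_mgmt_v3_lock ();
        else
                take_cluster_lock ();

        commit_phase ();

        if (use_mgmt_v3)          /* same decision as the lock phase */
                release_mgmt_v3_lock ();
        else
                release_cluster_lock ();

        return 0;
}

With the decision captured up front, a bump starting from a 3.5-level cluster releases the same cluster-wide lock it took, and the CLI no longer reports FAILED.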

Comment 3 Nagaprasad Sathyanarayana 2015-10-25 14:43:27 UTC
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ was fixed in a GlusterFS release and closed; hence this mainline BZ is being closed as well.

Comment 4 Niels de Vos 2016-06-16 13:27:16 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

