Bug 1247947 - [upgrade] After in-service software upgrade from RHGS 2.1 to RHGS 3.1, bumping up op-version failed
Summary: [upgrade] After in-service software upgrade from RHGS 2.1 to RHGS 3.1, bumping up op-version failed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.2
Assignee: Satish Mohan
QA Contact: Byreddy
URL:
Whiteboard: glusterd
Depends On:
Blocks: 1260783
 
Reported: 2015-07-29 10:15 UTC by SATHEESARAN
Modified: 2016-03-10 07:20 UTC
CC: 9 users

Fixed In Version: glusterfs-3.7.5-0.3
Doc Type: Bug Fix
Doc Text:
Previously, the command to bump up the op-version failed after upgrading to Red Hat Gluster Storage 3.1. With this release, this issue is fixed.
Clone Of:
Clones: 1248298
Environment:
Last Closed: 2016-03-01 05:33:08 UTC




Links
System ID: Red Hat Product Errata RHBA-2016:0193
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Gluster Storage 3.1 update 2
Last Updated: 2016-03-01 10:20:36 UTC

Description SATHEESARAN 2015-07-29 10:15:28 UTC
Description of problem:
------------------------
Upgraded RHGS 2.1 nodes to RHGS 3.1 using In-service Software Upgrade.
After the upgrade, bumping up the op-version to 30703 failed.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
RHGS 2.1 ( glusterfs-3.4.0.72-1.el6rhs )
RHGS 3.1 ( glusterfs-3.7.1-11.el6rhs )

How reproducible:
------------------
Always

Steps to Reproduce:
--------------------
1. Upgrade RHGS 2.1 nodes to RHGS 3.1 using In-service Software Upgrade
2. After the upgrade, bump up the op-version to 30703 (see the command sketch below)
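
For reference, the whole sequence boils down to the commands below. This is a minimal sketch: the documented in-service upgrade procedure has more steps (for example, waiting for pending self-heals before moving to the next node), and the EL6-style service commands are assumed from the el6rhs builds listed above.

    # On each node, one at a time (in-service upgrade, simplified):
    service glusterd stop            # stop the management daemon on this node
    yum update 'glusterfs*'          # upgrade the RHGS packages 2.1 -> 3.1
    service glusterd start           # restart; the node rejoins the cluster

    # After every node is upgraded, from any one node:
    gluster volume set all cluster.op-version 30703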

Actual results:
---------------
Bumping up the op-version failed

Expected results:
-----------------
Bumping up the op-version should succeed

Additional info:
----------------
[2015-07-29 11:50:31.860731]  : volume set all cluster.op-version 30703 : FAILED :

[root@ ~]# gluster volume get drvol op-version
Option                                  Value
------                                  -----
cluster.op-version                      30703

Following are the logs from the two nodes.

NODE-1
----------
[2015-07-29 11:50:31.860355] E [MSGID: 106116] [glusterd-mgmt.c:134:gd_mgmt_v3_collate_errors] 0-management: Unlocking failed on dhcp37-126.lab.eng.blr.redhat.com. Please check log file for details.
[2015-07-29 11:50:31.860493] E [MSGID: 106152] [glusterd-syncop.c:1562:gd_unlock_op_phase] 0-management: Failed to unlock on some peer(s)
[2015-07-29 11:50:31.860587] E [MSGID: 106025] [glusterd-locks.c:641:glusterd_mgmt_v3_unlock] 0-management: name is null. [Invalid argument]
[2015-07-29 11:50:31.860666] E [MSGID: 106118] [glusterd-syncop.c:1588:gd_unlock_op_phase] 0-management: Unable to release lock for (null)
[2015-07-29 11:50:31.875251] I [run.c:190:runner_log] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7fdcd220c5e0] (--> /usr/lib64/libglusterfs.so.0(runner_log+0x105)[0x7fdcd225ff95] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x4cc)[0x7fdcc6cac10c] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(+0xed422)[0x7fdcc6cac422] (--> /lib64/libpthread.so.0(+0x3429c07a51)[0x7fdcd12f3a51] ))))) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=all -o cluster.op-version=30703 --gd-workdir=/var/lib/glusterd
[2015-07-29 11:50:31.893561] I [run.c:190:runner_log] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7fdcd220c5e0] (--> /usr/lib64/libglusterfs.so.0(runner_log+0x105)[0x7fdcd225ff95] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x4cc)[0x7fdcc6cac10c] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(+0xed422)[0x7fdcc6cac422] (--> /lib64/libpthread.so.0(+0x3429c07a51)[0x7fdcd12f3a51] ))))) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=all -o cluster.op-version=30703 --gd-workdir=/var/lib/glusterd


NODE-2
-----------

[2015-07-29 11:50:31.622533] E [MSGID: 106118] [glusterd-op-sm.c:3619:glusterd_op_ac_unlock] 0-management: Unable to release lock for all
[2015-07-29 11:50:31.622788] E [MSGID: 106376] [glusterd-op-sm.c:7286:glusterd_op_sm] 0-management: handler returned: -1

Comment 1 SATHEESARAN 2015-07-29 11:32:24 UTC
The volume set command fails, but the op-version actually gets bumped up.
There are no functional problems.
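
One way to confirm the bump independently of the CLI is to read the operating version that glusterd persists in its working directory (a quick check; /var/lib/glusterd is the default workdir, as also seen in the hook-script logs above):

    # glusterd records the cluster op-version on disk:
    grep operating-version /var/lib/glusterd/glusterd.info
    operating-version=30703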

Comment 2 Anand Nekkunti 2015-10-05 05:29:15 UTC
upstream patch merged: http://review.gluster.org/#/c/11798/

Comment 5 Anand Nekkunti 2015-10-15 17:26:31 UTC
Due to the rebase to 3.7.5, the patch was pulled into the 3.1.2 branch automatically, so moving to MODIFIED.

Comment 6 Byreddy 2015-10-20 09:53:23 UTC
This bug is verified with RHGS version glusterfs-3.7.5-0.2.

Steps Done:
----------
1. Created a two-node cluster with RHGS 2.1 Update 6.
2. Created a distributed volume and a replica volume.
3. Did an in-service update from 2.1u6 to 3.1.2, one node at a time.
4. Bumped up the op-version to 30706 (the op-version for 3.1.2), and it worked successfully:
    [root@ ~]# gluster volume set all cluster.op-version 30706
    volume set: success


5. Verified the op-version by querying it:
[root@ ~]# gluster volume get replica  cluster.op-version
Option                                  Value                                   
------                                  -----                                   
cluster.op-version                      30706                                   

[root@ ~]# gluster volume get Dis  cluster.op-version
Option                                  Value                                   
------                                  -----                                   
cluster.op-version                      30706                                   
[root@ ~]# 
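
To spot-check all volumes in one go rather than querying each by name, a small loop over gluster volume list would do (a sketch built only from the commands already used above):

    for v in $(gluster volume list); do
        printf '%s ' "$v"
        gluster volume get "$v" cluster.op-version | awk '/^cluster.op-version/ {print $2}'
    done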

Moving to VERIFIED state based on the above info.

Comment 10 errata-xmlrpc 2016-03-01 05:33:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html

