Bug 1267488 - [upgrade] Volume status doesn't show proper information when nodes are upgraded from 2.1.6 to 3.1.1
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
x86_64 Linux
unspecified Severity high
: ---
: RHGS 3.1.2
Assigned To: hari gowtham
: ZStream
Depends On: 1276587
Blocks: 1260783
Reported: 2015-09-30 03:39 EDT by Shashank Raj
Modified: 2016-11-07 22:53 EST (History)
10 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
The upgraded nodes report separate ports for RDMA and TCP in volume status, while older Gluster versions use a single value for the port. Because of this mismatch, the lower version does not display all the bricks in the volume status command. With this fix, the RDMA port is assigned a default value when the lower version does not supply one. Since the values are then always available, all the bricks are displayed in a mixed-version cluster.
Story Points: ---
Clone Of:
Last Closed: 2016-03-01 00:37:23 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments (Terms of Use)
log from the upgraded node (69.60 KB, text/plain)
2015-09-30 03:39 EDT, Shashank Raj

Description Shashank Raj 2015-09-30 03:39:14 EDT
Created attachment 1078550 [details]
log from the upgraded node

Description of problem:

After upgrading one of the replica pairs in the cluster from 2.1.6 to 3.1.1, volume status does not show the bricks from the non-upgraded nodes.

Version-Release number of selected component (if applicable):
RHGS 2.1 update 6
RHGS 3.1.1 latest

How reproducible:

Steps to Reproduce:
1. Install 2.1.6 on 4 nodes and create a trusted pool.
2. Create a dispersed volume, a replicated volume, and 2 distributed volumes.
3. Upgrade 2 nodes of the cluster from 2.1.6 to 3.1.1 (refer to the install guide).
4. Observe that once the setup is upgraded, volume status on the upgraded nodes doesn't show the bricks of the non-upgraded nodes. This is observed for all the volumes in the cluster.

Actual results:
Observe that once the setup is upgraded and the cluster is in a mixed state, volume status on the upgraded nodes doesn't show the bricks of the non-upgraded nodes. This is observed for all the volumes in the cluster.

Expected results:
After the upgrade of the nodes, when the cluster is in a mixed state, volume status should show proper information reflecting all the bricks in the cluster.

Additional info:

Logs from when the cluster was in the mixed state are attached.
Comment 2 hari gowtham 2015-10-07 02:16:13 EDT
This bug has been fixed in the upstream and the link for the upstream bug is:
Comment 4 hari gowtham 2015-10-16 04:16:36 EDT
the patch for this is at https://code.engineering.redhat.com/gerrit/#/c/59169/
Comment 5 hari gowtham 2015-10-16 04:30:43 EDT
The patch was pulled into 3.1.2 from 3.7.5, so the above link (patch: https://code.engineering.redhat.com/gerrit/#/c/59169/ ) need not be merged, as the fix is already available. I'm abandoning the patch henceforth.
Comment 6 Anand Nekkunti 2015-10-16 04:47:37 EDT
Per comment 5, the patch is available in the 3.1.2 branch, so moving to ON_QA.
Comment 7 Byreddy 2015-10-21 06:01:03 EDT
Verification of this bug is blocked by https://bugzilla.redhat.com/show_bug.cgi?id=1271999.

Verifying this bug requires a volume status check after updating to 3.1.2. However, after updating one node in a two-node cluster, peer status on the updated node shows the non-updated node as "Disconnected". So when I issue volume status on the updated node, it shows only the bricks hosted by the updated node, not the bricks of the non-updated node.

This bug will be verified once we get a build with the fix for bz-1271999.
Comment 8 Byreddy 2015-11-16 00:08:27 EST
Verified this bug with RHGS version 3.1.2 (glusterfs-3.7.5-6).
The fix is working fine and the reported issue is no longer seen,
so moving the bug to the next state.
Comment 10 errata-xmlrpc 2016-03-01 00:37:23 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

