Bug 1739177 - Glusterd holds incorrect information about the brick port
Summary: Glusterd holds incorrect information about the brick port
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: rhgs-3.5
Hardware: x86_64
OS: Linux
Target Milestone: ---
Assignee: Nikhil Ladha
Depends On:
Reported: 2019-08-08 16:59 UTC by SATHEESARAN
Modified: 2020-08-17 11:18 UTC
CC: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2020-07-08 06:56:33 UTC
Target Upstream Version:

Attachments (Terms of Use)
sosreport-from-grafton10-node (17.32 MB, application/octet-stream)
2019-08-08 17:05 UTC, SATHEESARAN
glusterd.log_from_grafton10 (3.01 MB, application/octet-stream)
2019-08-08 17:08 UTC, SATHEESARAN

Description SATHEESARAN 2019-08-08 16:59:56 UTC
Description of problem:
In an RHHI-V environment, 3 nodes were running the virt + gluster services. When the environment was upgraded in an in-service fashion, glusterd on one of the nodes held incorrect information about the brick port after the node was rebooted.

'gluster volume status' reports that the brick is listening on port 49154, but the brick process is actually listening on port 49152.
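The mismatch can be confirmed directly on the affected node by comparing the port glusterd reports with the socket the brick process actually holds. A minimal sketch, with a hypothetical volume name (`engine`) used for illustration only:

```shell
# Hypothetical volume name; substitute the affected volume.
VOLNAME=engine

# Port glusterd reports for the brick (TCP Port column):
gluster volume status "$VOLNAME"

# Port the brick process (glusterfsd) is actually bound to:
ss -tlnp | grep glusterfsd
```

In the failure described here, the TCP Port column would show 49154 while the glusterfsd socket is bound to 49152.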

Version-Release number of selected component (if applicable):
RHGS 3.5.0

How reproducible:
Only once

Steps to Reproduce:
1. Created a node with glusterfs-6.0-7 (interim build)
2. Upgraded one of the nodes to glusterfs-6.0-9 in an in-service fashion, and rebooted the node
3. Post reboot, checked the output of 'gluster volume heal'

Actual results:
'gluster volume status' indicates that the brick is using port 49154, but the brick is actually using port 49152.

Expected results:
glusterd should report the port that the brick process is actually listening on.
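For scripting such a consistency check, the reported port can be pulled out of a 'gluster volume status' brick line programmatically. A minimal Python sketch, assuming the usual column layout "Brick <host>:<path>  <tcp-port>  <rdma-port>  <online>  <pid>" (the sample line below is illustrative, not taken from this bug's logs):

```python
import re

def reported_brick_port(status_line):
    """Return the TCP port glusterd reports for a brick line, or None."""
    # First field after "Brick <host>:<path>" is the TCP Port column;
    # glusterd prints "N/A" when the brick is not online.
    m = re.match(r"Brick\s+\S+\s+(\d+|N/A)", status_line)
    if not m or m.group(1) == "N/A":
        return None
    return int(m.group(1))

# Illustrative sample line:
line = "Brick grafton10:/gluster_bricks/engine/engine  49154  0  Y  12345"
print(reported_brick_port(line))  # 49154
```

Comparing this value against the port glusterfsd actually binds (e.g. from `ss -tlnp`) makes the inconsistency described above detectable in automation.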

Comment 2 SATHEESARAN 2019-08-08 17:05:35 UTC
Created attachment 1601886 [details]
sosreport-from-grafton10-node

Comment 3 SATHEESARAN 2019-08-08 17:08:45 UTC
Created attachment 1601888 [details]
glusterd.log_from_grafton10
