Bug 1505433 - [GSS]Brick port mismatch
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 3.3
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.3.1
Assigned To: Gaurav Yadav
QA Contact: Bala Konda Reddy M
Keywords: ZStream
Depends On:
Blocks: 1475688 1506589 1507748 1507752
 
Reported: 2017-10-23 10:46 EDT by Abhishek Kumar
Modified: 2017-11-28 22:31 EST
CC List: 9 users

See Also:
Fixed In Version: glusterfs-3.8.4-51
Doc Type: Bug Fix
Doc Text:
Rebooting or restarting glusterd service on a node did not retain the brick port information. This resulted in a mismatch of port information in the ‘gluster volume status’ command, and the actual port that the brick process uses. With this fix, glusterd persists brick port information on every brick restart and thus avoids port mismatch.
Story Points: ---
Clone Of:
Clones: 1506589
Environment:
Last Closed: 2017-11-28 22:31:38 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID: Red Hat Product Errata RHBA-2017:3276
Priority: normal
Status: SHIPPED_LIVE
Summary: glusterfs bug fix update
Last Updated: 2017-11-29 03:28:52 EST

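The Doc Text above describes glusterd persisting the brick port information across brick restarts. As a rough illustration only, the persisted port can be compared with the one the brick process was started with; the store path, the listen-port key, and the volume name testvol below are assumptions based on glusterd's on-disk volume store, not part of this bug report.

# Port persisted by glusterd for each brick (path and key name are assumptions)
grep listen-port /var/lib/glusterd/vols/testvol/bricks/*

# Port the brick process was actually started with
ps -ef | grep glusterfsd | grep -o 'brick-port [0-9]*'
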
Description Abhishek Kumar 2017-10-23 10:46:32 EDT
Description of problem:

After a force start of the volume, the brick port reported by 'gluster volume status' does not match the port shown in the ps output.

Version-Release number of selected component (if applicable):

RHGS 3.3

How reproducible:

Customer Environment

Actual results:

The brick port reported in the volume status output differs from the one shown in the ps output.


Expected results:


The brick port should be the same in both the volume status output and the ps output.

Additional info:
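
As an illustration only (the original report gives no reproduction steps beyond "customer environment"), the mismatch can be observed roughly as follows, assuming a volume named testvol:

# Force-start the volume, then compare the two views of the brick port
gluster volume start testvol force

# Port reported by glusterd
gluster volume status testvol

# Port the brick process was actually started with (--brick-port on the command line)
ps -ef | grep glusterfsd

Before the fix, the port in the status output could differ from the one in the ps output after such a restart.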
Comment 36 Bala Konda Reddy M 2017-11-10 04:30:05 EST
Verified with glusterfs-3.8.4-51.

Before creating any volume on the cluster, I started a program listening on port 49152.

Then I created a volume and started it; the brick processes started from port 49153, as 49152 was already in use by the other program.

I then terminated the program running on port 49152.

Next I brought down the bricks on the node and started the volume with force. The bricks came back online, and the ports shown in the volume status match the ps output, starting from 49152.
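
The steps above correspond roughly to the following commands (a sketch only; nc as the placeholder listener, pkill for bringing the bricks down, and the omitted brick list are assumptions, not part of this comment):

# Occupy port 49152 before any volume exists
# (some nc variants need: nc -l -p 49152)
nc -l 49152 &

# Create and start the volume; its bricks then bind from 49153 onwards
gluster volume create testvol <brick-list>
gluster volume start testvol

# Free port 49152 again and bring the local bricks down
kill %1
pkill glusterfsd

# Force-start the volume and compare the two views of the brick ports
gluster volume start testvol force
gluster volume status testvol
ps -ef | grep glusterfsd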

For 10.70.37.104:/bricks/brick0/testvol_brick0 the port is 49152 in the volume status output:

[root@dhcp37-104 home]# gluster vol status
Status of volume: testvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.104:/bricks/brick0/testvol_b
rick0                                       49152     0          Y       2648 
Brick 10.70.37.92:/bricks/brick0/testvol_br
ick1                                        49152     0          Y       21085
Brick 10.70.37.100:/bricks/brick0/testvol_b
rick2                                       49152     0          Y       23611
Brick 10.70.37.81:/bricks/brick0/testvol_br
ick3                                        49152     0          Y       19525
Brick 10.70.37.104:/bricks/brick1/testvol_b
rick4                                       49155     0          Y       2654 
Brick 10.70.37.92:/bricks/brick1/testvol_br
ick5                                        49153     0          Y       21104
Self-heal Daemon on localhost               N/A       N/A        Y       2637 
Self-heal Daemon on dhcp37-81.lab.eng.blr.r
edhat.com                                   N/A       N/A        Y       19545
Self-heal Daemon on dhcp37-92.lab.eng.blr.r
edhat.com                                   N/A       N/A        Y       21124
Self-heal Daemon on dhcp37-100.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       23631
 
Task Status of Volume testvol
------------------------------------------------------------------------------
There are no active volume tasks


ps output for that particular brick:

root      2648     1  0 04:19 ?        00:00:00 /usr/sbin/glusterfsd -s 10.70.37.104 --volfile-id testvol.10.70.37.104.bricks-brick0-testvol_brick0 -p /var/run/gluster/vols/testvol/10.70.37.104-bricks-brick0-testvol_brick0.pid -S /var/run/gluster/725f696f45b00be8e7e22058236a66d5.socket --brick-name /bricks/brick0/testvol_brick0 -l /var/log/glusterfs/bricks/bricks-brick0-testvol_brick0.log --xlator-option *-posix.glusterd-uuid=bce30431-b159-4a13-a115-0b2d5f85bc02 --brick-port 49152 --xlator-option testvol-server.listen-port=49152
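
The listening port can also be cross-checked directly against the status output; the ss invocation below is an assumption, not part of the original verification:

# pid 2648 should be listening on 49152, matching the volume status output
ss -ltnp | grep ':49152'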


Hence, marking the bug as verified.
Comment 44 errata-xmlrpc 2017-11-28 22:31:38 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3276
