Description of problem:
After a force start of the volume, the brick port reported by 'gluster volume status' does not match the port shown in the ps output.

Version-Release number of selected component (if applicable):
RHGS 3.3

How reproducible:
Customer environment

Actual results:
The brick port differs between the volume status output and the ps output.

Expected results:
The brick port should be the same in both the volume status and the ps output.

Additional info:
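For reference, one quick way to spot the mismatch is to compare the port glusterd reports for a brick against the listen-port the brick process was actually started with. A minimal sketch, assuming the volume name and brick path used later in this report (gluster wraps long brick names onto the next line, hence the -A1):

  # Port reported by glusterd for the brick
  gluster volume status testvol | grep -A1 'brick0/testvol_brick0'

  # Port the brick process was actually started with
  ps -ef | grep '[g]lusterfsd' | grep 'testvol_brick0' | grep -o 'listen-port=[0-9]*'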
Verified on 3.8.4-51.

Steps:
1. Before creating any volume on the cluster, started a program listening on port 49152.
2. Created a volume and started it; the brick processes started from port 49153, since 49152 was already in use by the other program.
3. Terminated the program running on port 49152.
4. Brought down the bricks on the node and started the volume with force.

The bricks are online, and the ports shown in the volume status match the ps output, starting from 49152. For 10.70.37.104:/bricks/brick0/testvol_brick0 the port is "49152" in the volume status:

[root@dhcp37-104 home]# gluster vol status
Status of volume: testvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.104:/bricks/brick0/testvol_b
rick0                                       49152     0          Y       2648
Brick 10.70.37.92:/bricks/brick0/testvol_br
ick1                                        49152     0          Y       21085
Brick 10.70.37.100:/bricks/brick0/testvol_b
rick2                                       49152     0          Y       23611
Brick 10.70.37.81:/bricks/brick0/testvol_br
ick3                                        49152     0          Y       19525
Brick 10.70.37.104:/bricks/brick1/testvol_b
rick4                                       49155     0          Y       2654
Brick 10.70.37.92:/bricks/brick1/testvol_br
ick5                                        49153     0          Y       21104
Self-heal Daemon on localhost               N/A       N/A        Y       2637
Self-heal Daemon on dhcp37-81.lab.eng.blr.r
edhat.com                                   N/A       N/A        Y       19545
Self-heal Daemon on dhcp37-92.lab.eng.blr.r
edhat.com                                   N/A       N/A        Y       21124
Self-heal Daemon on dhcp37-100.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       23631

Task Status of Volume testvol
------------------------------------------------------------------------------
There are no active volume tasks

ps output of that particular brick:

root      2648     1  0 04:19 ?        00:00:00 /usr/sbin/glusterfsd -s 10.70.37.104 --volfile-id testvol.10.70.37.104.bricks-brick0-testvol_brick0 -p /var/run/gluster/vols/testvol/10.70.37.104-bricks-brick0-testvol_brick0.pid -S /var/run/gluster/725f696f45b00be8e7e22058236a66d5.socket --brick-name /bricks/brick0/testvol_brick0 -l /var/log/glusterfs/bricks/bricks-brick0-testvol_brick0.log --xlator-option *-posix.glusterd-uuid=bce30431-b159-4a13-a115-0b2d5f85bc02 --brick-port 49152 --xlator-option testvol-server.listen-port=49152

Hence marking the bug as verified.
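The verification steps above can be scripted roughly as follows. This is a sketch under assumptions: the port-holder program is stood in for by a Python one-liner, and the volume create step is elided since the exact layout is not given in this report (see the brick list in the status output above):

  # Hold port 49152 before any volume exists (stand-in for the program mentioned above)
  python -c 'import socket,time; s=socket.socket(); s.bind(("",49152)); s.listen(1); time.sleep(3600)' &
  HOLDER=$!

  # Create and start the volume as usual; bricks come up from 49153 because 49152 is busy
  gluster volume start testvol

  # Free the port, bring the bricks on this node down, then force start
  kill $HOLDER
  pkill -f 'glusterfsd.*testvol'      # kills the testvol brick processes on this node
  gluster volume start testvol force

  # Ports in 'gluster volume status' should now match --brick-port / listen-port in ps
  gluster volume status testvol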
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:3276