Bug 1334174 - gluster volume status volume port becomes "N/A" after restarting storage server nodes
Summary: gluster volume status volume port becomes "N/A" after restarting storage server nodes
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: GlusterFS
Classification: Community
Component: cli
Version: 3.6.9
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-09 05:59 UTC by coyang
Modified: 2023-09-14 03:22 UTC
CC List: 5 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2016-06-21 12:00:04 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
all glusterfs related log when problem happen (129.82 KB, application/zip)
2016-05-09 05:59 UTC, coyang

Description coyang 2016-05-09 05:59:07 UTC
Created attachment 1155148 [details]
all glusterfs related log when problem happen

Description of problem:
The port reported by gluster volume status becomes "N/A" after restarting the storage nodes.

Version-Release number of selected component (if applicable):
Glusterfs 3.6.9

How reproducible:
Restart the two storage nodes in sequence.

Steps to Reproduce:
1. Restart SN-0.
2. Restart SN-1.
3. Check the volume status after both storage nodes have started up (see the sketch below).
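
A minimal reproduction sketch, assuming the two storage nodes are reachable as SN-0 and SN-1 (the hostnames are placeholders for the actual environment):

# Reboot the storage nodes one after the other, letting each come back fully.
ssh SN-0 reboot
# ...wait until SN-0 is up and "gluster peer status" shows it connected...
ssh SN-1 reboot
# ...wait until SN-1 is up again...

# Then check the reported brick ports from any node:
gluster peer status
gluster volume status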

Actual results: the port of the bricks on SN-1 becomes "N/A" after the system recovers
# gluster volume status
Status of volume: config
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 169.254.0.6:/mnt/bricks/config/brick		49156	Y	3696
Brick 169.254.0.8:/mnt/bricks/config/brick		N/A	Y	850
NFS Server on localhost					N/A	N	N/A
Self-heal Daemon on localhost				N/A	Y	3241
NFS Server on 169.254.0.6				N/A	N	N/A
Self-heal Daemon on 169.254.0.6				N/A	Y	3746
NFS Server on 169.254.0.7				N/A	N	N/A
Self-heal Daemon on 169.254.0.7				N/A	Y	10061
 
Task Status of Volume config
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: export
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 169.254.0.6:/mnt/bricks/export/brick		49153	Y	3706
Brick 169.254.0.8:/mnt/bricks/export/brick		N/A	Y	833
NFS Server on localhost					N/A	N	N/A
Self-heal Daemon on localhost				N/A	Y	3241
NFS Server on 169.254.0.6				N/A	N	N/A
Self-heal Daemon on 169.254.0.6				N/A	Y	3746
NFS Server on 169.254.0.7				N/A	N	N/A
Self-heal Daemon on 169.254.0.7				N/A	Y	10061
 
Task Status of Volume export
------------------------------------------------------------------------------


Expected results: the brick on SN-1 (169.254.0.8) should report its port the same way the brick on SN-0 does, instead of showing:
Brick 169.254.0.8:/mnt/bricks/export/brick		N/A	Y	833

Additional info:
The glusterfs-related logs are in the attachment.
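
One way to narrow this down (a hedged sketch; the paths below are the glusterd defaults and the brick file name on this system may differ) is to compare, on SN-1, the port glusterd has recorded for the export brick with the port the brick process is actually listening on:

# Port recorded in glusterd's store for the export bricks
# (brick file names encode host and brick path; adjust the glob as needed)
grep listen-port /var/lib/glusterd/vols/export/bricks/*export*

# Ports the running brick processes (glusterfsd) are actually bound to
ss -tlnp | grep glusterfsd

If the brick is listening on a real port while the status output shows N/A, that points at glusterd not updating the brick port after the restart.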


[root@SN-1(test) /root]
# gluster volume info
 
Volume Name: config
Type: Replicate
Volume ID: 7784c697-ff69-4219-998c-ff85c4a60da5
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 169.254.0.6:/mnt/bricks/config/brick
Brick2: 169.254.0.8:/mnt/bricks/config/brick
Options Reconfigured:
network.ping-timeout: 5
cluster.consistent-metadata: on
cluster.server-quorum-type: server
cluster.server-quorum-ratio: 51%
 
Volume Name: export
Type: Replicate
Volume ID: f8361996-5611-4649-93db-f57f4b948bb8
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 169.254.0.6:/mnt/bricks/export/brick
Brick2: 169.254.0.8:/mnt/bricks/export/brick
Options Reconfigured:
network.ping-timeout: 5
cluster.consistent-metadata: on
cluster.server-quorum-type: server
cluster.server-quorum-ratio: 51%
 
Volume Name: home
Type: Replicate
Volume ID: 1cd7014d-bc8c-4da3-b22e-0e512e2d0297
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 169.254.0.6:/mnt/bricks/home/brick
Brick2: 169.254.0.8:/mnt/bricks/home/brick
Options Reconfigured:
network.ping-timeout: 5
cluster.consistent-metadata: on
cluster.server-quorum-type: server
cluster.server-quorum-ratio: 51%
 
Volume Name: log
Type: Replicate
Volume ID: 18ab077e-4fe9-4c18-af76-e15f33d98410
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 169.254.0.6:/mnt/bricks/log/brick
Brick2: 169.254.0.8:/mnt/bricks/log/brick
Options Reconfigured:
performance.io-thread-count: 16
performance.cache-refresh-timeout: 1
network.ping-timeout: 5
cluster.consistent-metadata: on
cluster.server-quorum-type: none
cluster.server-quorum-ratio: 51%
 
Volume Name: mstate
Type: Replicate
Volume ID: be326d11-3b1e-49ed-b0e5-9d4d9826ebb4
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 169.254.0.6:/mnt/bricks/mstate/brick
Brick2: 169.254.0.8:/mnt/bricks/mstate/brick
Options Reconfigured:
network.ping-timeout: 5
cluster.consistent-metadata: on
cluster.server-quorum-type: none
cluster.server-quorum-ratio: 51%
 
Volume Name: services
Type: Replicate
Volume ID: d58a21bc-817f-4427-8323-d489de9aca32
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 169.254.0.6:/mnt/bricks/services/brick
Brick2: 169.254.0.8:/mnt/bricks/services/brick
Options Reconfigured:
network.ping-timeout: 5
cluster.consistent-metadata: on
cluster.server-quorum-type: server
cluster.server-quorum-ratio: 51%
[root@SN-1(test) /root]
# gluster volume status
Status of volume: config
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 169.254.0.6:/mnt/bricks/config/brick		49156	Y	3696
Brick 169.254.0.8:/mnt/bricks/config/brick		N/A	Y	850
NFS Server on localhost					N/A	N	N/A
Self-heal Daemon on localhost				N/A	Y	3241
NFS Server on 169.254.0.6				N/A	N	N/A
Self-heal Daemon on 169.254.0.6				N/A	Y	3746
NFS Server on 169.254.0.7				N/A	N	N/A
Self-heal Daemon on 169.254.0.7				N/A	Y	10061
 
Task Status of Volume config
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: export
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 169.254.0.6:/mnt/bricks/export/brick		49153	Y	3706
Brick 169.254.0.8:/mnt/bricks/export/brick		N/A	Y	833
NFS Server on localhost					N/A	N	N/A
Self-heal Daemon on localhost				N/A	Y	3241
NFS Server on 169.254.0.6				N/A	N	N/A
Self-heal Daemon on 169.254.0.6				N/A	Y	3746
NFS Server on 169.254.0.7				N/A	N	N/A
Self-heal Daemon on 169.254.0.7				N/A	Y	10061
 
Task Status of Volume export
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: home
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 169.254.0.6:/mnt/bricks/home/brick		49155	Y	3701
Brick 169.254.0.8:/mnt/bricks/home/brick		N/A	Y	823
NFS Server on localhost					N/A	N	N/A
Self-heal Daemon on localhost				N/A	Y	3241
NFS Server on 169.254.0.6				N/A	N	N/A
Self-heal Daemon on 169.254.0.6				N/A	Y	3746
NFS Server on 169.254.0.7				N/A	N	N/A
Self-heal Daemon on 169.254.0.7				N/A	Y	10061
 
Task Status of Volume home
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: log
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 169.254.0.6:/mnt/bricks/log/brick			49152	Y	832
Brick 169.254.0.8:/mnt/bricks/log/brick			N/A	Y	840
NFS Server on localhost					N/A	N	N/A
Self-heal Daemon on localhost				N/A	Y	3241
NFS Server on 169.254.0.6				N/A	N	N/A
Self-heal Daemon on 169.254.0.6				N/A	Y	3746
NFS Server on 169.254.0.7				N/A	N	N/A
Self-heal Daemon on 169.254.0.7				N/A	Y	10061
 
Task Status of Volume log
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: mstate
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 169.254.0.6:/mnt/bricks/mstate/brick		49157	Y	857
Brick 169.254.0.8:/mnt/bricks/mstate/brick		N/A	Y	845
NFS Server on localhost					N/A	N	N/A
Self-heal Daemon on localhost				N/A	Y	3241
NFS Server on 169.254.0.6				N/A	N	N/A
Self-heal Daemon on 169.254.0.6				N/A	Y	3746
NFS Server on 169.254.0.7				N/A	N	N/A
Self-heal Daemon on 169.254.0.7				N/A	Y	10061
 
Task Status of Volume mstate
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: services
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 169.254.0.6:/mnt/bricks/services/brick		49154	Y	3711
Brick 169.254.0.8:/mnt/bricks/services/brick		N/A	Y	828
NFS Server on localhost					N/A	N	N/A
Self-heal Daemon on localhost				N/A	Y	3241
NFS Server on 169.254.0.6				N/A	N	N/A
Self-heal Daemon on 169.254.0.6				N/A	Y	3746
NFS Server on 169.254.0.7				N/A	N	N/A
Self-heal Daemon on 169.254.0.7				N/A	Y	10061
 
Task Status of Volume services
------------------------------------------------------------------------------
There are no active volume tasks


Taking the export volume as an example:

Status of volume: export
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 169.254.0.6:/mnt/bricks/export/brick		49153	Y	3706
Brick 169.254.0.8:/mnt/bricks/export/brick		N/A	Y	833
NFS Server on localhost					N/A	N	N/A
Self-heal Daemon on localhost				N/A	Y	3241
NFS Server on 169.254.0.6				N/A	N	N/A
Self-heal Daemon on 169.254.0.6				N/A	Y	3746
NFS Server on 169.254.0.7				N/A	N	N/A
Self-heal Daemon on 169.254.0.7				N/A	Y	10061
 
Task Status of Volume export

Comment 1 Soumya Koduri 2016-05-10 12:34:26 UTC
Do you see any errors/warnings in the brick logs?
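
(A hedged sketch of one way to check, assuming the default log location; brick log file names are derived from the brick path:)

# On 169.254.0.8, scan the export brick log for error/warning entries
grep -E ' [EW] \[' /var/log/glusterfs/bricks/mnt-bricks-export-brick.log | tail -n 50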

Comment 2 Atin Mukherjee 2016-05-11 05:11:02 UTC
It seems like you have provided logs from 169.254.0.6, whereas the faulty bricks were hosted on 169.254.0.8. Could you please attach the logs from that node as well?
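
(For completeness, a minimal way to collect those logs on 169.254.0.8, assuming the default log directory:)

# Bundle all glusterfs logs from the node hosting the faulty bricks
tar czf glusterfs-logs-169.254.0.8.tar.gz /var/log/glusterfs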

Comment 3 Atin Mukherjee 2016-06-21 12:00:04 UTC
As I have not heard from the reporter for a month now, I am closing this bug. Feel free to reopen it if you can provide the logs along with the reproducer.

Comment 4 Red Hat Bugzilla 2023-09-14 03:22:17 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

