Bug 865396 - "gluster volume status <volume_name>" command execution outputs the port of brick which is online as "N/A"
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Assigned To: Anand Nekkunti
Duplicates: 971680
Depends On:
Reported: 2012-10-11 06:37 EDT by spandura
Modified: 2016-01-03 23:50 EST
CC List: 15 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2015-04-16 10:31:13 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description spandura 2012-10-11 06:37:38 EDT
Description of problem:
When a storage node hosting a brick comes back online after a reboot, the Port column in the output of the "gluster volume status <volume_name>" command shows "N/A" for that brick even though the brick process is online.

Version-Release number of selected component (if applicable):
[10/11/12 - 15:53:58 root@rhs-client7 ~]# rpm -qa | grep gluster

[10/11/12 - 15:54:07 root@rhs-client7 ~]# gluster --version
glusterfs 3.3.0rhsvirt1 built on Oct  8 2012 15:23:00

Steps to Reproduce:
1. Create a pure replicate volume (1x2) with 2 servers and 1 brick on each server. This is the storage for the VMs. Start the volume.

2. Set up KVM to use the volume as the VM store.

3. Create 2 virtual machines (vm1 and vm2) and start them.

4. Power off server1 (one server of the replicate pair).

5. Perform operations on the VMs (rhn_register, yum update, reboot the VMs after the yum update).

6. Power on server1.

7. Execute "gluster volume status <volume_name>".
Actual results:
[10/11/12 - 15:30:23 root@rhs-client7 ~]# gluster v status replicate-rhevh2
Status of volume: replicate-rhevh2
Gluster process						Port	Online	Pid
Brick rhs-client6.lab.eng.blr.redhat.com:/replicate-disk	N/A	Y	2937
Brick rhs-client7.lab.eng.blr.redhat.com:/replicate-disk	24013	Y	32385
NFS Server on localhost					38467	Y	10740
Self-heal Daemon on localhost				N/A	Y	10746
NFS Server on rhs-client6.lab.eng.blr.redhat.com	38467	Y	2963
Self-heal Daemon on rhs-client6.lab.eng.blr.redhat.com	N/A	Y	2972
NFS Server on				38467	Y	2406
Self-heal Daemon on				N/A	Y	2412
NFS Server on rhs-client8.lab.eng.blr.redhat.com	38467	Y	7636
Self-heal Daemon on rhs-client8.lab.eng.blr.redhat.com	N/A	Y	7642

[10/11/12 - 15:57:35 root@rhs-client7 ~]# ps -ef | grep glusterfsd

root     12949  8689  0 15:57 pts/0    00:00:00 grep glusterfsd

root     32385     1 10 Oct10 ?        02:23:53 /usr/sbin/glusterfsd -s localhost --volfile-id replicate-rhevh2.rhs-client7.lab.eng.blr.redhat.com.replicate-disk -p /var/lib/glusterd/vols/replicate-rhevh2/run/rhs-client7.lab.eng.blr.redhat.com-replicate-disk.pid -S /tmp/34ce168cca1ffd0f64c69b974431b3a4.socket --brick-name /replicate-disk -l /var/log/glusterfs/bricks/replicate-disk.log --xlator-option *-posix.glusterd-uuid=b9d6cb21-051f-4791-9476-734856e77fbf --brick-port 24013 --xlator-option replicate-rhevh2-server.listen-port=24013

[10/11/12 - 16:00:03 root@rhs-client7 ~]# netstat -alnp | grep 24013
tcp        0      0              ESTABLISHED 10740/glusterfs     
tcp        0      0              ESTABLISHED 14083/glusterfs     
tcp        0      0              ESTABLISHED 10746/glusterfs     
tcp        0      0 :::24013                    :::*                        LISTEN      32385/glusterfsd    
tcp        0      0 ::ffff:    ::ffff:      ESTABLISHED 32385/glusterfsd    
tcp        0      0 ::ffff:    ::ffff:      ESTABLISHED 32385/glusterfsd    
tcp        0      0 ::ffff:    ::ffff:      ESTABLISHED 32385/glusterfsd    
tcp        0      0 ::ffff:    ::ffff:      ESTABLISHED 32385/glusterfsd    
tcp        0      0 ::ffff:    ::ffff:      ESTABLISHED 32385/glusterfsd    
tcp        0      0 ::ffff:    ::ffff:      ESTABLISHED 32385/glusterfsd    
tcp        0      0 ::ffff:    ::ffff:      ESTABLISHED 32385/glusterfsd    
tcp        0      0 ::ffff:    ::ffff:      ESTABLISHED 32385/glusterfsd    
tcp        0      0 ::ffff:    ::ffff:      ESTABLISHED 32385/glusterfsd    
tcp        0      0 ::ffff:    ::ffff:      ESTABLISHED 32385/glusterfsd    
tcp        0      0 ::ffff:    ::ffff:      ESTABLISHED 32385/glusterfsd    
tcp        0      0 ::ffff:    ::ffff:      ESTABLISHED 32385/glusterfsd    
tcp        0      0 ::ffff:    ::ffff:      ESTABLISHED 32385/glusterfsd 

Expected results:
The output of the "volume status" command should show the brick's port whenever the brick process is running.
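The "N/A" in the Port column appears to be just how a stored port of 0 (i.e. unknown) is rendered. A tiny illustrative sketch of that display rule follows; this is an assumption about the CLI's formatting, not code from glusterd:

```shell
# render_port PORT: mimic how the status table prints a brick port;
# a stored value of 0 (meaning "unknown") is shown as N/A.
render_port() {
    if [ "$1" -eq 0 ] 2>/dev/null; then
        echo "N/A"
    else
        echo "$1"
    fi
}
```

Under that rule, a brick whose stored port was reset to 0 prints N/A even while its process is running, which matches the output above.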

Additional info:

[10/11/12 - 15:58:34 root@rhs-client7 ~]# gluster volume info replicate-rhevh2
Volume Name: replicate-rhevh2
Type: Replicate
Volume ID: 1e697968-2e90-4589-8225-f596fee8af97
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: rhs-client6.lab.eng.blr.redhat.com:/replicate-disk
Brick2: rhs-client7.lab.eng.blr.redhat.com:/replicate-disk
Options Reconfigured:
storage.linux-aio: disable
cluster.eager-lock: enable
performance.read-ahead: disable
performance.stat-prefetch: disable
performance.io-cache: disable
performance.quick-read: disable
Comment 3 Amar Tumballi 2012-10-12 03:37:28 EDT
happening because of bug 865693
Comment 4 Vijay Bellur 2012-10-19 12:54:33 EDT
Not specific to 2.0+. Removing tag.
Comment 5 Pranith Kumar K 2012-10-31 05:59:58 EDT
Steps to re-create the issue:
1) gluster volume create vol hostA:/brickA hostB:/brickB
2) gluster volume start vol
3) poweroff hostB
4) gluster volume set vol client-log-level DEBUG - on hostA
5) poweron hostB
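The steps above can be collected into one script. This is only a sketch: hostA/hostB and the brick paths are placeholders, and stopping/starting glusterd over ssh stands in for the power cycle in steps 3 and 5.

```shell
# Reproducer sketch (run from hostA). hostA/hostB and /brickA, /brickB
# are placeholders; restarting glusterd approximates the power cycle.
reproduce_stale_port() {
    gluster volume create vol hostA:/brickA hostB:/brickB   # step 1
    gluster volume start vol                                # step 2
    ssh hostB 'service glusterd stop'                       # step 3: hostB "goes down"
    gluster volume set vol client-log-level DEBUG           # step 4: bumps the volume version on hostA
    ssh hostB 'service glusterd start'                      # step 5: hostB comes back
    gluster volume status vol                               # brickB's Port column now reads N/A
}
```

Running reproduce_stale_port on a live two-node cluster should leave brickB online with its Port shown as N/A.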

Reason for the issue:
When "gluster volume start" happens in step 2, the corresponding brick's store is updated with the port information that will be used for future restarts of the brick.

Because hostB is powered off when step 4 happens, the version of the volume on hostA becomes greater than the version on hostB. When glusterd comes up on hostB, the complete store of the volume is updated to make it equal to the one on hostA, so brickB's port information is overwritten with 0. This results in "volume status" showing the brick as online but its port as N/A.

[root@kernel-compile-2 ~]# gluster volume status
Status of volume: vol
Gluster process						Port	Online	Pid
Brick				24009	Y	1075
Brick				N/A	Y	1024
NFS Server on localhost					38467	Y	1050
NFS Server on				38467	Y	1510
[root@kernel-compile-2 ~]#
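The overwritten port can also be confirmed in glusterd's on-disk store. A minimal sketch, assuming the default store path /var/lib/glusterd and a listen-port key in the brick info files (both the path and the key name are assumptions and may vary across releases):

```shell
# check_brick_port FILE: print the port recorded in a glusterd brick
# store file; a stored value of 0 is what "volume status" shows as N/A.
check_brick_port() {
    grep '^listen-port=' "$1" | cut -d= -f2
}

# Typical use on a storage node (paths assumed, not taken from this report):
# for f in /var/lib/glusterd/vols/vol/bricks/*; do
#     echo "$f: $(check_brick_port "$f")"
# done
```

After the reproducer, brickB's file on hostB would be expected to show listen-port=0 while the brick process itself is still running.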
Comment 6 Gowrishankar Rajaiyan 2013-04-26 08:02:52 EDT
This is not particular to RHEV-RHS, hence updating the summary.
Comment 7 spandura 2014-07-02 07:34:52 EDT
This issue still occurs on "glusterfs built on Jun 23 2014 10:33:07"
Comment 8 SATHEESARAN 2014-11-27 02:35:17 EST
This issue still occurs with RHS 3.0.3 nightly builds. Recently tested the same with glusterfs-

There is no functional loss; the glusterfsd (brick) processes are running, but 'gluster volume status' shows the brick's port as N/A.

This is a long-standing bug; can we get enough attention at least for future releases (RHS 3.1?).
Comment 9 SATHEESARAN 2014-11-27 02:42:06 EST
When the node that went down and came back up was managed using RHSC or RHEVM, the brick process on that node, with its port marked as N/A, was shown as DOWN.

This misleads the admin, since the brick processes are still running on that node. Raising the severity to HIGH in this case.
Comment 11 Anand Nekkunti 2015-03-24 02:45:34 EDT
It is not reproducible with the latest code. Please check and close this bug if it is not reproducible.
Comment 13 Arun Vasudevan 2015-04-15 12:09:49 EDT
Moving Need info to Bipin Kunal.

Hi Bipin,

Can you check and let us know if the issue is seen in 3.0.4?


Comment 14 Bipin Kunal 2015-04-16 10:24:13 EDT
I was not able to reproduce the issue with 3.0.4 with the reproducer step in C#5.

We can close this bug marking "Fixed in 3.0.4".

Bipin Kunal
Comment 15 Vivek Agarwal 2015-04-16 10:31:13 EDT
Closing per the last comment
Comment 16 Vivek Agarwal 2015-04-20 02:02:41 EDT
*** Bug 971680 has been marked as a duplicate of this bug. ***
