Bug 862494 - Able to create, start, and mount a gluster volume even though one of its bricks is on a non-existent peer (invalid IP)
Status: CLOSED DUPLICATE of bug 787627
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
x86_64 Linux
Priority: high Severity: medium
Assigned To: vpshastry
Depends On:
Reported: 2012-10-03 00:22 EDT by Rachana Patel
Modified: 2015-04-20 07:56 EDT (History)
3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2012-10-25 03:22:16 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
server2 log (322.86 KB, text/x-log)
2012-10-03 00:22 EDT, Rachana Patel
server1 log (318.28 KB, application/octet-stream)
2012-10-03 00:23 EDT, Rachana Patel
mnt-log (41.07 KB, text/x-log)
2012-10-03 00:23 EDT, Rachana Patel

Description Rachana Patel 2012-10-03 00:22:35 EDT
Created attachment 620613 [details]
server2 log

Description of problem:

- able to create a volume with a brick on a non-existent server ('gluster p s' does not list that server in the peer list, and its IP is invalid)
- able to start the volume; 'gluster volume info <vol-name>' shows the volume status as Started
- 'gluster volume status <vol-name>' shows a pid for that brick's glusterfsd, but the process actually exists on a different server
- able to mount the volume
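
Since glusterd in this build accepted a brick host it could not reach, a defensive pre-check on the admin side can be sketched as follows. `check_brick_host` is a hypothetical helper, not part of the gluster CLI; it only verifies that a brick host resolves before it is handed to 'gluster volume create':

```shell
# Hypothetical helper (not a gluster command): confirm a brick host
# resolves via the name service before using it in a volume definition.
check_brick_host() {
    if getent hosts "$1" > /dev/null 2>&1; then
        echo "$1: ok"
    else
        echo "$1: unresolvable"
    fi
}

check_brick_host localhost
```

Note this only catches unresolvable names; an address that resolves but is down would still need a reachability test (e.g. ping) on top.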

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Verify peers in cluster

[root@Rhs1 glusterfs]# gluster p s 
Number of Peers: 2 

Uuid: eab6200c-4f12-459e-8d09-2acd87b10b5b 
State: Peer in Cluster (Disconnected) 

Uuid: bb035a88-8a41-4fea-9e93-caca9a096d0a 
State: Peer in Cluster (Connected) 

2. Create a gluster volume with a brick on a non-existent peer (invalid IP). e.g. '' is an invalid IP and does not exist on the network

[root@Rhs1 glusterfs]# gluster volume create bug1 
Creation of volume bug1 has been successful. Please start the volume to access data. 

3. Start the volume and check its status

[root@Rhs1 glusterfs]# gluster volume info bug1 
Volume Name: bug1 
Type: Distribute 
Volume ID: 6bb53cd6-42b5-4c0e-94cb-c61e7f640809 
Status: Started 
Number of Bricks: 4 
Transport-type: tcp 

[root@Rhs1 glusterfs]# gluster volume status bug1 
Status of volume: bug1 
Gluster process						Port	Online	Pid 
Brick				24014	Y	28838 
Brick					24217	Y	26215 
Brick				24218	Y	26221 
Brick				24219	Y	26227 
NFS Server on localhost					38467	Y	28856 
NFS Server on				38467	Y	26233

4. On both servers, list processes to find the pids

server 1 :-
[root@Rhs1 glusterfs]# ps -ef | egrep '28838|26215|26221|26227' 

root     28838     1  2 22:01 ?        00:00:08 /usr/sbin/glusterfsd -s localhost --volfile-id bug1. -p /var/lib/glusterd/vols/bug1/run/ -S /tmp/90e68588b557713565f566b47df52457.socket --brick-name /home/tt1 -l /var/log/glusterfs/bricks/home-tt1.log --xlator-option *-posix.glusterd-uuid=74702ea2-e56e-4663-b755-d60b8b1fa988 --brick-port 24014 --xlator-option bug1-server.listen-port=24014 

root     28961 28670  0 22:06 pts/5    00:00:00 egrep 28838|26215|26221|26227 

server 2 :-
[root@Rhs2 ~]# ps -ef | egrep '28838|26215|26221|26227' 

root     26215     1  0 22:01 ?        00:00:01 /usr/sbin/glusterfsd -s localhost --volfile-id bug1. -p /var/lib/glusterd/vols/bug1/run/ -S /tmp/cbabe8f57030cea7957b43f30cbe43b8.socket --brick-name /t1t -l /var/log/glusterfs/bricks/t1t.log --xlator-option *-posix.glusterd-uuid=bb035a88-8a41-4fea-9e93-caca9a096d0a --brick-port 24217 --xlator-option bug1-server.listen-port=24217 

root     26221     1  1 22:01 ?        00:00:06 /usr/sbin/glusterfsd -s localhost --volfile-id bug1. -p /var/lib/glusterd/vols/bug1/run/ -S /tmp/2e513e9bebc035634500c20025ec4a8d.socket --brick-name /home/11t -l /var/log/glusterfs/bricks/home-11t.log --xlator-option *-posix.glusterd-uuid=bb035a88-8a41-4fea-9e93-caca9a096d0a --brick-port 24218 --xlator-option bug1-server.listen-port=24218 

root     26227     1  0 22:01 ?        00:00:02 /usr/sbin/glusterfsd -s localhost --volfile-id bug1. -p /var/lib/glusterd/vols/bug1/run/ -S /tmp/c5237fd7133364811607fc0ddd5b0ed4.socket --brick-name /home/tt1 -l /var/log/glusterfs/bricks/home-tt1.log --xlator-option *-posix.glusterd-uuid=bb035a88-8a41-4fea-9e93-caca9a096d0a --brick-port 24219 --xlator-option bug1-server.listen-port=24219 

root     26358 26346  0 22:09 pts/1    00:00:00 egrep 28838|26215|26221|26227 

According to the status output:
Brick				24218	Y	26221
'26221' is the glusterfsd pid for the brick on the non-existent server, but that process is actually present on server 2
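
The cross-check above can be scripted. `pid_is_local` is a hypothetical helper (plain shell, no gluster involved) that asks whether a pid reported by 'gluster volume status' actually exists on the current host:

```shell
# Hypothetical helper: does a reported pid exist on THIS host?
# 'kill -0' probes for process existence without sending a signal.
pid_is_local() {
    if kill -0 "$1" 2>/dev/null; then
        echo "pid $1 exists locally"
    else
        echo "pid $1 not found locally"
    fi
}

pid_is_local $$        # the current shell itself: always present
pid_is_local 4194305   # above the default Linux pid_max: never present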

5. From the client, mount the volume

[root@client glusterfs]# mount -t glusterfs /mnt/bug1/ 
[root@client glusterfs]# mount | grep bug1
 on /mnt/bug1 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072) 
[root@client glusterfs]# cd /mnt/bug1/ 
[root@client bug1]# ls 

Actual results:
Even though one of the bricks is on a non-existent peer (with an invalid IP), the volume can be created, started, and mounted.

Expected results:
Volume creation should fail when a brick host is not a valid, existing peer.

Additional info:
Comment 1 Rachana Patel 2012-10-03 00:23:30 EDT
Created attachment 620614 [details]
server1 log
Comment 2 Rachana Patel 2012-10-03 00:23:58 EDT
Created attachment 620615 [details]
Comment 4 vpshastry 2012-10-22 07:10:10 EDT
CHANGE: http://review.gluster.org/#change,3865 fixes this issue (the third address, 0.*, was being treated as a localhost address).
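
Per this comment, addresses in 0.* were being treated as localhost, which is how the unreachable brick slipped through validation. A minimal sketch of the kind of check the fix would add follows; `is_valid_brick_addr` and the exact rule are illustrative assumptions, not the actual patch (see the linked change for that):

```shell
# Illustrative only: reject 0.* addresses outright instead of silently
# mapping them to localhost, which is the root cause described above.
is_valid_brick_addr() {
    case "$1" in
        0.*) echo "$1: rejected (0.* is not a usable peer address)" ;;
        *)   echo "$1: accepted" ;;
    esac
}

is_valid_brick_addr 0.0.0.10
is_valid_brick_addr 10.70.1.5
```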
Comment 5 Vijay Bellur 2012-10-25 03:22:16 EDT

*** This bug has been marked as a duplicate of bug 787627 ***
