Created attachment 620613 [details]
server2 log

Description of problem:
- Able to create a volume with a brick on a non-existent server (the server does not appear in the 'gluster peer status' list and its IP is invalid).
- Able to start the volume; 'gluster volume info <vol-name>' shows the volume status as Started.
- 'gluster volume status <vol-name>' shows a PID for that brick's glusterfsd, but the process actually exists on a different server.
- Able to mount the volume.

Version-Release number of selected component (if applicable):
3.3.0.3rhs-32.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Verify the peers in the cluster from 10.70.35.81:

[root@Rhs1 glusterfs]# gluster p s
Number of Peers: 2

Hostname: 10.70.35.86
Uuid: eab6200c-4f12-459e-8d09-2acd87b10b5b
State: Peer in Cluster (Disconnected)

Hostname: 10.70.35.85
Uuid: bb035a88-8a41-4fea-9e93-caca9a096d0a
State: Peer in Cluster (Connected)

2. Create a gluster volume with a brick on a non-existent peer (invalid IP). Here '0.70.35.81' is an invalid IP and does not exist on the network:

[root@Rhs1 glusterfs]# gluster volume create bug1 10.70.35.81:/home/tt1 10.70.35.85:/t1t 0.70.35.81:/home/11t 10.70.35.85:/home/tt1
Creation of volume bug1 has been successful. Please start the volume to access data.

3. Start the volume and check its status:

[root@Rhs1 glusterfs]# gluster volume info bug1

Volume Name: bug1
Type: Distribute
Volume ID: 6bb53cd6-42b5-4c0e-94cb-c61e7f640809
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.35.81:/home/tt1
Brick2: 10.70.35.85:/t1t
Brick3: 0.70.35.81:/home/11t
Brick4: 10.70.35.85:/home/tt1

[root@Rhs1 glusterfs]# gluster volume status bug1
Status of volume: bug1
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.81:/home/tt1                             24014   Y       28838
Brick 10.70.35.85:/t1t                                  24217   Y       26215
Brick 0.70.35.81:/home/11t                              24218   Y       26221
Brick 10.70.35.85:/home/tt1                             24219   Y       26227
NFS Server on localhost                                 38467   Y       28856
NFS Server on 10.70.35.85                               38467   Y       26233

4. On both servers, list processes to find the PIDs.

Server 1:

[root@Rhs1 glusterfs]# ps -ef | egrep '28838|26215|26221|26227'
root     28838     1  2 22:01 ?        00:00:08 /usr/sbin/glusterfsd -s localhost --volfile-id bug1.10.70.35.81.home-tt1 -p /var/lib/glusterd/vols/bug1/run/10.70.35.81-home-tt1.pid -S /tmp/90e68588b557713565f566b47df52457.socket --brick-name /home/tt1 -l /var/log/glusterfs/bricks/home-tt1.log --xlator-option *-posix.glusterd-uuid=74702ea2-e56e-4663-b755-d60b8b1fa988 --brick-port 24014 --xlator-option bug1-server.listen-port=24014
root     28961 28670  0 22:06 pts/5    00:00:00 egrep 28838|26215|26221|26227

Server 2:

[root@Rhs2 ~]# ps -ef | egrep '28838|26215|26221|26227'
root     26215     1  0 22:01 ?        00:00:01 /usr/sbin/glusterfsd -s localhost --volfile-id bug1.10.70.35.85.t1t -p /var/lib/glusterd/vols/bug1/run/10.70.35.85-t1t.pid -S /tmp/cbabe8f57030cea7957b43f30cbe43b8.socket --brick-name /t1t -l /var/log/glusterfs/bricks/t1t.log --xlator-option *-posix.glusterd-uuid=bb035a88-8a41-4fea-9e93-caca9a096d0a --brick-port 24217 --xlator-option bug1-server.listen-port=24217
root     26221     1  1 22:01 ?        00:00:06 /usr/sbin/glusterfsd -s localhost --volfile-id bug1.0.70.35.81.home-11t -p /var/lib/glusterd/vols/bug1/run/0.70.35.81-home-11t.pid -S /tmp/2e513e9bebc035634500c20025ec4a8d.socket --brick-name /home/11t -l /var/log/glusterfs/bricks/home-11t.log --xlator-option *-posix.glusterd-uuid=bb035a88-8a41-4fea-9e93-caca9a096d0a --brick-port 24218 --xlator-option bug1-server.listen-port=24218
root     26227     1  0 22:01 ?        00:00:02 /usr/sbin/glusterfsd -s localhost --volfile-id bug1.10.70.35.85.home-tt1 -p /var/lib/glusterd/vols/bug1/run/10.70.35.85-home-tt1.pid -S /tmp/c5237fd7133364811607fc0ddd5b0ed4.socket --brick-name /home/tt1 -l /var/log/glusterfs/bricks/home-tt1.log --xlator-option *-posix.glusterd-uuid=bb035a88-8a41-4fea-9e93-caca9a096d0a --brick-port 24219 --xlator-option bug1-server.listen-port=24219
root     26358 26346  0 22:09 pts/1    00:00:00 egrep 28838|26215|26221|26227

### According to the status output:
Brick 0.70.35.81:/home/11t                              24218   Y       26221
'26221' is reported as the glusterfsd PID on the non-existent server, but that process is actually running on server 2.

5. Mount the volume from a client:

[root@client glusterfs]# mount -t glusterfs 10.70.35.81:/bug1 /mnt/bug1/
[root@client glusterfs]# mount | grep bug1
10.70.35.81:/bug1 on /mnt/bug1 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[root@client glusterfs]# cd /mnt/bug1/
[root@client bug1]# ls

Actual results:
Even though one of the bricks is on a non-existent peer (with an invalid IP), the gluster volume can be created, started, and mounted.

Expected results:
Volume creation should fail when a brick host is not a known peer.

Additional info:
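The missing validation described above can be sketched as follows. This is a hypothetical illustration in Python, not glusterd's actual code: before accepting 'volume create', each brick host should be either a local address or a peer in Connected state, which would reject both '0.70.35.81' (not a peer at all) and any brick on a disconnected peer.

```python
# Hypothetical sketch of brick-host validation (assumed helper names;
# not taken from the glusterd source).

def parse_peer_status(output):
    """Parse 'gluster peer status' text into {hostname: state}."""
    peers = {}
    host = None
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("Hostname:"):
            host = line.split(":", 1)[1].strip()
        elif line.startswith("State:") and host is not None:
            peers[host] = line.split(":", 1)[1].strip()
            host = None
    return peers

def invalid_brick_hosts(bricks, peer_output, local_addrs):
    """Return brick hosts that are neither local nor connected peers."""
    peers = parse_peer_status(peer_output)
    bad = []
    for brick in bricks:
        host = brick.split(":", 1)[0]
        if host in local_addrs:
            continue  # brick on the node running the command
        if peers.get(host) != "Peer in Cluster (Connected)":
            bad.append(host)  # unknown or disconnected peer
    return bad
```

With the peer list from step 1 and the bricks from step 2, `invalid_brick_hosts` would flag '0.70.35.81', so the 'volume create' in step 2 would be refused instead of succeeding.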
Created attachment 620614 [details] server1 log
Created attachment 620615 [details] mnt-log
CHANGE: http://review.gluster.org/#change,3865 fixes this issue (the third address, 0.*, is treated as a localhost address).
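To illustrate why '0.70.35.81' slips past a syntax-only check (a minimal sketch, not the fix itself): addresses in 0.0.0.0/8 are syntactically valid IPv4 and are historically reserved as "this host/this network" (RFC 1122), which is consistent with the brick being started locally on server 2.

```python
import socket

# "0.70.35.81" parses as a well-formed IPv4 address, so validation
# based only on address parsing does not reject it.
packed = socket.inet_aton("0.70.35.81")
print(packed == bytes([0, 70, 35, 81]))  # True
```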
*** This bug has been marked as a duplicate of bug 787627 ***