Bug 765431 (GLUSTER-3699)

Summary: [GlusterFS 3.3.0qa14] gluster volume info is getting into an infinite loop
Product: [Community] GlusterFS Reporter: Vijaykumar <vijaykumar>
Component: glusterd    Assignee: krishnan parthasarathi <kparthas>
Status: CLOSED CURRENTRELEASE QA Contact:
Severity: medium Docs Contact:
Priority: medium    
Version: pre-release    CC: amarts, gluster-bugs, grajaiya, nsathyan, vijay
Target Milestone: ---   
Target Release: ---   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: glusterfs-3.4.0 Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2013-07-24 17:59:52 UTC Type: ---
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: glusterfs-3.3.0qa45-1.el6.x86_64 Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 817967    

Description Vijaykumar 2011-10-05 10:49:03 UTC
Gluster volume create succeeds even though a volume with the same name is already present; after that, running gluster volume info goes into an infinite loop. It is not consistent, but it has happened many times.

Volume Name: replicate_volume
Type: Replicate
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.1.52:/home/vijay/export/exp1
Brick2: 192.168.1.52:/home/vijay/export/exp5
root@vostro:~/atf/shwetha/automation# gluster volume create replicate_volume replica 2 192.168.1.52:/home/vijay/export/exp1 192.168.1.52:/home/vijay/export/exp5
Multiple bricks of a replicate volume are present on the same server. This setup is not optimal.
Do you still want to continue creating the volume?  (y/n) y
Brick: 192.168.1.52:/home/vijay/export/exp1 already in use
root@vostro:~/atf/shwetha/automation# gluster volume create replicate_volume replica 2 192.168.1.52:/home/vijay/export/ex1 192.168.1.52:/home/vijay/export/ex5
Multiple bricks of a replicate volume are present on the same server. This setup is not optimal.
Do you still want to continue creating the volume?  (y/n) y
Creation of volume replicate_volume has been successful. Please start the volume to access data.
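
In short, the sequence that triggers it (paths as in the transcript above; this sketch assumes the original replicate_volume with bricks exp1/exp5 is already created and started, as shown at the top):

# replicate_volume already exists with bricks exp1/exp5 and is started.
# Recreating it with the same bricks is refused ("already in use"), but recreating it
# with different brick paths is accepted even though the volume name is already taken.
gluster volume create replicate_volume replica 2 \
    192.168.1.52:/home/vijay/export/ex1 192.168.1.52:/home/vijay/export/ex5
# answer 'y' to the "Multiple bricks ... on the same server" warning

# After the duplicate create "succeeds", this intermittently never returns:
gluster volume info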

Comment 1 Amar Tumballi 2012-02-27 12:43:17 UTC
Please try with master (3.3.0qa24+ versions) and see if it happens again.

Comment 2 Amar Tumballi 2012-05-28 10:44:30 UTC
NeedInfo has been pending for the last 3 months. Please confirm so we can close the bug. (Ref: bug 786006 should have mostly fixed this issue.)

Comment 3 Gowrishankar Rajaiyan 2012-05-31 07:46:56 UTC
Unable to create a volume with a duplicate name.

[root@dhcp201-221 ~]# gluster volume create test1 dhcp201-214.englab.pnq.redhat.com:/export/shanks/shanks4
Volume test1 already exists
[root@dhcp201-221 ~]# 


==> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log <==
[2012-05-31 15:45:46.636127] I [glusterd-volume-ops.c:83:glusterd_handle_create_volume] 0-glusterd: Received create volume req
[2012-05-31 15:45:46.636330] E [glusterd-volume-ops.c:116:glusterd_handle_create_volume] 0-glusterd: Volume test1 already exists
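
The rejection can also be confirmed from the glusterd log (log path as shown above; the exact message format may differ between releases):

# Retry the duplicate create and check that glusterd logged the refusal.
gluster volume create test1 dhcp201-214.englab.pnq.redhat.com:/export/shanks/shanks4
grep "already exists" /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -n 5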


Verified: glusterfs-3.3.0qa45-1.el6.x86_64