Bug 1235976 - glusterd: Multiple UUIDs of the same peer causing a locking issue in the cluster.
Summary: glusterd: Multiple UUIDs of the same peer causing a locking issue in the cluster.
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: core
Version: unspecified
Hardware: Unspecified
OS: Linux
Target Milestone: ---
Assignee: Anand Nekkunti
QA Contact: Amit Chaurasia
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-06-26 09:03 UTC by Amit Chaurasia
Modified: 2018-01-15 07:10 UTC (History)
7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-07-17 07:10:28 UTC
Embargoed:


Attachments: none

Description Amit Chaurasia 2015-06-26 09:03:29 UTC
Description of problem:
One peer is registered with two UUIDs on another node, causing a locking issue.

[root@dht-rhs-21 ~]# gluster peer status
Number of Peers: 2

Hostname: 10.70.47.113
Uuid: 7451bd31-de4f-4900-b68c-7bf22e6b479b
State: Peer in Cluster (Connected)

Hostname: 10.70.47.113
Uuid: 7451bd31-de4f-4900-b68c-7bf22e6b479b
State: Peer in Cluster (Connected)


Version-Release number of selected component (if applicable):
glusterfs-3.7.1-4.el6rhs.x86_64 (from comment 2)


How reproducible:
Happened once.


Steps to Reproduce:
1. Added a set of bricks in a loop with a 30-second interval (a hedged sketch of this loop follows the list).
2. A few bricks failed to commit, leaving some bricks unavailable from the second node.
3. Deleted the volume.
4. Rebooted the systems.
5. Ran iptables -F.
6. Created a new filesystem on the bricks and recreated the volume.
7. While setting some cluster options, got a locking error.
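
The step-1 loop looked roughly like the sketch below. This is a hedged reconstruction: the volume name "testvol" and the exact brick paths are assumptions, not taken from this report; only the brick range (brick0-brick24) and the 30-second interval come from the data above.

# Hedged sketch of the step-1 add-brick loop (volume name and brick layout assumed)
for i in $(seq 0 24); do
    gluster volume add-brick testvol \
        10.70.47.101:/bricks/brick$i/data \
        10.70.47.113:/bricks/brick$i/data
    sleep 30
done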

Actual results:
1. Setting cluster options failed due to a locking issue.
2. One of the nodes has two UUIDs registered for the same peer.

Expected results:
1. There should be one UUID per peer (see the duplicate check sketched below).
2. No locking issue should block cluster operations.
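
A quick way to verify expectation 1 is sketched below; this is not part of the product, just a hedged check that prints any peer UUID appearing more than once in the peer status output.

# Hedged check: print UUIDs listed more than once by `gluster peer status`
gluster peer status | awk '/^Uuid:/ {print $2}' | sort | uniq -d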

Additional info:
1. I left the cluster overnight, and when I set the cluster options again, the operation succeeded without any additional steps.

Comment 2 Amit Chaurasia 2015-06-26 09:27:03 UTC
Gluster nodes:

dht-rhs-21 : 10.70.47.101

dht-rhs-22 : 10.70.47.113


Clients : dht-rhs-21 & dht-rhs-22 & my laptop.


[root@dht-rhs-21 ~]# rpm -qa | grep gluster
glusterfs-cli-3.7.1-4.el6rhs.x86_64
glusterfs-libs-3.7.1-4.el6rhs.x86_64
glusterfs-fuse-3.7.1-4.el6rhs.x86_64
glusterfs-3.7.1-4.el6rhs.x86_64
glusterfs-server-3.7.1-4.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-4.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-4.el6rhs.x86_64
glusterfs-api-3.7.1-4.el6rhs.x86_64
[root@dht-rhs-21 ~]# 

[amit@amit-lappy ~]$ rpm -qa  |grep -i gluster
glusterfs-libs-3.5.3-1.fc21.x86_64
glusterfs-fuse-3.5.3-1.fc21.x86_64
glusterfs-3.5.3-1.fc21.x86_64
glusterfs-api-3.5.3-1.fc21.x86_64
[amit@amit-lappy ~]$ 

[root@dht-rhs-21 ~]# mount && df -h
/dev/vda2 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/vda1 on /boot type ext4 (rw)
/dev/mapper/snap_vg0-Lvol1 on /bricks/brick0 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol2 on /bricks/brick1 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol3 on /bricks/brick2 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol4 on /bricks/brick3 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol5 on /bricks/brick4 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol6 on /bricks/brick5 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol7 on /bricks/brick6 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol8 on /bricks/brick7 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol9 on /bricks/brick8 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol10 on /bricks/brick9 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol11 on /bricks/brick10 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol12 on /bricks/brick11 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol13 on /bricks/brick12 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol14 on /bricks/brick13 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol15 on /bricks/brick14 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol16 on /bricks/brick15 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol17 on /bricks/brick16 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol18 on /bricks/brick17 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol19 on /bricks/brick18 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol20 on /bricks/brick19 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol21 on /bricks/brick20 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol22 on /bricks/brick21 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol23 on /bricks/brick22 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol24 on /bricks/brick23 type xfs (rw,noatime,nodiratime,inode64)
/dev/mapper/snap_vg0-Lvol25 on /bricks/brick24 type xfs (rw,noatime,nodiratime,inode64)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda2              34G  3.3G   29G  11% /
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/vda1             477M   99M  353M  22% /boot
/dev/mapper/snap_vg0-Lvol1
                       10G   33M   10G   1% /bricks/brick0
/dev/mapper/snap_vg0-Lvol2
                       10G   33M   10G   1% /bricks/brick1
/dev/mapper/snap_vg0-Lvol3
                       10G   33M   10G   1% /bricks/brick2
/dev/mapper/snap_vg0-Lvol4
                       10G   33M   10G   1% /bricks/brick3
/dev/mapper/snap_vg0-Lvol5
                       10G   33M   10G   1% /bricks/brick4
/dev/mapper/snap_vg0-Lvol6
                       10G   33M   10G   1% /bricks/brick5
/dev/mapper/snap_vg0-Lvol7
                       10G   33M   10G   1% /bricks/brick6
/dev/mapper/snap_vg0-Lvol8
                       10G   33M   10G   1% /bricks/brick7
/dev/mapper/snap_vg0-Lvol9
                       10G   33M   10G   1% /bricks/brick8
/dev/mapper/snap_vg0-Lvol10
                       10G   33M   10G   1% /bricks/brick9
/dev/mapper/snap_vg0-Lvol11
                       10G   33M   10G   1% /bricks/brick10
/dev/mapper/snap_vg0-Lvol12
                       10G   33M   10G   1% /bricks/brick11
/dev/mapper/snap_vg0-Lvol13
                       10G   33M   10G   1% /bricks/brick12
/dev/mapper/snap_vg0-Lvol14
                       10G   33M   10G   1% /bricks/brick13
/dev/mapper/snap_vg0-Lvol15
                       10G   33M   10G   1% /bricks/brick14
/dev/mapper/snap_vg0-Lvol16
                       10G   33M   10G   1% /bricks/brick15
/dev/mapper/snap_vg0-Lvol17
                       10G   33M   10G   1% /bricks/brick16
/dev/mapper/snap_vg0-Lvol18
                       10G   33M   10G   1% /bricks/brick17
/dev/mapper/snap_vg0-Lvol19
                       10G   33M   10G   1% /bricks/brick18
/dev/mapper/snap_vg0-Lvol20
                       10G   33M   10G   1% /bricks/brick19
/dev/mapper/snap_vg0-Lvol21
                       10G   33M   10G   1% /bricks/brick20
/dev/mapper/snap_vg0-Lvol22
                       10G   33M   10G   1% /bricks/brick21
/dev/mapper/snap_vg0-Lvol23
                       10G   33M   10G   1% /bricks/brick22
/dev/mapper/snap_vg0-Lvol24
                       10G   33M   10G   1% /bricks/brick23
/dev/mapper/snap_vg0-Lvol25
                       10G   33M   10G   1% /bricks/brick24
[root@dht-rhs-21 ~]# 



[root@dht-rhs-21 ~]# gluster peer status
Number of Peers: 2

Hostname: 10.70.47.113
Uuid: 7451bd31-de4f-4900-b68c-7bf22e6b479b
State: Peer in Cluster (Connected)

Hostname: 10.70.47.113
Uuid: 7451bd31-de4f-4900-b68c-7bf22e6b479b
State: Peer in Cluster (Connected)
[root@dht-rhs-21 ~]# 
[root@dht-rhs-21 ~]# 

[root@dht-rhs-21 ~]# cat /var/lib/glusterd/glusterd.info 
UUID=bc9a896d-c207-4c98-aa99-a117fe3aeb47
operating-version=30702
[root@dht-rhs-21 ~]# 
[root@dht-rhs-21 ~]# 

[root@dht-rhs-22 ~]# cat  /var/lib/glusterd/glusterd.info 
UUID=7451bd31-de4f-4900-b68c-7bf22e6b479b
operating-version=30702
[root@dht-rhs-22 ~]#

[root@dht-rhs-22 ~]# ls -ltrh /var/lib/glusterd/peers/
total 4.0K
-rw-------. 1 root root 73 Jun 25 21:40 bc9a896d-c207-4c98-aa99-a117fe3aeb47
[root@dht-rhs-22 ~]# 

[root@dht-rhs-21 ~]# ls -ltrh /var/lib/glusterd/peers/
total 8.0K
-rw-------. 1 root root 73 Jun 25 18:10 7451bd31-de4f-4900-b68c-7bf22e6b479b
-rw-------. 1 root root 73 Jun 25 21:40 0f67cb15-935f-470f-99c7-a16ebf3b774a
[root@dht-rhs-21 ~]#
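
For reference, a hedged diagnostic sketch (not from the report) that dumps the local peer store on dht-rhs-21 and compares it with dht-rhs-22's own identity; passwordless ssh between the nodes is assumed.

# Dump each peer-store entry on the local node
for f in /var/lib/glusterd/peers/*; do
    echo "== $f =="
    cat "$f"
done
# Fetch the remote node's own UUID for comparison
ssh root@10.70.47.113 'grep ^UUID= /var/lib/glusterd/glusterd.info'
# A peer file that does not correspond to the UUID reported by the remote node
# would be a stale entry, matching the extra 0f67cb15-... file listed above.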

Comment 3 Amit Chaurasia 2015-06-26 09:32:42 UTC
Logs are copied at http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1235976/

Comment 5 Atin Mukherjee 2015-07-17 07:10:28 UTC
Since this bug is no longer reproducible, I am closing it. Feel free to reopen if you hit it again.

