Bug 1287992 - [GlusterD]Probing a node having standalone volume, should not happen
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: glusterd
mainline
x86_64 Linux
unspecified Severity medium
: ---
: ---
Assigned To: Atin Mukherjee
glusterd
: Triaged
Depends On: 1287951
Blocks: 1279681 1288963
Reported: 2015-12-03 04:22 EST by Atin Mukherjee
Modified: 2016-06-16 09:48 EDT (History)
7 users

See Also:
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1287951
: 1288963
Environment:
Last Closed: 2016-06-16 09:48:40 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Atin Mukherjee 2015-12-03 04:22:25 EST
+++ This bug was initially created as a clone of Bug #1287951 +++

Description of problem:
=======================
Performing a peer probe from a node that has a volume against another node that has its own standalone volume (with a different volume name) succeeds. This is incorrect: the two nodes are each individual clusters, and merging two clusters is not supported.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.5-8

How reproducible:
=================
Always


Steps to Reproduce:
===================
1. Set up a single-node cluster (node-1) with a volume (e.g. Distribute type)
2. Set up another single-node cluster (node-2) with a volume (e.g. Replica type)
3. From node-1, peer probe node-2; the peer probe succeeds

Actual results:
===============
Peer probe succeeds with a node that already has a volume (an unclean node)



Expected results:
=================
Peer probe should fail when the target node already has a volume


Additional info:

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-12-03 01:35:52 EST ---

This bug is automatically being proposed for the current z-stream release of Red Hat Gluster Storage 3 by setting the release flag 'rhgs‑3.1.z' to '?'. 

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Byreddy on 2015-12-03 01:42:35 EST ---

Console Log:
~~~~~~~~~~~~

On Node-1:
==========

[root@dhcp43-183 ~]# gluster peer status
Number of Peers: 0
[root@dhcp43-183 ~]# 
[root@dhcp43-183 ~]# gluster volume create Dis1 10.70.43.183:/bricks/brick0/br0 
volume create: Dis1: success: please start the volume to access data
[root@dhcp43-183 ~]# 
[root@dhcp43-183 ~]# 
[root@dhcp43-183 ~]# gluster volume start Dis1
volume start: Dis1: success
[root@dhcp43-183 ~]# 
[root@dhcp43-183 ~]# gluster volume info
 
Volume Name: Dis1
Type: Distribute
Volume ID: 5c304aec-7941-479c-b984-c52384e5ca50
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.70.43.183:/bricks/brick0/br0
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp43-183 ~]# 
[root@dhcp43-183 ~]# gluster volume status
Status of volume: Dis1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.183:/bricks/brick0/br0       49169     0          Y       918  
NFS Server on localhost                     2049      0          Y       941  
 
Task Status of Volume Dis1
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp43-183 ~]# 

On Node-2:
==========

[root@dhcp42-65 ~]# gluster peer status
Number of Peers: 0
[root@dhcp42-65 ~]# 
[root@dhcp42-65 ~]# 
[root@dhcp42-65 ~]# 
[root@dhcp42-65 ~]# gluster volume create Dis2 10.70.42.65:/bricks/brick0/br0 
volume create: Dis2: success: please start the volume to access data
[root@dhcp42-65 ~]# gluster volume start Dis2
volume start: Dis2: success
[root@dhcp42-65 ~]# 
[root@dhcp42-65 ~]# gluster volume info
 
Volume Name: Dis2
Type: Distribute
Volume ID: a36c28ed-a4e1-40ea-8199-43de7b209c61
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.70.42.65:/bricks/brick0/br0
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp42-65 ~]# 
[root@dhcp42-65 ~]# gluster volume status
Status of volume: Dis2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.42.65:/bricks/brick0/br0        49152     0          Y       7447 
NFS Server on localhost                     2049      0          Y       7469 
 
Task Status of Volume Dis2
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp42-65 ~]# 

Peer probing node-2 from node-1:
================================
[root@dhcp43-183 ~]# gluster peer probe 10.70.42.65
peer probe: success. 
[root@dhcp43-183 ~]# 
[root@dhcp43-183 ~]# gluster peer status
Number of Peers: 1

Hostname: 10.70.42.65
Uuid: 50a5a82f-08e9-440a-8209-592cb32e18c2
State: Peer in Cluster (Connected)
[root@dhcp43-183 ~]#
Comment 1 Vijay Bellur 2015-12-03 04:28:44 EST
REVIEW: http://review.gluster.org/12864 (glusterd: Disallow peer attach with volumes configured) posted (#1) for review on master by Atin Mukherjee (amukherj@redhat.com)
Comment 2 Vijay Bellur 2015-12-03 23:07:17 EST
REVIEW: http://review.gluster.org/12864 (glusterd: Disallow peer with existing volumes to be probed in cluster) posted (#2) for review on master by Atin Mukherjee (amukherj@redhat.com)
Comment 3 Vijay Bellur 2015-12-07 01:23:13 EST
COMMIT: http://review.gluster.org/12864 committed in master by Kaushal M (kaushal@redhat.com) 
------
commit b1d047caeacbcfac4222759af9d5936b7cfd1d7c
Author: Atin Mukherjee <amukherj@redhat.com>
Date:   Thu Dec 3 14:54:32 2015 +0530

    glusterd: Disallow peer with existing volumes to be probed in cluster
    
    As of now we do allow peer to get added in the trusted storage pool even if it
    has a volume configured. This is definitely not a supported configuration and
    can lead to issues as we never claim to support merging clusters. A single node
    running a standalone volume can be considered as a cluster.
    
    Change-Id: Id0cf42d6e5f20d6bfdb7ee19d860eee67c7c45be
    BUG: 1287992
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/12864
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Kaushal M <kaushal@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
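
The committed fix rejects the probe when the peer being attached already has volumes configured. A minimal sketch of that check (hypothetical names; the actual glusterd validation lives in the probe/handshake path and is not reproduced here):

```c
#include <stdio.h>
#include <string.h>

#define PROBE_OK        0
#define PROBE_REJECTED -1

/* Illustrative stand-in for the fix: before importing a probed peer
 * into the trusted storage pool, check whether that peer already
 * hosts any volumes. If it does, it is effectively its own cluster,
 * and merging clusters is not supported, so reject the probe with an
 * explanatory message. peer_volume_count and probe_validate are
 * hypothetical names, not the real glusterd symbols. */
static int
probe_validate (int peer_volume_count, char *errstr, size_t len)
{
        if (peer_volume_count > 0) {
                snprintf (errstr, len,
                          "Peer already has %d volume(s) configured; "
                          "merging two clusters is not supported",
                          peer_volume_count);
                return PROBE_REJECTED;
        }
        errstr[0] = '\0';
        return PROBE_OK;
}
```

With this check in place, the probe from the console log above (node-2 hosting volume Dis2) would fail instead of returning "peer probe: success."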
Comment 4 Niels de Vos 2016-06-16 09:48:40 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
