Bug 1028674

Summary: Dist-geo-rep: All bricks added to the same node (multiple bricks in a node) become active geo-rep workers
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Vijaykumar Koppad <vkoppad>
Component: geo-replication
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED WONTFIX
QA Contact: storage-qa-internal <storage-qa-internal>
Severity: high
Docs Contact:
Priority: unspecified
Version: 2.1
CC: aavati, avishwan, csaba, david.macdonald
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-12-24 09:25:38 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Vijaykumar Koppad 2013-11-09 11:25:26 UTC
Description of problem:
All bricks added to the same node (multiple bricks in a node) become Active geo-rep workers, instead of only one brick per replica pair being Active.

Version-Release number of selected component (if applicable): glusterfs-3.4.0.42rhs


How reproducible: Happens every time.


Steps to Reproduce:
1. Create and start a geo-rep session between the master and slave volumes.
2. Add bricks to the existing nodes, so that a node holds more than one brick.
3. Check the geo-rep status (example commands below).
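
A minimal reproduction sketch with the gluster CLI. The slave host and the added brick paths are taken from the volume info and status output below; which bricks were part of the original volume is inferred, and exact option names (e.g. push-pem) may differ slightly between glusterfs releases:

# 1. Create and start the geo-rep session for master volume "master"
gluster volume geo-replication master 10.70.43.174::slave create push-pem
gluster volume geo-replication master 10.70.43.174::slave start

# 2. Add bricks so that nodes which already host a brick get a second one
gluster volume add-brick master replica 2 \
    10.70.43.0:/bricks/brick5 10.70.43.40:/bricks/brick6 \
    10.70.43.29:/bricks/brick5 10.70.43.53:/bricks/brick6

# 3. Check which workers are Active and which are Passive
gluster volume geo-replication master 10.70.43.174::slave status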

Actual results: All the added bricks become Active geo-rep workers.


Expected results: Only one brick of each replica pair should be Active; with this 4 x 2 volume that would mean 4 Active and 4 Passive workers.


Additional info:

 Volume Name: master
Type: Distributed-Replicate
Volume ID: e7a3341d-7e14-49b2-976b-25c8537438b7
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 10.70.43.0:/bricks/brick1
Brick2: 10.70.43.29:/bricks/brick2
Brick3: 10.70.43.40:/bricks/brick3
Brick4: 10.70.43.53:/bricks/brick4
Brick5: 10.70.43.0:/bricks/brick5
Brick6: 10.70.43.40:/bricks/brick6
Brick7: 10.70.43.29:/bricks/brick5
Brick8: 10.70.43.53:/bricks/brick6
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on


>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
MASTER NODE                MASTER VOL    MASTER BRICK      SLAVE                  STATUS     CRAWL STATUS           
--------------------------------------------------------------------------------------------------------------
redcell.blr.redhat.com     master        /bricks/brick1    10.70.43.174::slave    Active     Changelog Crawl        
redcell.blr.redhat.com     master        /bricks/brick5    10.70.42.151::slave    Active     Changelog Crawl        
redcloak.blr.redhat.com    master        /bricks/brick2    10.70.43.76::slave     Active     Changelog Crawl        
redcloak.blr.redhat.com    master        /bricks/brick5    10.70.43.76::slave     Active     Changelog Crawl        
redlake.blr.redhat.com     master        /bricks/brick3    10.70.43.135::slave    Active     Changelog Crawl        
redlake.blr.redhat.com     master        /bricks/brick6    10.70.43.174::slave    Active     Changelog Crawl        
redwood.blr.redhat.com     master        /bricks/brick4    10.70.42.151::slave    Passive    N/A                    
redwood.blr.redhat.com     master        /bricks/brick6    10.70.43.135::slave    Passive    N/A   
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Comment 3 Aravinda VK 2014-12-24 09:25:38 UTC
It is not a recommended deployment scenario to have multiple bricks from the same replica set on the same node. Hence closing this bug.
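
For illustration only, the layout difference the comment points at, sketched with hypothetical brick paths (brick9/brick10 are not from this report); with replica 2, consecutive bricks in an add-brick command form a replica pair:

# Not recommended: both bricks of a new replica pair land on the same node
gluster volume add-brick master replica 2 \
    10.70.43.0:/bricks/brick9 10.70.43.0:/bricks/brick10

# Preferred: each new replica pair spans two different nodes
gluster volume add-brick master replica 2 \
    10.70.43.0:/bricks/brick9 10.70.43.29:/bricks/brick10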