Bug 1028674 - Dist-geo-rep: All bricks added to the same node (multiple bricks in a node) become active geo-rep workers
Summary: Dist-geo-rep: All bricks added to the same node (multiple bricks in a node) become active geo-rep workers
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-11-09 11:25 UTC by Vijaykumar Koppad
Modified: 2014-12-24 09:25 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-12-24 09:25:38 UTC
Embargoed:



Description Vijaykumar Koppad 2013-11-09 11:25:26 UTC
Description of problem:
All bricks added to the same node (multiple bricks in a node) become active geo-rep workers.

Version-Release number of selected component (if applicable): glusterfs-3.4.0.42rhs


How reproducible: Happens every time.


Steps to Reproduce:
1. Create and start a geo-rep session between the master and slave volumes.
2. Add bricks to the existing master nodes.
3. Check the geo-rep status (see the command sketch below).
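
A rough sketch of the CLI behind these steps, assuming the volume names and slave host seen in the status output further down (10.70.43.174::slave); the exact hosts and paths on a given setup will differ:

# Step 1: create and start the geo-rep session (RHS 2.1 / glusterfs-3.4.x syntax)
gluster volume geo-replication master 10.70.43.174::slave create push-pem
gluster volume geo-replication master 10.70.43.174::slave start

# Step 2: add bricks to the existing nodes (see the add-brick sketch after the volume info below)

# Step 3: check which workers are Active/Passive
gluster volume geo-replication master 10.70.43.174::slave status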

Actual results: All the bricks added to existing nodes become Active in the geo-rep status.


Expected results: Only one brick of each replica pair should be Active; the other should be Passive.


Additional info:

 Volume Name: master
Type: Distributed-Replicate
Volume ID: e7a3341d-7e14-49b2-976b-25c8537438b7
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 10.70.43.0:/bricks/brick1
Brick2: 10.70.43.29:/bricks/brick2
Brick3: 10.70.43.40:/bricks/brick3
Brick4: 10.70.43.53:/bricks/brick4
Brick5: 10.70.43.0:/bricks/brick5
Brick6: 10.70.43.40:/bricks/brick6
Brick7: 10.70.43.29:/bricks/brick5
Brick8: 10.70.43.53:/bricks/brick6
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
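
For step 2, the second brick on each existing node (Brick5 through Brick8 above) would have been added with an add-brick call roughly like the following; the exact invocation is an assumption reconstructed from the brick list shown:

gluster volume add-brick master 10.70.43.0:/bricks/brick5 10.70.43.40:/bricks/brick6 10.70.43.29:/bricks/brick5 10.70.43.53:/bricks/brick6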


>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
MASTER NODE                MASTER VOL    MASTER BRICK      SLAVE                  STATUS     CRAWL STATUS           
--------------------------------------------------------------------------------------------------------------
redcell.blr.redhat.com     master        /bricks/brick1    10.70.43.174::slave    Active     Changelog Crawl        
redcell.blr.redhat.com     master        /bricks/brick5    10.70.42.151::slave    Active     Changelog Crawl        
redcloak.blr.redhat.com    master        /bricks/brick2    10.70.43.76::slave     Active     Changelog Crawl        
redcloak.blr.redhat.com    master        /bricks/brick5    10.70.43.76::slave     Active     Changelog Crawl        
redlake.blr.redhat.com     master        /bricks/brick3    10.70.43.135::slave    Active     Changelog Crawl        
redlake.blr.redhat.com     master        /bricks/brick6    10.70.43.174::slave    Active     Changelog Crawl        
redwood.blr.redhat.com     master        /bricks/brick4    10.70.42.151::slave    Passive    N/A                    
redwood.blr.redhat.com     master        /bricks/brick6    10.70.43.135::slave    Passive    N/A   
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Comment 3 Aravinda VK 2014-12-24 09:25:38 UTC
Having multiple bricks from the same replica set on the same node is not a recommended deployment scenario, hence closing this bug.

