Bug 1028674 - Dist-geo-rep: All bricks added to the same node (multiple bricks in a node) become active geo-rep nodes
Status: CLOSED WONTFIX
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: 2.1
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Assigned To: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
Depends On:
Blocks:
 
Reported: 2013-11-09 06:25 EST by Vijaykumar Koppad
Modified: 2014-12-24 04:25 EST
CC List: 4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-12-24 04:25:38 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Vijaykumar Koppad 2013-11-09 06:25:26 EST
Description of problem: 
All bricks added to the same node (multiple bricks in a node) become active geo-rep nodes.

Version-Release number of selected component (if applicable): glusterfs-3.4.0.42rhs


How reproducible: happens every time.


Steps to Reproduce:
1. Create and start a geo-rep session between the master and slave volumes.
2. Add bricks to the existing nodes (a command sketch follows this list).
3. Check the geo-rep status.
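
The following is a minimal command sketch of these steps, based on the volume info in "Additional info" below. The slave host used for the session (10.70.43.174) and the push-pem option are assumptions; adjust to the actual environment.

# 1. Create and start the geo-rep session for volume "master"
gluster volume geo-replication master 10.70.43.174::slave create push-pem
gluster volume geo-replication master 10.70.43.174::slave start

# 2. Add one brick per node to the existing distributed-replicate volume
gluster volume add-brick master \
    10.70.43.0:/bricks/brick5 10.70.43.40:/bricks/brick6 \
    10.70.43.29:/bricks/brick5 10.70.43.53:/bricks/brick6

# 3. Check which bricks are Active/Passive
gluster volume geo-replication master 10.70.43.174::slave status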

Actual results: all the added bricks become active in the geo-rep session.


Expected results: only one brick of each replica pair should be active.


Additional info:

 Volume Name: master
Type: Distributed-Replicate
Volume ID: e7a3341d-7e14-49b2-976b-25c8537438b7
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 10.70.43.0:/bricks/brick1
Brick2: 10.70.43.29:/bricks/brick2
Brick3: 10.70.43.40:/bricks/brick3
Brick4: 10.70.43.53:/bricks/brick4
Brick5: 10.70.43.0:/bricks/brick5
Brick6: 10.70.43.40:/bricks/brick6
Brick7: 10.70.43.29:/bricks/brick5
Brick8: 10.70.43.53:/bricks/brick6
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on


>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
MASTER NODE                MASTER VOL    MASTER BRICK      SLAVE                  STATUS     CRAWL STATUS           
--------------------------------------------------------------------------------------------------------------
redcell.blr.redhat.com     master        /bricks/brick1    10.70.43.174::slave    Active     Changelog Crawl        
redcell.blr.redhat.com     master        /bricks/brick5    10.70.42.151::slave    Active     Changelog Crawl        
redcloak.blr.redhat.com    master        /bricks/brick2    10.70.43.76::slave     Active     Changelog Crawl        
redcloak.blr.redhat.com    master        /bricks/brick5    10.70.43.76::slave     Active     Changelog Crawl        
redlake.blr.redhat.com     master        /bricks/brick3    10.70.43.135::slave    Active     Changelog Crawl        
redlake.blr.redhat.com     master        /bricks/brick6    10.70.43.174::slave    Active     Changelog Crawl        
redwood.blr.redhat.com     master        /bricks/brick4    10.70.42.151::slave    Passive    N/A                    
redwood.blr.redhat.com     master        /bricks/brick6    10.70.43.135::slave    Passive    N/A   
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Comment 3 Aravinda VK 2014-12-24 04:25:38 EST
It is not a recommended deployment scenario to have multiple bricks from the same replica set on the same node. Hence closing this.
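
For reference, a layout consistent with this recommendation keeps the two bricks of each newly added replica pair on different nodes, for example (brick paths here are illustrative only):

gluster volume add-brick master \
    10.70.43.0:/bricks/brick7 10.70.43.29:/bricks/brick7 \
    10.70.43.40:/bricks/brick8 10.70.43.53:/bricks/brick8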
