Description of problem:
When multiple bricks of the master volume are added to the same node, all of the added bricks become Active geo-rep workers.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.42rhs

How reproducible:
Happens every time.

Steps to Reproduce:
1. Create and start a geo-rep session between master and slave.
2. Add bricks to the existing nodes.
3. Check the geo-rep status.

Actual results:
All of the added bricks become Active.

Expected results:
Only one brick of each replica pair should be Active.

Additional info:

Volume Name: master
Type: Distributed-Replicate
Volume ID: e7a3341d-7e14-49b2-976b-25c8537438b7
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 10.70.43.0:/bricks/brick1
Brick2: 10.70.43.29:/bricks/brick2
Brick3: 10.70.43.40:/bricks/brick3
Brick4: 10.70.43.53:/bricks/brick4
Brick5: 10.70.43.0:/bricks/brick5
Brick6: 10.70.43.40:/bricks/brick6
Brick7: 10.70.43.29:/bricks/brick5
Brick8: 10.70.43.53:/bricks/brick6
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on

MASTER NODE              MASTER VOL   MASTER BRICK     SLAVE                  STATUS    CRAWL STATUS
----------------------------------------------------------------------------------------------------
redcell.blr.redhat.com   master       /bricks/brick1   10.70.43.174::slave    Active    Changelog Crawl
redcell.blr.redhat.com   master       /bricks/brick5   10.70.42.151::slave    Active    Changelog Crawl
redcloak.blr.redhat.com  master       /bricks/brick2   10.70.43.76::slave     Active    Changelog Crawl
redcloak.blr.redhat.com  master       /bricks/brick5   10.70.43.76::slave     Active    Changelog Crawl
redlake.blr.redhat.com   master       /bricks/brick3   10.70.43.135::slave    Active    Changelog Crawl
redlake.blr.redhat.com   master       /bricks/brick6   10.70.43.174::slave    Active    Changelog Crawl
redwood.blr.redhat.com   master       /bricks/brick4   10.70.42.151::slave    Passive   N/A
redwood.blr.redhat.com   master       /bricks/brick6   10.70.43.135::slave    Passive   N/A
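The reproduction steps above can be sketched with the gluster CLI roughly as follows. This is a hedged outline, not the reporter's exact commands; the slave host (10.70.43.174) is taken from the status output above, and the brick paths mirror the volume info, but the exact create/start invocations on this build may differ:

```shell
# 1. Create and start a geo-rep session between master and slave
#    (push-pem distributes the pem keys to the slave nodes).
gluster volume geo-replication master 10.70.43.174::slave create push-pem
gluster volume geo-replication master 10.70.43.174::slave start

# 2. Add a new replica pair, placing the new bricks on nodes that
#    already host bricks of the volume (the scenario in this report).
gluster volume add-brick master replica 2 \
    10.70.43.0:/bricks/brick5 10.70.43.40:/bricks/brick6

# 3. Check the geo-rep status; expected: one Active per replica pair,
#    observed: every added brick shows up as Active.
gluster volume geo-replication master 10.70.43.174::slave status
```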
Having multiple bricks from the same replica set on the same node is not a recommended deployment scenario, hence closing this.