Bug 985211 - [RFE] Dist-geo-rep : geo-rep syncs from all the replica bricks if there are multiple bricks of different sub-volumes on the same machine.
Status: CLOSED WONTFIX
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: 2.1
Hardware: x86_64 Linux
Priority: low
Severity: medium
Assigned To: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
Keywords: FutureFeature
 
Reported: 2013-07-17 02:28 EDT by Vijaykumar Koppad
Modified: 2014-12-24 01:24 EST
CC: 7 users

Doc Type: Enhancement
Last Closed: 2014-12-24 01:24:17 EST
Type: Bug


Attachments: None
Description Vijaykumar Koppad 2013-07-17 02:28:57 EDT
Description of problem: If a machine hosts multiple bricks belonging to different sub-volumes, geo-rep syncs from all the replica bricks instead of only one brick per replica pair.

Volume info:

Volume Name: master
Type: Distributed-Replicate
Volume ID: 45ad14e4-2724-42b8-ba75-f9037126d13c
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: machine1:/bricks/brick1
Brick2: machine2:/bricks/brick2
Brick3: machine3:/bricks/brick3
Brick4: machine4:/bricks/brick4
Brick5: machine2:/bricks/brick5
Brick6: machine3:/bricks/brick6
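
For context, gluster forms replica sets from consecutive bricks in the order they are listed, so with replica 2 the pairs above are (Brick1, Brick2), (Brick3, Brick4) and (Brick5, Brick6); machine2 and machine3 therefore each host bricks belonging to two different sub-volumes. A minimal Python sketch of that grouping (illustration only, not gluster code):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bricks = [
    "machine1:/bricks/brick1",
    "machine2:/bricks/brick2",
    "machine3:/bricks/brick3",
    "machine4:/bricks/brick4",
    "machine2:/bricks/brick5",
    "machine3:/bricks/brick6",
]
replica_count = 2

# Consecutive bricks form one replica set (one sub-volume).
replica_sets = [bricks[i:i + replica_count]
                for i in range(0, len(bricks), replica_count)]

for num, rset in enumerate(replica_sets, start=1):
    print("sub-volume %d: %s" % (num, rset))
# sub-volume 1: brick1 (machine1) + brick2 (machine2)
# sub-volume 2: brick3 (machine3) + brick4 (machine4)
# sub-volume 3: brick5 (machine2) + brick6 (machine3)
# => machine2 and machine3 each carry bricks of two different sub-volumes.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~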



Version-Release number of selected component (if applicable): 3.4.0.12rhs.beta4-1.el6rhs.x86_64


How reproducible: Happens every time


Steps to Reproduce:
1. Create and start a geo-rep session between the master (volume configured as shown above) and the slave.
2. Create some files on the master.
3. Check the geo-rep logs in DEBUG mode to see which bricks the files are synced from.

Actual results: Geo-rep syncs from all the replica bricks.


Expected results: geo-rep should sync from only one brick of each replica pair, with one brick acting as active and the other as passive.
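
To make the expectation concrete, here is a minimal sketch of one possible per-replica-set role assignment (purely illustrative, e.g. "first listed brick is active"; this is not the actual gsyncd logic):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Illustration of the expected behaviour: exactly one brick per replica
# set should be ACTIVE (sync to the slave), the rest PASSIVE. The rule
# used here ("first listed brick wins") is only an example, not the
# real gsyncd implementation.

replica_sets = [
    ["machine1:/bricks/brick1", "machine2:/bricks/brick2"],
    ["machine3:/bricks/brick3", "machine4:/bricks/brick4"],
    ["machine2:/bricks/brick5", "machine3:/bricks/brick6"],
]

def role(local_brick):
    for rset in replica_sets:
        if local_brick in rset:
            return "ACTIVE" if rset[0] == local_brick else "PASSIVE"
    raise ValueError("unknown brick: %s" % local_brick)

for rset in replica_sets:
    for brick in rset:
        print(brick, role(brick))
# Under such a scheme brick1, brick3 and brick5 would sync, while
# brick2, brick4 and brick6 stay passive -- instead of all six bricks
# syncing as reported in this bug.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~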


Additional info:
logs from machine1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[2013-07-17 11:38:57.405512] D [repce(/bricks/brick1):175:push] RepceClient: call 12918:140297598121728:1374041337.41 entry_ops([{'gfid': 'ec87093b-6ea0-41a5-9383-8bfcd659ff95', 'entry': '.gfid/00000000-0000-0000-0000-000000000001/file5', 'stat': {'gid': 0, 'uid': 0, 'mode': 33188}, 'op': 'CREATE'}, {'gfid': '99aa70c2-160c-4bee-bd14-12396fc843c7', 'entry': '.gfid/00000000-0000-0000-0000-000000000001/file6', 'stat': {'gid': 0, 'uid': 0, 'mode': 33188}, 'op': 'CREATE'}, {'gfid': 'a6176f29-af26-4e93-b14e-32d63244c957', 'entry': '.gfid/00000000-0000-0000-0000-000000000001/file8', 'stat': {'gid': 0, 'uid': 0, 'mode': 33188}, 'op': 'CREATE'}],) ...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

logs from machine2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[2013-07-17 11:39:03.993156] D [master(/bricks/brick2):896:process_change] _GMaster: entries: [{'gfid': 'ec87093b-6ea0-41a5-9383-8bfcd659ff95', 'entry': '.gfid/00000000-0000-0000-0000-000000000001/file5', 'stat': {'gid': 0, 'uid': 0, 'mode': 33188}, 'op': 'CREATE'}, {'gfid': '99aa70c2-160c-4bee-bd14-12396fc843c7', 'entry': '.gfid/00000000-0000-0000-0000-000000000001/file6', 'stat': {'gid': 0, 'uid': 0, 'mode': 33188}, 'op': 'CREATE'}, {'gfid': 'a6176f29-af26-4e93-b14e-32d63244c957', 'entry': '.gfid/00000000-0000-0000-0000-000000000001/file8', 'stat': {'gid': 0, 'uid': 0, 'mode': 33188}, 'op': 'CREATE'}]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As you can see, brick1 and brick2, which form a replica pair, are both syncing the same files.
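
One way to confirm this overlap without eyeballing the logs is to collect the gfids from the DEBUG entry lines of both bricks' geo-rep logs and intersect them; a rough sketch follows (the log file paths are placeholders, adjust them to the actual gsyncd log locations):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
import re

# Placeholder paths -- point these at the actual geo-rep brick logs.
LOG_BRICK1 = "/var/log/glusterfs/geo-replication/master/brick1.log"
LOG_BRICK2 = "/var/log/glusterfs/geo-replication/master/brick2.log"

GFID_RE = re.compile(r"'gfid': '([0-9a-f\-]{36})'")

def gfids(path):
    """Collect all gfids mentioned in the entry_ops/entries DEBUG lines."""
    with open(path) as logfile:
        return set(GFID_RE.findall(logfile.read()))

common = gfids(LOG_BRICK1) & gfids(LOG_BRICK2)
print("gfids synced from both bricks of the replica pair:", len(common))
# Any non-empty intersection is the duplicate syncing reported here.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~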
Comment 2 Amar Tumballi 2013-08-01 06:20:07 EDT
Reducing priority because the use case is not very practical (not an optimal setup).
Comment 4 Aravinda VK 2014-12-24 01:24:17 EST
Not a relevant deployment scenario to have bricks from the same replica group in the same node. Hence closing this.
