Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
The FDP team is no longer accepting new bugs in Bugzilla. Please report your issues under FDP project in Jira. Thanks.

Bug 1941632

Summary: [RFE] ovsdb-server: Support replication of a _Server database
Product: Red Hat Enterprise Linux Fast Datapath
Component: ovsdb
Version: FDP 21.C
Status: CLOSED WONTFIX
Severity: high
Priority: high
Reporter: Ilya Maximets <i.maximets>
Assignee: Ilya Maximets <i.maximets>
QA Contact: Jianlin Shi <jishi>
Docs Contact:
CC: atragler, ctrautma, jhsiao, qding, ralongi
Keywords: FutureFeature
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-07-19 13:01:47 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1941615

Description Ilya Maximets 2021-03-22 14:04:26 UTC
ovsdb-server is able to replicate a standalone database, but it probably
doesn't support multiple remotes, i.e. it is not able to replicate a
clustered setup.

Need to investigate and implement.
This is required for 2-Tier deployment.
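For context, the existing active-backup replication of a standalone database uses ovsdb-server's `--sync-from` option (see ovsdb-server(1)); a minimal sketch, with placeholder addresses and paths:

```shell
# Backup server replicating a standalone database from an active
# server at 192.0.2.10 (addresses and the DB path are placeholders).
ovsdb-server --sync-from=tcp:192.0.2.10:6641 \
             --remote=ptcp:6641 \
             /var/lib/openvswitch/backup.db
```

What this RFE asks about is extending this mechanism to a clustered source, where the remote is a set of cluster members rather than a single server.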

Comment 1 Ilya Maximets 2021-04-19 13:07:49 UTC
After starting an implementation, I realized that in the end we would
have to implement replication on top of the ovsdb-cs layer in order to
bring in all the functionality required to correctly handle re-connections
to multiple remotes and to keep track that we're not replicating a dead
cluster member.  So, instead, the original idea mutated into replication
of the internal _Server database, to give a client access to the usual data
from it and to allow the client to re-connect to a different server if needed.
This way, replication servers will always replicate only one cluster member
regardless of its status, and clients will see whether the replicated server
is still part of the cluster, or whatever else they need.

The implementation idea is to have a special '_synced_Database' table in
the '_Server' database, where the replication server will store data synced
from the 'Database' table of the remote server's '_Server' database.

Clients will need to be updated to check the server status in this new
'_synced_Database' table, but that is much easier than managing
re-connections in the replication code.  There will also be less
pressure on the main DB cluster, because replication servers will be
stably connected to a single server.

Comment 2 Ilya Maximets 2021-04-19 13:22:30 UTC
The first implementation is almost ready.  The missing part is checking the
cluster status from the client side (in ovsdb-cs).  The server side is
functional, but some documentation is needed.  Patches are available here:

  * ovsdb: Add extra internal tables to internal databases for replication purposes.
    https://github.com/openvswitch/ovs/commit/85173a0e98cf5d74e1b470b55d5d7d29d195807d

  * replication: Allow replication of _Server database.
    https://github.com/openvswitch/ovs/commit/4b6cbda048c28e43be82e0c4cbbbd443dacab18b

Patches are on top of the transaction-forwarding implementation from BZ 1941646.
I will send them upstream once the missing parts are implemented.

Comment 3 Ilya Maximets 2021-05-03 14:33:50 UTC
Patches sent for review as part of the 2-Tier deployment patch-set (patches 2-6):
  https://patchwork.ozlabs.org/project/openvswitch/list/?series=241642&state=*

Comment 4 Ilya Maximets 2021-07-19 13:01:47 UTC
The main 2-Tier deployment solution went in a different direction, i.e.
implementing an OVSDB Relay instead of hacking into active-backup
replication.  See BZ 1941615 for details.  So, this BZ is not needed.

If something like this is needed in the future, the patches in the previous
comment could give an idea and serve as an implementation example.
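For reference, the relay direction mentioned above is configured by giving ovsdb-server a `relay:` database specification; a minimal sketch, with placeholder addresses and a database name borrowed from a typical OVN deployment:

```shell
# OVSDB relay serving the OVN_Southbound database: it keeps a single
# connection to a cluster member at 192.0.2.10 and serves clients on
# port 16642, forwarding any write transactions upstream
# (addresses and port are placeholders).
ovsdb-server --remote=ptcp:16642 \
             relay:OVN_Southbound:tcp:192.0.2.10:6642
```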