Test steps:

1. Install ovn2.11.1-20 with schema 5.16.0.
2. Copy the older 5.15.0 schema files (https://github.com/openvswitch/ovs/commit/1be1e0e5e0d1c2e996362de9073cf9709e1ba93b#diff-90d020781f83e2f707edb26de0b5c552) to /usr/share/openvswitch.
3. Start pcs-managed OVN on two nodes, master A and slave B:

iptables -F
setenforce 0
(sleep 2; echo "hacluster"; sleep 2; echo "redhat") | pcs host auth 11.1.1.2 11.1.1.11
pcs cluster setup my_cluster --force --start 11.1.1.2 11.1.1.11
pcs cluster enable --all
pcs property set stonith-enabled=false
pcs property set no-quorum-policy=ignore
pcs cluster cib tmp-cib.xml
cp tmp-cib.xml tmp-cib.deltasrc
pcs status
pcs -f tmp-cib.xml resource create ip-11.1.1.50 ocf:heartbeat:IPaddr2 ip=11.1.1.50 op monitor interval=30s
pcs -f tmp-cib.xml resource create ovndb_servers ocf:ovn:ovndb-servers manage_northd=yes master_ip=11.1.1.50 nb_master_port=6641 sb_master_port=6642 promotable
pcs -f tmp-cib.xml resource meta ovndb_servers-clone notify=true
pcs -f tmp-cib.xml constraint order start ip-11.1.1.50 then promote ovndb_servers-clone
pcs -f tmp-cib.xml constraint colocation add ip-11.1.1.50 with master ovndb_servers-clone
pcs -f tmp-cib.xml constraint location ip-11.1.1.50 prefers 11.1.1.2=1000
pcs -f tmp-cib.xml constraint location ovndb_servers-clone prefers 11.1.1.2=1000
pcs -f tmp-cib.xml constraint location ip-11.1.1.50 prefers 11.1.1.11=500
pcs -f tmp-cib.xml constraint location ovndb_servers-clone prefers 11.1.1.11=500
pcs cluster cib-push tmp-cib.xml diff-against=tmp-cib.deltasrc
pcs status

4. Stop master A, then add a logical switch on B.
5. Copy the 5.16.0 schema files back to /usr/share/openvswitch.
6. Restart node A.
7. Show the logical switches on node A.
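The schema swaps in steps 2 and 5 can be sanity-checked by reading the "version" field of the installed schema file. This is a minimal self-contained sketch against a stub file; on a real node the file would be /usr/share/openvswitch/ovn-nb.ovsschema, and `ovsdb-tool schema-version` reports the same field.

```shell
# Sketch: confirm which schema version is installed by reading the
# "version" field of the .ovsschema file. A stub file stands in for
# /usr/share/openvswitch/ovn-nb.ovsschema so the example is self-contained.
cat > /tmp/ovn-nb.ovsschema <<'EOF'
{"name": "OVN_Northbound", "version": "5.16.0", "cksum": "0 0", "tables": {}}
EOF
# On a real node: ovsdb-tool schema-version /usr/share/openvswitch/ovn-nb.ovsschema
grep -o '"version": "[0-9.]*"' /tmp/ovn-nb.ovsschema
# -> "version": "5.16.0"
```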
Reproduced on openvswitch2.12.0-4:

[root@ibm-x3650m5-03 bz1775795]# pcs status
Cluster name: my_cluster
Cluster Summary:
  * Stack: corosync
  * Current DC: 11.1.1.11 (version 2.0.3-2.el8-2c9cea563e) - partition with quorum
  * Last updated: Thu Nov 28 21:51:49 2019
  * Last change: Thu Nov 28 21:51:41 2019 by root via crm_attribute on 11.1.1.11
  * 2 nodes configured
  * 3 resource instances configured
Node List:
  * Online: [ 11.1.1.11 ]
  * OFFLINE: [ 11.1.1.2 ]
Full List of Resources:
  * ip-11.1.1.50 (ocf::heartbeat:IPaddr2): Started 11.1.1.11
  * Clone Set: ovndb_servers-clone [ovndb_servers] (promotable):
    * Masters: [ 11.1.1.11 ]
    * Stopped: [ 11.1.1.2 ]
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/disabled
[root@ibm-x3650m5-03 bz1775795]# ovn-nbctl show
2019-11-29T02:51:52Z|00001|ovsdb_idl|WARN|Logical_Router table in OVN_Northbound database lacks policies column (database needs upgrade?)
2019-11-29T02:51:52Z|00002|ovsdb_idl|WARN|OVN_Northbound database lacks Logical_Router_Policy table (database needs upgrade?)
2019-11-29T02:51:52Z|00003|ovsdb_idl|WARN|Logical_Switch_Port table in OVN_Northbound database lacks ha_chassis_group column (database needs upgrade?)
switch 9247da26-8a8f-4f43-aeb9-32903bfd4d64 (ls1)
[root@ibm-x3650m5-03 bz1775795]# ovn-nbctl ls-add ls2
2019-11-29T02:51:58Z|00002|ovsdb_idl|WARN|Logical_Router table in OVN_Northbound database lacks policies column (database needs upgrade?)
2019-11-29T02:51:58Z|00003|ovsdb_idl|WARN|OVN_Northbound database lacks Logical_Router_Policy table (database needs upgrade?)
2019-11-29T02:51:58Z|00004|ovsdb_idl|WARN|Logical_Switch_Port table in OVN_Northbound database lacks ha_chassis_group column (database needs upgrade?)
[root@ibm-x3650m5-03 bz1775795]# ovn-nbctl show
2019-11-29T02:51:59Z|00001|ovsdb_idl|WARN|Logical_Router table in OVN_Northbound database lacks policies column (database needs upgrade?)
2019-11-29T02:51:59Z|00002|ovsdb_idl|WARN|OVN_Northbound database lacks Logical_Router_Policy table (database needs upgrade?)
2019-11-29T02:51:59Z|00003|ovsdb_idl|WARN|Logical_Switch_Port table in OVN_Northbound database lacks ha_chassis_group column (database needs upgrade?)
switch 1f869058-eabf-4a21-a514-59bbe125320b (ls2)
switch 9247da26-8a8f-4f43-aeb9-32903bfd4d64 (ls1)
<==== created ls2 on node B

[root@dell-per740-12 bz1775795]# cp 5.16.0/ovn-* /usr/share/openvswitch/
cp: overwrite '/usr/share/openvswitch/ovn-nb.ovsschema'? y
cp: overwrite '/usr/share/openvswitch/ovn-sb.ovsschema'? y
[root@dell-per740-12 bz1775795]# pcs cluster start 11.1.1.2
11.1.1.2: Starting Cluster...
<==== copied 5.16.0 schema files and restarted node A

[root@dell-per740-12 bz1775795]# pcs status
Cluster name: my_cluster
Cluster Summary:
  * Stack: corosync
  * Current DC: 11.1.1.11 (version 2.0.3-2.el8-2c9cea563e) - partition with quorum
  * Last updated: Thu Nov 28 21:52:44 2019
  * Last change: Thu Nov 28 21:52:31 2019 by root via crm_attribute on 11.1.1.2
  * 2 nodes configured
  * 3 resource instances configured
Node List:
  * Online: [ 11.1.1.2 11.1.1.11 ]
Full List of Resources:
  * ip-11.1.1.50 (ocf::heartbeat:IPaddr2): Started 11.1.1.2
  * Clone Set: ovndb_servers-clone [ovndb_servers] (promotable):
    * Masters: [ 11.1.1.2 ]
    * Slaves: [ 11.1.1.11 ]
Failed Resource Actions:
  * ovndb_servers_monitor_30000 on 11.1.1.11 'not running' (7): call=39, status='complete', exitreason='', last-rc-change='2019-11-28 21:52:31 -05:00', queued=0ms, exec=57ms
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/disabled
[root@dell-per740-12 bz1775795]# ovn-nbctl show
switch 9247da26-8a8f-4f43-aeb9-32903bfd4d64 (ls1)
<==== ls2 not shown

2019-11-29T02:52:30.550Z|00015|replication|INFO|Schema version mismatch, OVN_Northbound not replicated
2019-11-29T02:52:30.551Z|00016|replication|WARN|Nothing to replicate.
<==== messages in the ovsdb-nb log

[root@dell-per740-12 bz1775795]# rpm -qa | grep -E "openvswitch|ovn"
openvswitch2.12-2.12.0-4.el8fdp.x86_64
ovn2.11-2.11.1-20.el8fdp.x86_64
ovn2.11-host-2.11.1-20.el8fdp.x86_64
ovn2.11-central-2.11.1-20.el8fdp.x86_64
openvswitch-selinux-extra-policy-1.0-19.el8fdp.noarch

Verified on openvswitch2.12.0-8:

[root@dell-per740-12 bz1775795]# pcs status
Cluster name: my_cluster
Cluster Summary:
  * Stack: corosync
  * Current DC: 11.1.1.2 (version 2.0.3-2.el8-2c9cea563e) - partition with quorum
  * Last updated: Thu Nov 28 21:32:44 2019
  * Last change: Thu Nov 28 21:32:26 2019 by root via crm_attribute on 11.1.1.2
  * 2 nodes configured
  * 3 resource instances configured
Node List:
  * Online: [ 11.1.1.2 11.1.1.11 ]
Full List of Resources:
  * ip-11.1.1.50 (ocf::heartbeat:IPaddr2): Started 11.1.1.2
  * Clone Set: ovndb_servers-clone [ovndb_servers] (promotable):
    * Masters: [ 11.1.1.2 ]
    * Slaves: [ 11.1.1.11 ]
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/disabled
[root@dell-per740-12 bz1775795]# ovn-nbctl ls-add ls1
2019-11-29T02:32:50Z|00002|ovsdb_idl|WARN|Logical_Router table in OVN_Northbound database lacks policies column (database needs upgrade?)
2019-11-29T02:32:50Z|00003|ovsdb_idl|WARN|OVN_Northbound database lacks Logical_Router_Policy table (database needs upgrade?)
2019-11-29T02:32:50Z|00004|ovsdb_idl|WARN|Logical_Switch_Port table in OVN_Northbound database lacks ha_chassis_group column (database needs upgrade?)
[root@dell-per740-12 bz1775795]# ovn-nbctl show
2019-11-29T02:32:52Z|00001|ovsdb_idl|WARN|Logical_Router table in OVN_Northbound database lacks policies column (database needs upgrade?)
2019-11-29T02:32:52Z|00002|ovsdb_idl|WARN|OVN_Northbound database lacks Logical_Router_Policy table (database needs upgrade?)
2019-11-29T02:32:52Z|00003|ovsdb_idl|WARN|Logical_Switch_Port table in OVN_Northbound database lacks ha_chassis_group column (database needs upgrade?)
switch 63d78f1d-ed98-410d-ae11-c49a72325ccd (ls1)
[root@dell-per740-12 bz1775795]# pcs cluster stop 11.1.1.2
11.1.1.2: Stopping Cluster (pacemaker)...
11.1.1.2: Stopping Cluster (corosync)...
<==== stopped A

[root@ibm-x3650m5-03 bz1775795]# pcs status
Cluster name: my_cluster
Cluster Summary:
  * Stack: corosync
  * Current DC: 11.1.1.11 (version 2.0.3-2.el8-2c9cea563e) - partition with quorum
  * Last updated: Thu Nov 28 21:33:13 2019
  * Last change: Thu Nov 28 21:33:02 2019 by root via crm_attribute on 11.1.1.11
  * 2 nodes configured
  * 3 resource instances configured
Node List:
  * Online: [ 11.1.1.11 ]
  * OFFLINE: [ 11.1.1.2 ]
Full List of Resources:
  * ip-11.1.1.50 (ocf::heartbeat:IPaddr2): Started 11.1.1.11
  * Clone Set: ovndb_servers-clone [ovndb_servers] (promotable):
    * Masters: [ 11.1.1.11 ]
    * Stopped: [ 11.1.1.2 ]
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/disabled
[root@ibm-x3650m5-03 bz1775795]# ovn-nbctl show
2019-11-29T02:33:20Z|00001|ovsdb_idl|WARN|Logical_Router table in OVN_Northbound database lacks policies column (database needs upgrade?)
2019-11-29T02:33:20Z|00002|ovsdb_idl|WARN|OVN_Northbound database lacks Logical_Router_Policy table (database needs upgrade?)
2019-11-29T02:33:20Z|00003|ovsdb_idl|WARN|Logical_Switch_Port table in OVN_Northbound database lacks ha_chassis_group column (database needs upgrade?)
switch 63d78f1d-ed98-410d-ae11-c49a72325ccd (ls1)
[root@ibm-x3650m5-03 bz1775795]# ovn-nbctl ls-add ls2
2019-11-29T02:33:23Z|00002|ovsdb_idl|WARN|Logical_Router table in OVN_Northbound database lacks policies column (database needs upgrade?)
2019-11-29T02:33:23Z|00003|ovsdb_idl|WARN|OVN_Northbound database lacks Logical_Router_Policy table (database needs upgrade?)
2019-11-29T02:33:23Z|00004|ovsdb_idl|WARN|Logical_Switch_Port table in OVN_Northbound database lacks ha_chassis_group column (database needs upgrade?)
[root@ibm-x3650m5-03 bz1775795]# ovn-nbctl show
2019-11-29T02:33:25Z|00001|ovsdb_idl|WARN|Logical_Router table in OVN_Northbound database lacks policies column (database needs upgrade?)
2019-11-29T02:33:25Z|00002|ovsdb_idl|WARN|OVN_Northbound database lacks Logical_Router_Policy table (database needs upgrade?)
2019-11-29T02:33:25Z|00003|ovsdb_idl|WARN|Logical_Switch_Port table in OVN_Northbound database lacks ha_chassis_group column (database needs upgrade?)
switch 63d78f1d-ed98-410d-ae11-c49a72325ccd (ls1)
switch 4809ecee-d9da-49c0-9818-4dbe202ce511 (ls2)
<=== added ls2 on node B

[root@dell-per740-12 bz1775795]# cp 5.16.0/ovn-* /usr/share/openvswitch/
cp: overwrite '/usr/share/openvswitch/ovn-nb.ovsschema'? y
cp: overwrite '/usr/share/openvswitch/ovn-sb.ovsschema'? y
[root@dell-per740-12 bz1775795]# pcs cluster start 11.1.1.2
11.1.1.2: Starting Cluster...
<==== copied 5.16.0 schema files and restarted node A

[root@dell-per740-12 bz1775795]# pcs status
Cluster name: my_cluster
Cluster Summary:
  * Stack: corosync
  * Current DC: 11.1.1.11 (version 2.0.3-2.el8-2c9cea563e) - partition with quorum
  * Last updated: Thu Nov 28 21:34:24 2019
  * Last change: Thu Nov 28 21:33:58 2019 by root via crm_attribute on 11.1.1.2
  * 2 nodes configured
  * 3 resource instances configured
Node List:
  * Online: [ 11.1.1.2 11.1.1.11 ]
Full List of Resources:
  * ip-11.1.1.50 (ocf::heartbeat:IPaddr2): Started 11.1.1.2
  * Clone Set: ovndb_servers-clone [ovndb_servers] (promotable):
    * Masters: [ 11.1.1.2 ]
    * Slaves: [ 11.1.1.11 ]
Failed Resource Actions:
  * ovndb_servers_monitor_30000 on 11.1.1.11 'not running' (7): call=39, status='complete', exitreason='', last-rc-change='2019-11-28 21:33:58 -05:00', queued=0ms, exec=57ms
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/disabled
[root@dell-per740-12 bz1775795]# ovn-nbctl show
switch 63d78f1d-ed98-410d-ae11-c49a72325ccd (ls1)
switch 4809ecee-d9da-49c0-9818-4dbe202ce511 (ls2)
<=== both logical switches can be shown

2019-11-29T02:33:57.792Z|00015|replication|INFO|Schema version mismatch, checking if OVN_Northbound can still be replicated or not.
2019-11-29T02:33:57.793Z|00016|replication|INFO|OVN_Northbound can be replicated.
2019-11-29T02:33:57.793Z|00017|replication|INFO|Monitor request received. Resetting the database
<=== log for ovsdb-nb

[root@dell-per740-12 bz1775795]# rpm -qa | grep -E "openvswitch|ovn"
openvswitch2.12-2.12.0-8.el8fdp.x86_64
ovn2.11-2.11.1-20.el8fdp.x86_64
ovn2.11-host-2.11.1-20.el8fdp.x86_64
ovn2.11-central-2.11.1-20.el8fdp.x86_64
openvswitch-selinux-extra-policy-1.0-19.el8fdp.noarch

Set VERIFIED.
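The difference between the two builds is visible in the replication logs: openvswitch2.12.0-4 gave up on any schema version mismatch, while 2.12.0-8 goes on to check whether the active server's (older) schema is still compatible with the backup's. A rough, hypothetical sketch of that compatibility idea, reduced to table names (the real check in ovsdb-server also compares columns and their types; the table lists below are illustrative, not the full schemas):

```shell
# Hypothetical sketch: replication can proceed despite a version mismatch
# as long as everything in the active (older, 5.15.0) schema also exists
# in the backup (newer, 5.16.0) schema. Table lists are illustrative.
printf '%s\n' ACL Logical_Router Logical_Switch | sort > /tmp/active-tables
printf '%s\n' ACL Logical_Router Logical_Router_Policy Logical_Switch | sort > /tmp/backup-tables
# comm -23 prints names present only in the active schema; an empty
# result means the backup schema is a superset and replication can work.
if [ -z "$(comm -23 /tmp/active-tables /tmp/backup-tables)" ]; then
  echo "OVN_Northbound can be replicated."
else
  echo "OVN_Northbound cannot be replicated."
fi
# -> OVN_Northbound can be replicated.
```

This matches the test result above: the extra 5.16.0 tables (e.g. Logical_Router_Policy) do not block replication of the 5.15.0 data, so ls2 survives the failback.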
Could not run this test with schema 5.14.0 against schema 5.16.0 for the following reason. The two versions define the load_balancer column with different refTypes:

5.14.0: "load_balancer": {"type": {"key": {"type": "uuid", "refTable": "Load_Balancer", "refType": "strong"}, "min": 0, "max": "unlimited"}},
5.16.0: "load_balancer": {"type": {"key": {"type": "uuid", "refTable": "Load_Balancer", "refType": "weak"}, "min": 0, "max": "unlimited"}},

Replication would fail in that circumstance.
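The incompatibility can be seen by diffing just this one column definition; here is a self-contained sketch with the two fragments quoted above written to temp files:

```shell
# The load_balancer column fragments from the two schema versions,
# copied from above; the only difference is the refType.
cat > /tmp/lb-5.14 <<'EOF'
"load_balancer": {"type": {"key": {"type": "uuid", "refTable": "Load_Balancer", "refType": "strong"}, "min": 0, "max": "unlimited"}},
EOF
cat > /tmp/lb-5.16 <<'EOF'
"load_balancer": {"type": {"key": {"type": "uuid", "refTable": "Load_Balancer", "refType": "weak"}, "min": 0, "max": "unlimited"}},
EOF
# A strong reference requires the referenced row to exist, while a weak
# one is cleared automatically when the target row is deleted, so the
# column types genuinely differ and replication is refused.
if diff -q /tmp/lb-5.14 /tmp/lb-5.16 >/dev/null; then
  echo "column definitions match"
else
  echo "column definitions differ: replication refused"
fi
# -> column definitions differ: replication refused
```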
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:4207