Bug 1684363 - [ovn_cluster] master node can't come back up after openvswitch is restarted
Summary: [ovn_cluster] master node can't come back up after openvswitch is restarted
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: openvswitch
Version: 7.6
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Dumitru Ceara
QA Contact: haidong li
URL:
Whiteboard:
Depends On:
Blocks: 1723291
 
Reported: 2019-03-01 05:08 UTC by haidong li
Modified: 2019-09-30 09:33 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1723291
Environment:
Last Closed: 2019-09-30 09:33:46 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Description haidong li 2019-03-01 05:08:59 UTC
Description of problem:
The master node can't come back up after openvswitch is restarted.

Version-Release number of selected component (if applicable):
[root@hp-dl380pg8-16 ~]# uname -a
Linux hp-dl380pg8-16.rhts.eng.pek2.redhat.com 3.10.0-957.el7.x86_64 #1 SMP Thu Oct 4 20:48:51 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@hp-dl380pg8-16 ~]# rpm -qa |grep openvswitch
kernel-kernel-networking-openvswitch-ovn_ha-1.0-30.noarch
openvswitch-ovn-common-2.9.0-97.el7fdp.x86_64
openvswitch-2.9.0-97.el7fdp.x86_64
openvswitch-selinux-extra-policy-1.0-10.el7fdp.noarch
openvswitch-ovn-host-2.9.0-97.el7fdp.x86_64
openvswitch-ovn-central-2.9.0-97.el7fdp.x86_64

How reproducible:
Every time.

Steps to Reproduce:
1. Set up a cluster with 3 nodes running ovndb_servers as a master/slave resource (a setup sketch follows).
2. Restart openvswitch on the master node.
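
For reference, a cluster of this shape can be created with pcs roughly as follows. This is a minimal sketch modeled on the upstream OVN pacemaker integration guide, not the exact commands used in this test; the virtual IP 70.0.0.50 and the resource names are taken from the status output below, while the ocf:ovn:ovndb-servers parameters are assumptions that vary per deployment:

# on one node of an already-formed pacemaker cluster:
# OVN DB servers as a master/slave resource, using the ovndb-servers
# OCF agent shipped with the openvswitch-ovn packages
pcs resource create ovndb_servers ocf:ovn:ovndb-servers \
    manage_northd=yes master_ip=70.0.0.50 \
    nb_master_port=6641 sb_master_port=6642 \
    master
# virtual IP that should follow whichever node is promoted to master
pcs resource create ip-70.0.0.50 ocf:heartbeat:IPaddr2 \
    ip=70.0.0.50 op monitor interval=30s
pcs constraint order promote ovndb_servers-master then ip-70.0.0.50
pcs constraint colocation add ip-70.0.0.50 with master ovndb_servers-master score=INFINITY

The status output below shows the cluster before and after the restart.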

[root@hp-dl388g8-02 ~]# pcs status
Cluster name: my_cluster

WARNINGS:
Corosync and pacemaker node names do not match (IPs used in setup?)

Stack: corosync
Current DC: hp-dl380pg8-16.rhts.eng.pek2.redhat.com (version 1.1.19-8.el7-c3c624ea3d) - partition with quorum
Last updated: Thu Feb 28 21:18:24 2019
Last change: Thu Feb 28 09:02:19 2019 by root via crm_attribute on hp-dl388g8-02.rhts.eng.pek2.redhat.com

3 nodes configured
4 resources configured

Online: [ hp-dl380pg8-16.rhts.eng.pek2.redhat.com hp-dl388g8-02.rhts.eng.pek2.redhat.com hp-dl388g8-19.rhts.eng.pek2.redhat.com ]

Full list of resources:

 ip-70.0.0.50    (ocf::heartbeat:IPaddr2):    Started hp-dl388g8-02.rhts.eng.pek2.redhat.com
 Master/Slave Set: ovndb_servers-master [ovndb_servers]
     Masters: [ hp-dl388g8-02.rhts.eng.pek2.redhat.com ]
     Slaves: [ hp-dl380pg8-16.rhts.eng.pek2.redhat.com hp-dl388g8-19.rhts.eng.pek2.redhat.com ]

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

[root@hp-dl388g8-02 ~]# systemctl restart openvswitch
[root@hp-dl388g8-02 ~]# pcs status
Cluster name: my_cluster

WARNINGS:
Corosync and pacemaker node names do not match (IPs used in setup?)

Stack: corosync
Current DC: hp-dl380pg8-16.rhts.eng.pek2.redhat.com (version 1.1.19-8.el7-c3c624ea3d) - partition with quorum
Last updated: Thu Feb 28 21:30:59 2019
Last change: Thu Feb 28 21:19:00 2019 by root via crm_attribute on hp-dl388g8-19.rhts.eng.pek2.redhat.com

3 nodes configured
4 resources configured

Online: [ hp-dl380pg8-16.rhts.eng.pek2.redhat.com hp-dl388g8-02.rhts.eng.pek2.redhat.com hp-dl388g8-19.rhts.eng.pek2.redhat.com ]

Full list of resources:

 ip-70.0.0.50    (ocf::heartbeat:IPaddr2):    Started hp-dl388g8-19.rhts.eng.pek2.redhat.com
 Master/Slave Set: ovndb_servers-master [ovndb_servers]
     Masters: [ hp-dl388g8-19.rhts.eng.pek2.redhat.com ]
     Slaves: [ hp-dl380pg8-16.rhts.eng.pek2.redhat.com ]
     Stopped: [ hp-dl388g8-02.rhts.eng.pek2.redhat.com ]

Failed Actions:
* ovndb_servers_demote_0 on hp-dl388g8-02.rhts.eng.pek2.redhat.com 'not running' (7): call=42, status=complete, exitreason='',
    last-rc-change='Thu Feb 28 21:18:59 2019', queued=0ms, exec=40ms


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
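
The failed demote above leaves a failcount against ovndb_servers on hp-dl388g8-02. A hedged manual recovery attempt (standard pcs usage, not verified to work around this particular bug) is to clear the failure and let pacemaker retry the resource:

[root@hp-dl388g8-02 ~]# pcs resource failcount show ovndb_servers
[root@hp-dl388g8-02 ~]# pcs resource cleanup ovndb_servers
[root@hp-dl388g8-02 ~]# pcs status

If the node still shows "Stopped" after the cleanup, the resource agent itself is failing to restart the OVN databases, rather than pacemaker merely remembering the old failure.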

Actual results:
The restarted node's ovndb_servers resource stays "Stopped" and the node does not rejoin the master/slave set after openvswitch is restarted.

Expected results:
The node rejoins the cluster (at least as a slave) after openvswitch is restarted.

Additional info:
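No logs were attached. Assuming the RHEL 7 default paths, the files most likely to show why the demote failed and why the databases did not restart are:

# OVN DB server and northd logs on the failed master
tail -n 50 /var/log/openvswitch/ovsdb-server-nb.log
tail -n 50 /var/log/openvswitch/ovsdb-server-sb.log
tail -n 50 /var/log/openvswitch/ovn-northd.log
# pacemaker / resource agent messages around the failed demote
grep ovndb_servers /var/log/messages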

