Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1860347

Summary: OVN DBs not started after controller reboot - pacemaker timeout
Product: Red Hat OpenStack
Component: puppet-pacemaker
Version: 16.1 (Train)
Hardware: x86_64
OS: Linux
Status: CLOSED NOTABUG
Severity: high
Priority: medium
Reporter: Eduardo Olivares <eolivare>
Assignee: RHOS Maint <rhos-maint>
QA Contact: nlevinki <nlevinki>
CC: jjoyce, jschluet, lmiccini, michele, oblaut, slinaber, tvignaud
Keywords: Triaged
Target Milestone: ---
Target Release: ---
Doc Type: If docs needed, set a value
Last Closed: 2020-10-15 14:27:42 UTC
Type: Bug

Description Eduardo Olivares 2020-07-24 10:54:40 UTC
Description of problem:
This issue has been reproduced with this job: https://rhos-ci-staging-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/DFG-network-networking-ovn-16.1_director-rhel-virthost-3cont_2comp_3net-ipv4-geneve-tobiko-neutron/1

It has been reproduced with the tobiko test test_reboot_controller_main_vip

controller-0 is rebooted here:
2020-07-23 17:01:41.139 398009 INFO tobiko.tests.faults.ha.cloud_disruptions [-] disrupt exec: sudo chmod o+w /proc/sysrq-trigger;sudo echo b > /proc/sysrq-trigger on server: controller-0

You can see from controller-0 logs that it started rebooting then:
Jul 23 17:01:12 controller-0 systemd[1]: Started heat_api healthcheck.
Jul 23 15:16:54 controller-0 kernel: Linux version 4.18.0-193.13.2.el8_2.x86_64 (mockbuild.eng.bos.redhat.com) (gcc version 8.3.1 20191121 (Red Hat 8.3.1-5) (GCC)) #1 SMP Mon Jul 13 23:17:28 UTC 2020
Jul 23 15:16:54 controller-0 kernel: Command line: BOOT_IMAGE=(hd0,msdos2)/boot/vmlinuz-4.18.0-193.13.2.el8_2.x86_64 root=UUID=b16ac6f8-e4ff-4a93-a1e2-2e852899fc68 ro console=ttyS0 console=ttyS0,115200n81 no_timer_check crashkernel=auto rhgb quiet

After more than 13 minutes, the test fails because the OVN DB resource on controller-0 has not started:
2020-07-23 17:13:24.320 398009 INFO tobiko.tripleo.pacemaker [-] Retrying pacemaker resource checks attempt 359 of 360
...
        * ovn-dbs-bundle-0      (ocf::ovn:ovndb-servers):       Stopped controller-0
        * ovn-dbs-bundle-1      (ocf::ovn:ovndb-servers):       Master controller-1
        * ovn-dbs-bundle-2      (ocf::ovn:ovndb-servers):       Slave controller-2



This is an error shown in controller-0's /var/log/pacemaker/pacemaker.log:
Jul 23 17:06:21 controller-0 pacemaker-controld  [2843] (lrmd_api_get_metadata_params)  error: Failed to retrieve meta-data for ocf:ovn:ovndb-servers
Jul 23 17:06:21 controller-0 pacemaker-controld  [2843] (build_operation_update)        warning: Failed to get metadata for ovndb_servers (ocf:ovn:ovndb-servers)
Jul 23 17:06:21 controller-0 pacemaker-controld  [2843] (process_lrm_event)     error: Result of start operation for ovndb_servers on ovn-dbs-bundle-0: Timed Out | call=8 key=ovndb_servers_start_0 timeout=200000ms




Version-Release number of selected component (if applicable):
RHOS-16.1-RHEL-8-20200714.n.0
[root@controller-0 ~]# pcs --version
0.10.4


How reproducible:
I don't think it reproduces often. During this job, a previous test rebooted all controller nodes, and after that all pcs resources were up and running.
This is a WIP job (not yet in the production Jenkins); its only difference from the live OVN OSP16.1 tobiko job is that it uses composable roles (networker nodes). I think it was a coincidence that the issue reproduced here.

Steps to Reproduce:
Alternatives:
Run job https://rhos-ci-staging-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/DFG-network-networking-ovn-16.1_director-rhel-virthost-3cont_2comp_3net-ipv4-geneve-tobiko-neutron
Hard reboot controller node with: sudo chmod o+w /proc/sysrq-trigger;sudo echo b > /proc/sysrq-trigger
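Note that the `chmod o+w` in the reproducer is needed because in `sudo echo b > /proc/sysrq-trigger` the redirection is performed by the unprivileged calling shell, not by sudo. A minimal sketch of the pitfall, demonstrated on a temp file rather than the real sysrq node (writing `b` to the real trigger reboots the host):

```shell
# The redirect in `sudo echo b > file` runs as the calling user, which is
# why the reproducer first opens up permissions with chmod. Shown here on
# a temp file we own, so no chmod is needed.
tmp=$(mktemp)
echo b > "$tmp"   # works: we own the temp file
cat "$tmp"        # -> b
# On a real node the chmod can be avoided entirely with:
#   echo b | sudo tee /proc/sysrq-trigger
rm -f "$tmp"
```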

Comment 1 Luca Miccini 2020-07-27 08:30:39 UTC
I can't reproduce this behavior by rebooting the ovn master controller following the provided steps on RHOS-16.1-RHEL-8-20200723.n.0. 
I'll check the logs of your job and see if I spot anything.

Comment 2 Luca Miccini 2020-07-28 07:19:41 UTC
+ . kolla_extend_start
++ [[ ! -d /var/log/kolla/openvswitch ]]
+++ stat -c %a /var/log/kolla/openvswitch
Running command: '/usr/sbin/pacemaker_remoted'
++ [[ 755 != \7\5\5 ]]
+ echo 'Running command: '\''/usr/sbin/pacemaker_remoted'\'''
+ exec /usr/sbin/pacemaker_remoted
(crm_add_logfile)       error: Directory '/var/log/pacemaker' does not exist: logging to '/var/log/pacemaker/pacemaker.log' is disabled
(crm_log_init)  info: Changed active directory to /var/lib/pacemaker/cores
(main)  notice: Starting Pacemaker remote executor
(qb_ipcs_us_publish)    info: server name: lrmd
(pcmk__init_tls_dh)     info: Generating Diffie-Hellman parameters with 2048-bit prime for TLS
(qb_ipcs_us_publish)    info: server name: cib_ro
(qb_ipcs_us_publish)    info: server name: cib_rw
(qb_ipcs_us_publish)    info: server name: cib_shm
(qb_ipcs_us_publish)    info: server name: attrd
(qb_ipcs_us_publish)    info: server name: stonith-ng
(qb_ipcs_us_publish)    info: server name: crmd
(main)  notice: Pacemaker remote executor successfully started and accepting connections
(crm_remote_accept)     info: New remote connection from ::ffff:172.17.1.40
(lrmd_remote_listen)    info: Remote client pending authentication | 0x55cf2e2f4a00 id: 83392302-2a18-474f-8446-c07479239897
(remoted__read_handshake_data)  notice: Remote client connection accepted
(process_lrmd_get_rsc_info)     info: Agent information for 'ovndb_servers' not in cache
(process_lrmd_get_rsc_info)     info: Agent information for 'ovndb_servers:0' not in cache
(process_lrmd_rsc_register)     info: Cached agent information for 'ovndb_servers'
(log_execute)   info: executing - rsc:ovndb_servers action:start call_id:8
(child_timeout_callback)        warning: ovndb_servers_start_0 process (PID 60) timed out
(operation_finished)    warning: ovndb_servers_start_0:60 - timed out after 200000ms
2020-07-23T17:06:20.715908867+00:00 stderr F (operation_finished)       notice: ovndb_servers_start_0:60:stderr [ 2020-07-23T17:03:02Z|00001|unixctl|WARN|failed to connect to /var/run/ovn/ovnsb_db.ctl ]
2020-07-23T17:06:20.715923391+00:00 stderr F (operation_finished)       notice: ovndb_servers_start_0:60:stderr [ ovn-appctl: cannot connect to "/var/run/ovn/ovnsb_db.ctl" (Connection refused) ]
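The start action keeps retrying `ovn-appctl` against the southbound DB control socket until the 200s operation timeout expires. A quick manual check along the same lines (a sketch: the socket path is taken from the error above, and `ovsdb-server/sync-status` is assumed to be available on the DB server's ctl socket):

```shell
# Check whether the SB DB control socket exists and, if it does, ask the
# ovsdb-server behind it for its replication/sync status.
sock=/var/run/ovn/ovnsb_db.ctl
if [ -S "$sock" ]; then
  ovn-appctl -t "$sock" ovsdb-server/sync-status
else
  echo "socket missing: $sock"
fi
```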


Jul 23 17:02:58 controller-0 podman[7585]: 2020-07-23 17:02:58.301150071 +0000 UTC m=+1.089836261 container exec 6106274030d44fa9fd2f14e63c70dd6505b4de4b8fc26890dde32ec8ad29d5f1 (image=undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-ovn-northd:16.1_20200714.1, name=ovn-dbs-bundle-podman-0)
Jul 23 17:02:58 controller-0 pacemaker-controld[2843]:  notice: Result of monitor operation for ovn-dbs-bundle-podman-0 on controller-0: 0 (ok)
Jul 23 17:03:00 controller-0 pacemaker-controld[2843]:  notice: Result of monitor operation for ovn-dbs-bundle-0 on controller-0: 0 (ok)
Jul 23 17:03:00 controller-0 pacemaker-controld[2843]:  notice: Result of probe operation for ovndb_servers on ovn-dbs-bundle-0: 7 (not running)
Jul 23 17:06:21 controller-0 pacemaker-controld[2843]:  error: Result of start operation for ovndb_servers on ovn-dbs-bundle-0: Timed Out
Jul 23 17:06:21 controller-0 pacemaker-controld[2843]:  notice: ovn-dbs-bundle-0-ovndb_servers_start_0:8 [ 2020-07-23T17:03:02Z|00001|unixctl|WARN|failed to connect to /var/run/ovn/ovnsb_db.ctl\novn-appctl: cannot connect to "/var/run/ovn/ovnsb_db.ctl" (Connection refused)\n2020-07-23T17:03:03Z|00001|unixctl|WARN|failed to connect to /var/run/ovn/ovnsb_db.ctl\novn-appctl: cannot connect to "/var/run/ovn/ovnsb_db.ctl" (Connection refused)\n2020-07-23T17:03:03Z|00001|unixctl|WARN|failed to connect to /var/run/ovn/ovnsb_db.ctl\novn-appctl: cannot connect to "/var
Jul 23 17:06:21 controller-0 pacemaker-attrd[2840]:  notice: Setting fail-count-ovndb_servers#start_0[ovn-dbs-bundle-0]: (unset) -> INFINITY
Jul 23 17:06:21 controller-0 pacemaker-attrd[2840]:  notice: Setting last-failure-ovndb_servers#start_0[ovn-dbs-bundle-0]: (unset) -> 1595523981


seems like:
1. container starts -> ok
2. pacemaker_remote starts -> ok
3. ovndb_servers tries to start -> times out
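The `timeout=200000ms` in the failure message is the configured start-operation timeout (200s). If the node is simply slow after the hard reboot, the timeout can be inspected and the INFINITY fail-count cleared with pcs. A sketch, assuming pcs 0.10 syntax (the version shown in the bug) and the resource names from the status output above; the cluster-side commands require a live cluster, so they are left as comments:

```shell
# 200000ms from the log corresponds to a 200s start timeout:
echo "$((200000 / 1000))s"
# Cluster-side commands (sketch):
#   pcs resource config ovn-dbs-bundle     # show configured op timeouts
#   pcs resource cleanup ovndb_servers     # reset fail-count after recovery
```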


I only found:

Jul 23 17:03:10 controller-0 pacemaker-controld  [2843] (crm_procfs_pid_of)     info: Found pacemaker-based active as process 2837
Jul 23 17:03:10 controller-0 pacemaker-controld  [2843] (throttle_check_thresholds)     notice: High CPU load detected: 15.010000
Jul 23 17:03:10 controller-0 pacemaker-controld  [2843] (throttle_send_command)         info: New throttle mode: high load (was undetermined)

maybe a genuine timeout because of the load? I am not sure.