Bug 1979784 - "Error: non zero exit code: 1: OCI runtime error" is occurring unexpectedly.
Summary: "Error: non zero exit code: 1: OCI runtime error" is occurring unexpectedly.
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 16.1 (Train)
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: OSP DFG:Compute
QA Contact: OSP DFG:Compute
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-07-07 04:45 UTC by youngcheol
Modified: 2024-10-01 18:54 UTC (History)
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-09-02 13:00:42 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Issue Tracker OSP-5923 (Private: no; Priority/Status/Summary: none; Last Updated: 2022-08-11 10:57:19 UTC)

Description youngcheol 2021-07-07 04:45:39 UTC
Description of problem:

- The customer is complaining about the following error being logged.
- "Error: non zero exit code: 1: OCI runtime error" is occurring unexpectedly.


Version-Release number of selected component (if applicable):

Red Hat OpenStack Platform release 16.1.6 GA (Train)

python3-novaclient-15.1.1-1.20201113230831.79959ab.el8ost.noarch
puppet-nova-15.6.1-1.20201114010908.51a6857.el8ost.noarch

4b46950e55de  undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-nova-compute:16.1_20210430.1               kolla_start  5 weeks ago  Up 5 weeks ago         nova_compute
56e0d5fa9f29  undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-nova-compute:16.1_20210430.1               kolla_start  5 weeks ago  Up 5 weeks ago         nova_migration_target
7599186c70e4  undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-nova-libvirt:16.1_20210430.1               kolla_start  5 weeks ago  Up 5 weeks ago         nova_libvirt
bd6edb9d24dc  undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-nova-libvirt:16.1_20210430.1               kolla_start  5 weeks ago  Up 5 weeks ago         nova_virtlogd
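
(For reference, a container listing like the one above can be reproduced on the compute node with podman; this is an illustrative command, not necessarily the one the customer ran.)

# Illustrative only: list the nova-related containers on the compute node
sudo podman ps --filter name=nova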


How reproducible:
- The error below is logged in the compute nodes' messages log regardless of any ongoing operation.

Actual results:

Jul  4 05:17:14 compute-0 healthcheck_nova_migration_target[738296]: There is no sshd process listening on port(s) 2022 in the container
Jul  4 05:17:14 compute-0 podman[738296]: 2021-07-04 05:17:14.180288774 +0000 UTC m=+0.462346218 container exec 56e0d5fa9f29c6137202b54158402f00a2fb03dc52488556c24f129289bbdebb (image=undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-nova-compute:16.1_20210430.1, name=nova_migration_target)
Jul  6 01:09:14 compute-0 healthcheck_nova_migration_target[264143]: Error: non zero exit code: 1: OCI runtime error
Jul  6 01:09:14 compute-0 systemd[1]: tripleo_nova_migration_target_healthcheck.service: Main process exited, code=exited, status=1/FAILURE
Jul  6 01:09:14 compute-0 systemd[1]: tripleo_nova_migration_target_healthcheck.service: Failed with result 'exit-code'.
Jul  6 01:09:14 compute-0 systemd[1]: Failed to start nova_migration_target healthcheck.
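
The failing unit is the TripleO container healthcheck for nova_migration_target; per the first log line it checks for an sshd process listening on port 2022 inside the container. A minimal troubleshooting sketch, assuming the unit and container names shown in the log (the pgrep call is an illustrative stand-in for whatever the image's own healthcheck script does):

# Show what the healthcheck unit actually executes, and its recent status
sudo systemctl cat tripleo_nova_migration_target_healthcheck.service
sudo systemctl status tripleo_nova_migration_target_healthcheck.service

# Re-run an equivalent check by hand and capture the exit code; if podman
# cannot even exec into the container ("OCI runtime error"), the failure is
# in the exec step rather than in sshd itself
sudo podman exec nova_migration_target pgrep sshd; echo "exit code: $?"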


Expected results:
- The customer wants to know why this error occurs and whether it can safely be ignored.

Additional info:
 - No related errors appear in the nova-compute logs around the same time:

2021-07-06 01:08:53.374 7 DEBUG oslo_service.periodic_task [req-29e8ea12-fed1-476d-b814-5f7e295feb9b - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
2021-07-06 01:08:56.385 7 DEBUG oslo_service.periodic_task [req-29e8ea12-fed1-476d-b814-5f7e295feb9b - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
2021-07-06 01:08:58.615 7 DEBUG oslo_service.periodic_task [req-29e8ea12-fed1-476d-b814-5f7e295feb9b - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
2021-07-06 01:08:58.648 7 DEBUG nova.compute.manager [req-29e8ea12-fed1-476d-b814-5f7e295feb9b - - - - -] Triggering sync for uuid 7dab8938-30cc-4139-a8d4-1ff3d13e07bd _sync_power_states /usr/lib/python3.6/site-packages/nova/compute/manager.py:8548
2021-07-06 01:08:58.649 7 DEBUG oslo_concurrency.lockutils [-] Lock "7dab8938-30cc-4139-a8d4-1ff3d13e07bd" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:327
2021-07-06 01:08:58.701 7 DEBUG oslo_concurrency.lockutils [-] Lock "7dab8938-30cc-4139-a8d4-1ff3d13e07bd" released by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.052s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:339
2021-07-06 01:09:05.382 7 DEBUG oslo_service.periodic_task [req-29e8ea12-fed1-476d-b814-5f7e295feb9b - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
2021-07-06 01:09:05.383 7 DEBUG nova.compute.manager [req-29e8ea12-fed1-476d-b814-5f7e295feb9b - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.6/site-packages/nova/compute/manager.py:8755
2021-07-06 01:09:06.382 7 DEBUG oslo_service.periodic_task [req-29e8ea12-fed1-476d-b814-5f7e295feb9b - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
2021-07-06 01:09:23.381 7 DEBUG oslo_service.periodic_task [req-29e8ea12-fed1-476d-b814-5f7e295feb9b - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
2021-07-06 01:09:25.398 7 DEBUG oslo_service.periodic_task [req-29e8ea12-fed1-476d-b814-5f7e295feb9b - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
2021-07-06 01:09:38.382 7 DEBUG oslo_service.periodic_task [req-29e8ea12-fed1-476d-b814-5f7e295feb9b - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
2021-07-06 01:09:38.382 7 DEBUG nova.compute.manager [req-29e8ea12-fed1-476d-b814-5f7e295feb9b - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.6/site-packages/nova/compute/manager.py:8066
2021-07-06 01:09:38.383 7 DEBUG nova.compute.manager [req-29e8ea12-fed1-476d-b814-5f7e295feb9b - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.6/site-packages/nova/compute/manager.py:8070
2021-07-06 01:09:38.693 7 DEBUG oslo_concurrency.lockutils [req-29e8ea12-fed1-476d-b814-5f7e295feb9b - - - - -] Acquired lock "refresh_cache-7dab8938-30cc-4139-a8d4-1ff3d13e07bd" lock /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:265
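
(A quick way to double-check that nova-compute logged nothing abnormal around the healthcheck failure; the path below assumes the default containerized log location on OSP 16 compute nodes.)

# Illustrative only: search nova-compute for warnings/errors around 01:09
sudo grep -E 'WARNING|ERROR' /var/log/containers/nova/nova-compute.log | grep '2021-07-06 01:0'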

 - This looks like a match for "https://bugs.launchpad.net/tripleo/+bug/1863635" to me.
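
If this is the same intermittent issue as the Launchpad bug above, later timer runs of the healthcheck should succeed. A hedged way to check, using the unit name from the messages log:

# Occasional failures surrounded by successful runs would point to a
# transient/racy healthcheck rather than a broken migration target
sudo journalctl -u tripleo_nova_migration_target_healthcheck.service --since "2021-07-04" --no-pager | tail -n 60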

