Bug 1566559 - [3.9] logspam : CONTAINER_EXITED errors in node logs
Summary: [3.9] logspam : CONTAINER_EXITED errors in node logs
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 3.9.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.9.z
Assignee: Seth Jennings
QA Contact: Weinan Liu
URL:
Whiteboard:
Depends On: 1566197
Blocks:
 
Reported: 2018-04-12 14:02 UTC by Seth Jennings
Modified: 2018-05-17 06:44 UTC
CC List: 6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of: 1566197
Environment:
Last Closed: 2018-05-17 06:43:35 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
Red Hat Product Errata RHBA-2018:1566 (last updated 2018-05-17 06:44:33 UTC)

Description Seth Jennings 2018-04-12 14:02:43 UTC
+++ This bug was initially created as a clone of Bug #1566197 +++

Description of problem:
Node logs contain many of these errors, all from build containers (git-clone, sti, etc.). There is no functional problem; the logs are simply spammed.

Apr 11 13:17:04 ip-172-31-46-124.us-west-2.compute.internal atomic-openshift-node[7645]: E0411 13:17:04.589338    7645 logs.go:383] Container "4fbf541ed1900c4670216a6a1ecf752cd07ac430f5547c5497fbc4b78e564b78" is not running (state="CONTAINER_EXITED")
Apr 11 14:02:18 ip-172-31-46-124.us-west-2.compute.internal atomic-openshift-node[7645]: E0411 14:02:18.168502    7645 logs.go:383] Container "dba4c535666d05310889965418592727047320743a233e226e2266b399836150" is not running (state="CONTAINER_EXITED")
Apr 11 14:02:41 ip-172-31-46-124.us-west-2.compute.internal atomic-openshift-node[7645]: E0411 14:02:41.342645    7645 logs.go:383] Container "a946289b36fe3c375c29dce020005424f3b980237892253d42b8bd8bfb595756" is not running (state="CONTAINER_EXITED")
Apr 11 14:02:49 ip-172-31-46-124.us-west-2.compute.internal atomic-openshift-node[7645]: E0411 14:02:49.907317    7645 logs.go:383] Container "e1d6014330e7422c03ae6db501d4fb296a4501355517cb60e2f910f54741361d" is not running (state="CONTAINER_EXITED")
Apr 11 14:03:09 ip-172-31-46-124.us-west-2.compute.internal atomic-openshift-node[7645]: E0411 14:03:09.494930    7645 logs.go:383] Container "e9d83032c9e51d1aaa1ba05a5300fc119f3527f94cb3ce7fc521c795e2e58d4d" is not running (state="CONTAINER_EXITED")
Apr 11 14:03:14 ip-172-31-46-124.us-west-2.compute.internal atomic-openshift-node[7645]: E0411 14:03:14.796883    7645 logs.go:383] Container "62ae6a9316b1e86492fb06b9156e652fb2131c88ea242c6efabcfe4b45ec1406" is not running (state="CONTAINER_EXITED")
Apr 11 14:04:09 ip-172-31-46-124.us-west-2.compute.internal atomic-openshift-node[7645]: E0411 14:04:09.201581    7645 logs.go:383] Container "c9217a9af025eb238c1916d5c55bb338ed933b3a8124826d87ad48f100aa2b0c" is not running (state="CONTAINER_EXITED")
Apr 11 14:04:29 ip-172-31-46-124.us-west-2.compute.internal atomic-openshift-node[7645]: E0411 14:04:29.423235    7645 logs.go:383] Container "f2e89bfcc93a3359f071f24c608a94700da6611df627327c86fb4f224bf88d3a" is not running (state="CONTAINER_EXITED")
Apr 11 14:04:38 ip-172-31-46-124.us-west-2.compute.internal atomic-openshift-node[7645]: E0411 14:04:38.311603    7645 logs.go:383] Container "289c7595a39dff41a7f4adfc9af3d673e9601395f306d911c6fc8987bb6765cf" is not running (state="CONTAINER_EXITED")

Version-Release number of selected component (if applicable):
openshift v3.9.19
kubernetes v1.9.1+a0ce1bc657
etcd 3.2.16

How reproducible:
Run builds

Steps to Reproduce:
1. oc new-project test
2. oc new-app --template=django-psql-example

Actual results:
Node logs are filled with CONTAINER_EXITED errors.

Expected results:
These messages should not be logged at the error level, since a container exiting after a build completes is normal behavior.

Additional info:

--- Additional comment from Seth Jennings on 2018-04-11 14:23:26 EDT ---

Kube PR:
https://github.com/kubernetes/kubernetes/pull/62416
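
The message in question comes from the kubelet's CRI log reader (logs.go, per the source reference in the journal entries above). Below is a minimal, self-contained Go sketch of the shape of the fix, assuming the PR demotes the message from an error to a verbosity-gated info log; the function and variable names are illustrative, not the kubelet's own, and the linked PR has the actual patch.

package main

// Hedged sketch of the logspam fix: assume the change demotes the
// "Container ... is not running" message from glog.Errorf to a
// verbosity-gated glog.V(n).Infof. logContainerNotRunning and its
// parameters are illustrative names, not the kubelet's own.

import (
	"flag"

	"github.com/golang/glog"
)

func logContainerNotRunning(containerID, state string) {
	// Before the fix: glog.Errorf("Container %q is not running (state=%q)", ...)
	// fired for every log read against a finished build container, spamming
	// the node journal at error level.
	// After the fix (assumed shape): the same message is gated behind a high
	// verbosity, so it only appears when the node runs at e.g. --loglevel=5.
	glog.V(5).Infof("Container %q is not running (state=%q)", containerID, state)
}

func main() {
	flag.Parse() // glog registers its -v and -logtostderr flags on the default FlagSet
	logContainerNotRunning("4fbf541ed1900c4670216a6a1ecf752cd07ac430f5547c5497fbc4b78e564b78", "CONTAINER_EXITED")
}

Run with -logtostderr -v=5 to see the message on stderr; at the default verbosity it stays silent, which is the point of the change.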

Comment 1 Seth Jennings 2018-04-12 14:09:08 UTC
OSE 3.9 PR:
https://github.com/openshift/ose/pull/1211

Comment 8 Weinan Liu 2018-05-03 05:51:41 UTC
Issue fixed.

# oc version
oc v3.9.27
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://ip-172-18-13-135.ec2.internal:8443
openshift v3.9.27
kubernetes v1.9.1+a0ce1bc657

# uname -a
Linux ip-172-18-13-135.ec2.internal 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux

Node logs:
[root@ip-172-18-13-135 ~]# systemctl status atomic-openshift-node -l
● atomic-openshift-node.service - OpenShift Node
   Loaded: loaded (/etc/systemd/system/atomic-openshift-node.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/atomic-openshift-node.service.d
           └─openshift-sdn-ovs.conf
   Active: active (running) since Wed 2018-05-02 21:59:51 EDT; 3h 49min ago
     Docs: https://github.com/openshift/origin
 Main PID: 14328 (openshift)
   Memory: 78.3M
   CGroup: /system.slice/atomic-openshift-node.service
           └─14328 /usr/bin/openshift start node --config=/etc/origin/node/node-config.yaml --loglevel=5

May 03 01:49:35 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:35.651492   14328 helpers.go:827] eviction manager: observations: signal=memory.available, available: 1279452Ki, capacity: 3687560Ki, time: 2018-05-03 01:49:20.444124789 -0400 EDT m=+13772.290124065
May 03 01:49:35 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:35.651933   14328 helpers.go:827] eviction manager: observations: signal=nodefs.available, available: 14235740Ki, capacity: 18422Mi, time: 2018-05-03 01:49:20.444124789 -0400 EDT m=+13772.290124065
May 03 01:49:35 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:35.651957   14328 helpers.go:827] eviction manager: observations: signal=nodefs.inodesFree, available: 9399560, capacity: 9Mi, time: 2018-05-03 01:49:20.444124789 -0400 EDT m=+13772.290124065
May 03 01:49:35 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:35.651996   14328 helpers.go:827] eviction manager: observations: signal=imagefs.available, available: 10912256Ki, capacity: 12492Mi, time: 2018-05-03 01:49:20.444124789 -0400 EDT m=+13772.290124065
May 03 01:49:35 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:35.652007   14328 helpers.go:829] eviction manager: observations: signal=allocatableMemory.available, available: 3394848Ki, capacity: 3687560Ki
May 03 01:49:35 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:35.652029   14328 eviction_manager.go:325] eviction manager: no resources are starved
May 03 01:49:35 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:35.678152   14328 generic.go:183] GenericPLEG: Relisting
May 03 01:49:36 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:36.239194   14328 fs.go:406] got devicemapper fs capacity stats: capacity: 13098811392 free: 11174150144 available: 11174150144:
May 03 01:49:36 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:36.683846   14328 generic.go:183] GenericPLEG: Relisting
May 03 01:49:36 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:36.924850   14328 kubelet.go:1924] SyncLoop (housekeeping)
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.364977   14328 prober.go:165] HTTP-Probe Host: https://10.129.0.2, Port: 8443, Path: /healthz
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.365003   14328 prober.go:168] HTTP-Probe Headers: map[]
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.372418   14328 http.go:96] Probe succeeded for https://10.129.0.2:8443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Date:[Thu, 03 May 2018 05:49:37 GMT] Content-Type:[text/plain; charset=utf-8] Content-Length:[2]] 0xc4230814c0 2 [] false false map[] 0xc4211e4100 0xc42336f810}
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.372455   14328 prober.go:118] Readiness probe for "webconsole-55dd868cdf-cqqsz_openshift-web-console(3a33fc3c-4e76-11e8-8584-0e2b0f5d1944):webconsole" succeeded
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.542795   14328 prober.go:165] HTTP-Probe Host: https://10.129.0.6, Port: 8443, Path: /healthz
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.542822   14328 prober.go:168] HTTP-Probe Headers: map[]
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.564854   14328 http.go:96] Probe succeeded for https://10.129.0.6:8443/healthz, Response: {200 OK 200 HTTP/2.0 2 0 map[Content-Length:[2] Date:[Thu, 03 May 2018 05:49:37 GMT] Content-Type:[text/plain; charset=utf-8]] 0xc4210ea5c0 2 [] false false map[] 0xc4211e4300 0xc42336fa20}
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.564902   14328 prober.go:118] Readiness probe for "apiserver-b729m_openshift-template-service-broker(ed657cbe-4e76-11e8-8584-0e2b0f5d1944):c" succeeded
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.583122   14328 log_handler.go:32] AWS API Send: ec2metadata GetMetadata &{GetMetadata GET /meta-data/network/interfaces/macs <nil> <nil>} <nil>
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.583144   14328 log_handler.go:37] AWS API ValidateResponse: ec2metadata GetMetadata &{GetMetadata GET /meta-data/network/interfaces/macs <nil> <nil>} <nil> 200 OK
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.584480   14328 log_handler.go:32] AWS API Send: ec2metadata GetMetadata &{GetMetadata GET /meta-data/network/interfaces/macs/0e:2b:0f:5d:19:44/local-ipv4s <nil> <nil>} <nil>
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.584499   14328 log_handler.go:37] AWS API ValidateResponse: ec2metadata GetMetadata &{GetMetadata GET /meta-data/network/interfaces/macs/0e:2b:0f:5d:19:44/local-ipv4s <nil> <nil>} <nil> 200 OK
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.585693   14328 log_handler.go:32] AWS API Send: ec2metadata GetMetadata &{GetMetadata GET /meta-data/public-ipv4 <nil> <nil>} <nil>
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.585712   14328 log_handler.go:37] AWS API ValidateResponse: ec2metadata GetMetadata &{GetMetadata GET /meta-data/public-ipv4 <nil> <nil>} <nil> 200 OK
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.586952   14328 log_handler.go:32] AWS API Send: ec2metadata GetMetadata &{GetMetadata GET /meta-data/local-hostname <nil> <nil>} <nil>
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.586971   14328 log_handler.go:37] AWS API ValidateResponse: ec2metadata GetMetadata &{GetMetadata GET /meta-data/local-hostname <nil> <nil>} <nil> 200 OK
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.588774   14328 log_handler.go:32] AWS API Send: ec2metadata GetMetadata &{GetMetadata GET /meta-data/public-hostname <nil> <nil>} <nil>
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.588795   14328 log_handler.go:37] AWS API ValidateResponse: ec2metadata GetMetadata &{GetMetadata GET /meta-data/public-hostname <nil> <nil>} <nil> 200 OK
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.703286   14328 generic.go:183] GenericPLEG: Relisting
May 03 01:49:37 ip-172-18-13-135.ec2.internal atomic-openshift-node[14328]: I0503 01:49:37.805302   14328 kubelet.go:2103] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:

Comment 11 errata-xmlrpc 2018-05-17 06:43:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1566

