Bug 1469711 - Kibana container grows in memory till out of memory
Status: ASSIGNED
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.5.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.5.z
Assigned To: Jeff Cantrill
QA Contact: Xia Zhao
Depends On: 1465464
Blocks:
Reported: 2017-07-11 13:17 EDT by Jeff Cantrill
Modified: 2017-08-18 04:00 EDT
CC List: 9 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: Memory was not being set properly.
Consequence: The Kibana container grows in memory until it is killed out of memory.
Fix: Use underscores instead of dashes in the memory switch.
Result: Memory settings are honored by the Node.js runtime.
Story Points: ---
Clone Of: 1465464
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
oc describe kibana (11.07 KB, text/plain)
2017-08-18 04:00 EDT, Xia Zhao
no flags


External Trackers
Tracker ID Priority Status Summary Last Updated
Github origin-aggregated-logging/pull/512 None None None 2017-07-11 13:22 EDT

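The Doc Text above describes the fix as using underscores instead of dashes in the Node.js memory switch (tracked in origin-aggregated-logging/pull/512, linked above). A minimal sketch of what that change could look like in a Kibana run script follows; the 128 MiB headroom and the entry-point path are illustrative assumptions, not the actual PR diff:

#!/bin/bash
# Illustrative sketch only, not the actual PR diff.
# KIBANA_MEMORY_LIMIT arrives in bytes from limits.memory (see the container
# environment in the describe output in the comments below).
if [ -n "${KIBANA_MEMORY_LIMIT:-}" ]; then
    # Leave headroom below the cgroup limit (128 MiB is an assumption).
    heap_mb=$(( KIBANA_MEMORY_LIMIT / 1048576 - 128 ))
    # Underscores, not dashes: with this spelling the V8 option parser
    # applies the heap cap, so the process stops growing past its limit.
    NODE_OPTS="--max_old_space_size=${heap_mb}"
fi
exec node ${NODE_OPTS} /usr/share/kibana/src/cli   # entry-point path assumed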
Comment 3 Xia Zhao 2017-07-25 06:01:01 EDT
Set up the env and will monitor it for 24 hours; results will be updated later.
Comment 5 Junqi Zhao 2017-07-26 02:22:45 EDT
Deployed logging 3.5.1. After running for 24 hours, the logging-kibana pod had restarted 5 times; describing the logging-kibana pod shows OOMKilled info for both the kibana and kibana-proxy containers.

# oc get po
NAME                          READY     STATUS    RESTARTS   AGE
logging-curator-1-vb5m8       1/1       Running   0          1d
logging-es-2jmip3wu-1-r0gz9   1/1       Running   0          1d
logging-fluentd-lgmvs         1/1       Running   0          1d
logging-fluentd-m7pxk         1/1       Running   0          1d
logging-kibana-1-np2p1        2/2       Running   5          1d

# oc describe po logging-kibana-1-np2p1
Name:			logging-kibana-1-np2p1
Namespace:		logging
Security Policy:	restricted
Node:			host-8-174-71.host.centralci.eng.rdu2.redhat.com/10.8.174.71
Start Time:		Tue, 25 Jul 2017 01:52:18 -0400
Labels:			component=kibana
			deployment=logging-kibana-1
			deploymentconfig=logging-kibana
			logging-infra=kibana
			provider=openshift
Status:			Running
IP:			10.129.0.17
Controllers:		ReplicationController/logging-kibana-1
Containers:
  kibana:
    Container ID:	docker://2b8f76048c872a1637350c7b9fdba65e1e392230c1df523ffe2d5b470b39cd2a
    Image:		brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-kibana:v3.5
    Image ID:		docker-pullable://brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-kibana@sha256:2390630f859ca204f0cecfabfe11afa27230c6d0f2f7ae3f2f8f588752912fc8
    Port:		
    Limits:
      memory:	736Mi
    Requests:
      memory:		736Mi
    State:		Running
      Started:		Tue, 25 Jul 2017 23:18:32 -0400
    Last State:		Terminated
      Reason:		OOMKilled
      Exit Code:	137
      Started:		Tue, 25 Jul 2017 17:55:40 -0400
      Finished:		Tue, 25 Jul 2017 23:18:30 -0400
    Ready:		True
    Restart Count:	4
    Volume Mounts:
      /etc/kibana/keys from kibana (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from aggregated-logging-kibana-token-mnr00 (ro)
    Environment Variables:
      ES_HOST:			logging-es
      ES_PORT:			9200
      KIBANA_MEMORY_LIMIT:	771751936 (limits.memory)
  kibana-proxy:
    Container ID:	docker://ec4fc1e68e2e8079db2a5e1930939541602fd2f05be145732ca29f16e4e5073e
    Image:		brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-auth-proxy:v3.5
    Image ID:		docker-pullable://brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-auth-proxy@sha256:a08c0ff7cbbfdc2a57377438d90431034c7a22ed49eb3a17af365135f4fb2d02
    Port:		3000/TCP
    Limits:
      memory:	96Mi
    Requests:
      memory:		96Mi
    State:		Running
      Started:		Tue, 25 Jul 2017 08:07:07 -0400
    Last State:		Terminated
      Reason:		OOMKilled
      Exit Code:	137
      Started:		Tue, 25 Jul 2017 01:52:31 -0400
      Finished:		Tue, 25 Jul 2017 08:07:05 -0400
    Ready:		True
    Restart Count:	1
    Volume Mounts:
      /secret from kibana-proxy (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from aggregated-logging-kibana-token-mnr00 (ro)
    Environment Variables:
      OAP_BACKEND_URL:			http://localhost:5601
      OAP_AUTH_MODE:			oauth2
      OAP_TRANSFORM:			user_header,token_header
      OAP_OAUTH_ID:			kibana-proxy
      OAP_MASTER_URL:			https://kubernetes.default.svc.cluster.local
      OAP_PUBLIC_MASTER_URL:		https://host-8-175-4.host.centralci.eng.rdu2.redhat.com:8443
      OAP_LOGOUT_REDIRECT:		https://host-8-175-4.host.centralci.eng.rdu2.redhat.com:8443/console/logout
      OAP_MASTER_CA_FILE:		/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      OAP_DEBUG:			False
      OAP_OAUTH_SECRET_FILE:		/secret/oauth-secret
      OAP_SERVER_CERT_FILE:		/secret/server-cert
      OAP_SERVER_KEY_FILE:		/secret/server-key
      OAP_SERVER_TLS_FILE:		/secret/server-tls.json
      OAP_SESSION_SECRET_FILE:		/secret/session-secret
      OCP_AUTH_PROXY_MEMORY_LIMIT:	100663296 (limits.memory)
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	True 
  PodScheduled 	True 
Volumes:
  kibana:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	logging-kibana
  kibana-proxy:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	logging-kibana-proxy
  aggregated-logging-kibana-token-mnr00:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	aggregated-logging-kibana-token-mnr00
QoS Class:	Burstable
Tolerations:	<none>
No events.
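For a quicker check than the full describe output, the per-container restart count and last termination reason can be pulled with jsonpath (pod name and namespace taken from the output above):

# Prints one line per container: name, restart count, last termination reason
oc get po logging-kibana-1-np2p1 -n logging -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.restartCount}{"\t"}{.lastState.terminated.reason}{"\n"}{end}'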
Comment 8 Xia Zhao 2017-08-08 05:17:48 EDT
Deployed logging 3.5.1 and ran this script twice:

for i in {1..300}; do
  curl --fail --max-time 10 -sk -H "Authorization: Bearer $(oc whoami -t)" "https://${kibana_route}/elasticsearch/" > /dev/null
done
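Here ${kibana_route} is a placeholder for the Kibana route hostname; one way to obtain it, assuming the default route name logging-kibana in the logging project:

kibana_route=$(oc get route logging-kibana -n logging -o jsonpath='{.spec.host}')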

The logging-kibana pod restarted 2 times; describing the logging-kibana pod shows OOMKilled info for the kibana-proxy container:


# oc get po
NAME                          READY     STATUS    RESTARTS   AGE
logging-curator-1-z5fpp       1/1       Running   0          36m
logging-es-cimeql32-1-k75mv   1/1       Running   0          36m
logging-fluentd-5dxjp         1/1       Running   0          37m
logging-fluentd-rjhj4         1/1       Running   0          37m
logging-kibana-1-ztcdj        2/2       Running   2          36m

# oc describe po logging-kibana-1-ztcdj
Name:			logging-kibana-1-ztcdj
Namespace:		logging
Security Policy:	restricted
Node:			host-8-241-32.host.centralci.eng.rdu2.redhat.com/172.16.120.71
Start Time:		Tue, 08 Aug 2017 04:34:18 -0400
Labels:			component=kibana
			deployment=logging-kibana-1
			deploymentconfig=logging-kibana
			logging-infra=kibana
			provider=openshift
Status:			Running
IP:			10.129.0.18
Controllers:		ReplicationController/logging-kibana-1
Containers:
  kibana:
    Container ID:	docker://dd484e0867b6f0285464613f78756f3fcbafda17213480732a5ca61be6a22672
    Image:		brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-kibana:v3.5
    Image ID:		docker-pullable://brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-kibana@sha256:bc45d31e4e9daab9cfbe3deebc4a22e4db0d9b1bea2d3f495989885aa9041cca
    Port:		
    Limits:
      memory:	736Mi
    Requests:
      memory:		736Mi
    State:		Running
      Started:		Tue, 08 Aug 2017 04:34:29 -0400
    Ready:		True
    Restart Count:	0
    Volume Mounts:
      /etc/kibana/keys from kibana (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from aggregated-logging-kibana-token-73267 (ro)
    Environment Variables:
      ES_HOST:			logging-es
      ES_PORT:			9200
      KIBANA_MEMORY_LIMIT:	771751936 (limits.memory)
  kibana-proxy:
    Container ID:	docker://c8cc718caadc99579218ed7689fc1e6667dd7af02cde7b7f8250ea2e518f840a
    Image:		brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-auth-proxy:v3.5
    Image ID:		docker-pullable://brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-auth-proxy@sha256:a08c0ff7cbbfdc2a57377438d90431034c7a22ed49eb3a17af365135f4fb2d02
    Port:		3000/TCP
    Limits:
      memory:	96Mi
    Requests:
      memory:		96Mi
    State:		Running
      Started:		Tue, 08 Aug 2017 04:58:58 -0400
    Last State:		Terminated
      Reason:		OOMKilled
      Exit Code:	137
      Started:		Tue, 08 Aug 2017 04:50:28 -0400
      Finished:		Tue, 08 Aug 2017 04:58:41 -0400
    Ready:		True
    Restart Count:	2
    Volume Mounts:
      /secret from kibana-proxy (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from aggregated-logging-kibana-token-73267 (ro)
    Environment Variables:
      OAP_BACKEND_URL:			http://localhost:5601
      OAP_AUTH_MODE:			oauth2
      OAP_TRANSFORM:			user_header,token_header
      OAP_OAUTH_ID:			kibana-proxy
      OAP_MASTER_URL:			https://kubernetes.default.svc.cluster.local
      OAP_PUBLIC_MASTER_URL:		https://host-8-241-89.host.centralci.eng.rdu2.redhat.com:8443
      OAP_LOGOUT_REDIRECT:		https://host-8-241-89.host.centralci.eng.rdu2.redhat.com:8443/console/logout
      OAP_MASTER_CA_FILE:		/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      OAP_DEBUG:			False
      OAP_OAUTH_SECRET_FILE:		/secret/oauth-secret
      OAP_SERVER_CERT_FILE:		/secret/server-cert
      OAP_SERVER_KEY_FILE:		/secret/server-key
      OAP_SERVER_TLS_FILE:		/secret/server-tls.json
      OAP_SESSION_SECRET_FILE:		/secret/session-secret
      OCP_AUTH_PROXY_MEMORY_LIMIT:	100663296 (limits.memory)
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	True 
  PodScheduled 	True 
Volumes:
  kibana:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	logging-kibana
  kibana-proxy:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	logging-kibana-proxy
  aggregated-logging-kibana-token-73267:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	aggregated-logging-kibana-token-73267
QoS Class:	Burstable
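To confirm whether the Node.js heap switch actually reaches the Kibana process, the command line of the container's main process can be inspected (a sketch, assuming the run script execs node as PID 1):

# /proc/1/cmdline is NUL-separated; tr makes it readable
oc exec logging-kibana-1-ztcdj -c kibana -n logging -- cat /proc/1/cmdline | tr '\0' ' '; echo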
Comment 10 Xia Zhao 2017-08-18 04:00:10 EDT
Issue reproduced with logging-kibana:3.5.0-29; the oc describe output of the kibana pod is attached:

$ oc get po
NAME                          READY     STATUS    RESTARTS   AGE
logging-curator-1-xtfb4       1/1       Running   0          1h
logging-es-vajozve8-1-jjft3   1/1       Running   0          1h
logging-fluentd-pjx8h         1/1       Running   0          1h
logging-fluentd-xb928         1/1       Running   0          1h
logging-kibana-1-wfrbn        2/2       Running   2          1h 

# openshift version
openshift v3.5.5.31.19
kubernetes v1.5.2+43a9be4
etcd 3.1.0

Image tested with:
openshift3/logging-kibana          3.5.0-29            1907bbf06cd6        37 hours ago        613.8 MB
Comment 11 Xia Zhao 2017-08-18 04:00 EDT
Created attachment 1315086 [details]
oc describe kibana
