Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1340785

Summary: JWS logging problem
Product: OpenShift Container Platform
Reporter: Alexander Koksharov <akokshar>
Component: ImageStreams
Assignee: kconner
Status: CLOSED DEFERRED
QA Contact: Tomas Schlosser <tschloss>
Severity: high
Docs Contact:
Priority: unspecified
Version: 3.1.0
CC: aos-bugs, erich, jokerman, kconner, mmccomas
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-07-19 15:41:27 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Alexander Koksharov 2016-05-30 09:40:10 UTC
Description of problem:
I am using https://docs.openshift.com/enterprise/3.1/using_images/xpaas_images/jws.html and noticed that the following access log is being written:

$ ls
localhost_access_log.2016-05-26.txt
$ pwd
/opt/webserver/logs

I am wondering:
* Will the access log be visible inside the EFK stack?
* What will happen as the number/size of the access logs grows over time? How will Docker deal with this?
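For reference, a common workaround in containerized Tomcat deployments (my assumption, not something the JWS image is documented to do) is to point the AccessLogValve at the container's stdout so the entries reach the Docker logging driver and, from there, fluentd/EFK. A minimal `server.xml` sketch:

```xml
<!-- Hypothetical server.xml snippet: write access logs to the container's
     stdout (/dev/stdout) instead of /opt/webserver/logs, so the Docker
     logging driver and the EFK stack can collect them. -->
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="/dev" prefix="stdout" suffix=""
       rotatable="false" buffered="false"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" />
```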


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:
I have installed JWS in my lab and did some checks:
- access log file:
root@master1 # oc exec jws-app-1-79ven -- cat /opt/webserver/logs/localhost_access_log.2016-05-30.txt
::1 - MMSGkRf7 [30/May/2016:03:25:29 -0400] "GET /manager/jmxproxy/?get=Catalina%3Atype%3DServer&att=stateName HTTP/1.1" 200 64
::1 - MMSGkRf7 [30/May/2016:03:25:39 -0400] "GET /manager/jmxproxy/?get=Catalina%3Atype%3DServer&att=stateName HTTP/1.1" 200 64
::1 - MMSGkRf7 [30/May/2016:03:25:49 -0400] "GET /manager/jmxproxy/?get=Catalina%3Atype%3DServer&att=stateName HTTP/1.1" 200 64
::1 - MMSGkRf7 [30/May/2016:03:25:59 -0400] "GET /manager/jmxproxy/?get=Catalina%3Atype%3DServer&att=stateName HTTP/1.1" 200 64
::1 - MMSGkRf7 [30/May/2016:03:26:09 -0400] "GET /manager/jmxproxy/?get=Catalina%3Atype%3DServer&att=stateName HTTP/1.1" 200 64
....

- No access logs are stored on the host machine, so these logs won't be collected by fluentd:

root@worknode2 # grep -R -i jmxproxy ./*
root@worknode2 # pwd
/var/lib

Log rotation for /opt/webserver/logs does not appear to be configured either, so the container's filesystem can eventually run out of disk space.
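Until the image handles this itself, rotation could be bolted onto the file-based logs with a logrotate rule inside the container. This is only a sketch: the path matches what I see above, and the size/retention values are arbitrary placeholders.

```
# /etc/logrotate.d/jws-access -- hypothetical rotation policy for the
# file-based Tomcat access logs, to keep them from filling the filesystem.
/opt/webserver/logs/*_access_log.*.txt {
    daily
    rotate 7
    size 50M
    compress
    missingok
    notifempty
}
```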

Expected results:


Additional info:

Comment 7 Eric Rich 2016-07-19 15:42:35 UTC
*** Bug 1342408 has been marked as a duplicate of this bug. ***