Bug 1340785 - JWS logging problem
Summary: JWS logging problem
Status: CLOSED DEFERRED
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Image
Version: 3.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: kconner
QA Contact: Tomas Schlosser
URL:
Whiteboard:
Keywords:
Duplicates: 1342408
Depends On:
Blocks:
 
Reported: 2016-05-30 09:40 UTC by Alexander Koksharov
Modified: 2016-07-19 15:42 UTC
CC: 5 users

Clone Of:
Last Closed: 2016-07-19 15:41:27 UTC




External Trackers
Tracker ID Priority Status Summary Last Updated
JBoss Issue Tracker CLOUD-57 Major Closed Tomcat's access log valve logs to file in container 2017-07-25 14:39 UTC

Internal Trackers: 1342408

Description Alexander Koksharov 2016-05-30 09:40:10 UTC
Description of problem:
I am using the JWS image documented at https://docs.openshift.com/enterprise/3.1/using_images/xpaas_images/jws.html and have noticed that the following access log is written inside the container:

$ ls
localhost_access_log.2016-05-26.txt
$ pwd
/opt/webserver/logs

I am wondering:
* Will the access log be visible inside the EFK stack?
* What will happen as the access log grows over time? How will Docker deal with this?
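For context, the dated file above is what Tomcat's AccessLogValve produces when it is configured with a log directory in conf/server.xml. A sketch of a typical valve definition (the exact attributes shipped in the JWS image may differ):

<!-- Typical AccessLogValve configuration inside the <Host> element of     -->
<!-- conf/server.xml. With a directory-based valve like this, requests are -->
<!-- written to ${catalina.base}/logs/localhost_access_log.YYYY-MM-DD.txt  -->
<!-- inside the container, not to stdout, so docker/fluentd never see them.-->
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs"
       prefix="localhost_access_log" suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" />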


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:
I have installed JWS in my lab and performed some checks:
- access log file:
root@master1 # oc exec jws-app-1-79ven -- cat /opt/webserver/logs/localhost_access_log.2016-05-30.txt
::1 - MMSGkRf7 [30/May/2016:03:25:29 -0400] "GET /manager/jmxproxy/?get=Catalina%3Atype%3DServer&att=stateName HTTP/1.1" 200 64
::1 - MMSGkRf7 [30/May/2016:03:25:39 -0400] "GET /manager/jmxproxy/?get=Catalina%3Atype%3DServer&att=stateName HTTP/1.1" 200 64
::1 - MMSGkRf7 [30/May/2016:03:25:49 -0400] "GET /manager/jmxproxy/?get=Catalina%3Atype%3DServer&att=stateName HTTP/1.1" 200 64
::1 - MMSGkRf7 [30/May/2016:03:25:59 -0400] "GET /manager/jmxproxy/?get=Catalina%3Atype%3DServer&att=stateName HTTP/1.1" 200 64
::1 - MMSGkRf7 [30/May/2016:03:26:09 -0400] "GET /manager/jmxproxy/?get=Catalina%3Atype%3DServer&att=stateName HTTP/1.1" 200 64
....

- No access logs are stored on the host machine, so these logs won't be collected by fluentd:

root@worknode2 # grep -R -i jmxproxy ./*
root@worknode2 # pwd
/var/lib

Log rotation for /opt/webserver/logs does not appear to be configured either, so the container can eventually exhaust its disk space.
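A common workaround for container images (a sketch only, not the official fix for this bug) is to point the AccessLogValve at the container's stdout and disable rotation, so access log entries flow through Docker and on to fluentd/EFK like the rest of the container output:

<!-- Hypothetical workaround: write access log entries to /dev/stdout so  -->
<!-- they appear in `oc logs` / `docker logs` and are picked up by        -->
<!-- fluentd. rotatable="false" stops Tomcat from appending a date stamp, -->
<!-- so directory + prefix + suffix resolve to exactly /dev/stdout.       -->
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="/dev"
       prefix="stdout" suffix=""
       rotatable="false"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" />

This also sidesteps the rotation concern, since no file accumulates inside the container.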

Expected results:


Additional info:

Comment 7 Eric Rich 2016-07-19 15:42:35 UTC
*** Bug 1342408 has been marked as a duplicate of this bug. ***

