Bug 1739462 - No logs are received when CDM_UNDEFINED_TO_STRING=true
Summary: No logs are received when CDM_UNDEFINED_TO_STRING=true
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 4.2.0
Assignee: Rich Megginson
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-08-09 11:25 UTC by Anping Li
Modified: 2019-08-28 07:09 UTC (History)
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-08-28 07:09:31 UTC
Target Upstream Version:
Embargoed:


Attachments
ds_rsyslog configuration (8.17 KB, text/plain)
2019-08-09 11:26 UTC, Anping Li
rsyslog debug logs (1.80 MB, application/gzip)
2019-08-09 11:26 UTC, Anping Li

Description Anping Li 2019-08-09 11:25:41 UTC
Description of problem:
No log is sent to ES when CDM_UNDEFINED_TO_STRING=true


Version-Release number of selected component (if applicable):
4.2 rsyslog

How reproducible:
always

Steps to Reproduce:
1. oc new-project json2
   oc new-app --docker-image=docker.io/chunyunchen/log4j2-ex:latest
2. oc set env ds/fluentd MERGE_JSON_LOG=true, CDM_UNDEFINED_TO_STRING=true
3. es_util '--query=_cat/indices?v'
green  open   project.json2.f4e19c79-ba91-11e9-a0b7-0288b24e4984.2019.08.09 59dTIst6QRyX7TrAIQ5eDw   1   0          0            0       162b           162b
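
To confirm whether any documents actually reached the index, the per-index document count can also be checked (a hedged example; it assumes the same es_util helper used in step 3, run from inside an Elasticsearch pod, and the index pattern is illustrative):

es_util '--query=project.json2.*/_count?pretty'
# With logs flowing this returns a non-zero "count"; in this bug it stays at 0.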


Actual results:
No logs in index  project.json2.f4e19c79-ba91-11e9-a0b7-0288b24e4984.2019.08.09

Expected results:
The logs should be sent to project.json2.f4e19c79-ba91-11e9-a0b7-0288b24e4984.2019.08.09

Additional info:

Comment 1 Anping Li 2019-08-09 11:26:29 UTC
Created attachment 1602140 [details]
ds_rsyslog configuration

Comment 2 Anping Li 2019-08-09 11:26:57 UTC
Created attachment 1602141 [details]
rsyslog debug logs

Comment 3 Rich Megginson 2019-08-09 14:13:39 UTC
I think https://github.com/openshift/origin-aggregated-logging/pull/1721 fixes this BZ as well.

Comment 7 Anping Li 2019-08-15 15:05:24 UTC
get stuck in openshift/ose-logging-rsyslog/images/v4.2.0-201908131437

Comment 8 Rich Megginson 2019-08-15 15:28:12 UTC
(In reply to Anping Li from comment #7)
> get stuck in openshift/ose-logging-rsyslog/images/v4.2.0-201908131437

I don't understand - how is this different than https://bugzilla.redhat.com/show_bug.cgi?id=1735530#c7 ?

Comment 9 Anping Li 2019-08-15 15:44:19 UTC
The app created by oc new-app --docker-image=docker.io/chunyunchen/log4j2-ex:latest generates logs in a different format. I am not sure if the JSON format causes this issue.

Comment 10 Anping Li 2019-08-15 15:47:09 UTC
Note: the docs.count growth stopped for all indices.

Comment 11 Anping Li 2019-08-15 15:51:07 UTC
elasticsearch.log

[2019-08-15T15:50:28,007][DEBUG][o.e.a.b.TransportShardBulkAction] [elasticsearch-cdm-yjd6pz9i-1] [project.json2.0e517c49-bf68-11e9-907d-026324a9d462.2019.08.15][1] failed to execute bulk item (create) BulkShardRequest [[project.json2.0e517c49-bf68-11e9-907d-026324a9d462.2019.08.15][1]] containing [index {[project.json2.0e517c49-bf68-11e9-907d-026324a9d462.2019.08.15][com.redhat.viaq.common][B78E8D65E5F541489A1DDD0186389AE9], source[{"":"{\"@timestamp\":\"2019-08-15T15:50:28.001163881+00:00\",\"docker\":{\"container_id\":\"4433d0d4427e741c986304563db4d538f3557bf138c8f398b44353226fd154d5\"},\"file\":\"\/var\/log\/containers\/log4j2-ex-1-nkw9b_json2_log4j2-ex-4433d0d4427e741c986304563db4d538f3557bf138c8f398b44353226fd154d5.log\",\"hostname\":\"ip-10-0-134-247.ap-southeast-2.compute.internal\",\"kubernetes\":{\"container_name\":\"log4j2-ex\",\"creation_timestamp\":\"2019-08-15T14:22:04Z\",\"host\":\"ip-10-0-134-247.ap-southeast-2.compute.internal\",\"labels\":{\"app\":\"log4j2-ex\",\"deployment\":\"log4j2-ex-1\",\"deploymentconfig\":\"log4j2-ex\"},\"master_url\":\"https:\/\/kubernetes.default.svc.cluster.local:443\",\"namespace_id\":\"0e517c49-bf68-11e9-907d-026324a9d462\",\"namespace_name\":\"json2\",\"pod_id\":\"228c7d00-bf68-11e9-88f7-0ae54a8bad22\",\"pod_name\":\"log4j2-ex-1-nkw9b\"},\"level\":\"unknown\",\"message\":\"{\\\"@version\\\":1,\\\"@timestamp\\\":\\\"2019-08-15T11:50:28.000-0400\\\",\\\"sequence\\\":5236,\\\"loggerClassName\\\":\\\"org.jboss.logmanager.Logger\\\",\\\"loggerName\\\":\\\"stdout\\\",\\\"level\\\":\\\"INFO\\\",\\\"threadName\\\":\\\"EJB default - 8\\\",\\\"message\\\":\\\"timer: Hello from control: 1565884228000\\\",\\\"threadId\\\":149,\\\"mdc\\\":{},\\\"ndc\\\":\\\"\\\",\\\"log-handler\\\":\\\"CONSOLE\\\"}\",\"offset\":\"1830533\",\"pipeline_metadata\":{\"collector\":{\"inputname\":\"imfile\",\"ipaddr4\":\"10.0.134.247\",\"name\":\"rsyslog\",\"original_raw_message\":\"{\\\"@version\\\":1,\\\"@timestamp\\\":\\\"2019-08-15T11:50:28.000-0400\\\",\\\"sequence\\\":5236,\\\"loggerClassName\\\":\\\"org.jboss.logmanager.Logger\\\",\\\"loggerName\\\":\\\"stdout\\\",\\\"level\\\":\\\"INFO\\\",\\\"threadName\\\":\\\"EJB default - 8\\\",\\\"message\\\":\\\"timer: Hello from control: 1565884228000\\\",\\\"threadId\\\":149,\\\"mdc\\\":{},\\\"ndc\\\":\\\"\\\",\\\"log-handler\\\":\\\"CONSOLE\\\"}\",\"received_at\":\"2019-08-15T15:50:28.001254+00:00\",\"version\":\"8.37.0-9.el7 \"}}}"}]}]
org.elasticsearch.index.mapper.MapperParsingException: failed to parse
	at org.elasticsearch.index.mapper.DocumentParser.wrapInMapperParsingException(DocumentParser.java:176) ~[elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:69) ~[elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:277) ~[elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:530) ~[elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.index.shard.IndexShard.prepareIndexOnPrimary(IndexShard.java:507) ~[elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.prepareIndexOperationOnPrimary(TransportShardBulkAction.java:458) ~[elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:466) ~[elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:145) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:114) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:69) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:975) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:944) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:345) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:270) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:924) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:921) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:151) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1659) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:933) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.action.support.replication.TransportReplicationAction.access$500(TransportReplicationAction.java:92) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:291) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:266) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:248) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at com.floragunn.searchguard.ssl.transport.SearchGuardSSLRequestHandler.messageReceivedDecorate(SearchGuardSSLRequestHandler.java:178) [search-guard-ssl-5.6.13.23-redhat-1.jar:5.6.13.23-redhat-1]
	at com.floragunn.searchguard.transport.SearchGuardRequestHandler.messageReceivedDecorate(SearchGuardRequestHandler.java:107) [search-guard-5-5.6.13.19-2-redhat-1.jar:?]
	at com.floragunn.searchguard.ssl.transport.SearchGuardSSLRequestHandler.messageReceived(SearchGuardSSLRequestHandler.java:92) [search-guard-ssl-5.6.13.23-redhat-1.jar:5.6.13.23-redhat-1]
	at com.floragunn.searchguard.SearchGuardPlugin$5$1.messageReceived(SearchGuardPlugin.java:493) [search-guard-5-5.6.13.19-2-redhat-1.jar:?]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:662) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:675) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_222]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_222]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]
Caused by: java.lang.IllegalArgumentException: object field starting or ending with a [.] makes object resolution ambiguous: []
	at org.elasticsearch.index.mapper.DocumentParser.splitAndValidatePath(DocumentParser.java:183) ~[elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.index.mapper.DocumentParser.getMapper(DocumentParser.java:938) ~[elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.index.mapper.DocumentParser.parseValue(DocumentParser.java:589) ~[elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:396) ~[elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:373) ~[elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.index.mapper.DocumentParser.internalParseDocument(DocumentParser.java:93) ~[elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:66) ~[elasticsearch-5.6.13.redhat-1.jar:5.6.13.redhat-1]
	... 34 more
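
The bulk item is rejected because the whole record was serialized under an empty field name ("": "{...}" in the source shown above), which the Elasticsearch DocumentParser refuses to map. A minimal reproduction sketch against a plain, unsecured Elasticsearch 5.x test instance (hypothetical index name, not this SearchGuard-protected cluster); it should fail with a mapper_parsing_exception similar to the one in this trace:

curl -XPOST 'http://localhost:9200/test-index/doc' \
     -H 'Content-Type: application/json' \
     -d '{"": "some value"}'
# The empty field name trips the same splitAndValidatePath check
# ("object field starting or ending with a [.] makes object resolution ambiguous: []").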

Comment 12 Noriko Hosoi 2019-08-15 16:57:13 UTC
(In reply to Anping Li from comment #9)
> The app created by oc new-app --docker-image=docker.io/chunyunchen/log4j2-ex:latest
> generates logs in a different format. I am not sure if the JSON format causes
> this issue.

Thanks for the test log generator info.  The log /var/log/containers/log4j2-ex-1-5925p_openshift-logging_log4j2-ex-121b086772f2b0908504c50d203808d897bd0e0653f95c30831806c21c65b3ac.log looks interesting...

It started with this kind of java info:
2019-08-15T16:22:39.354769681+00:00 stderr F WARNING: Environment variable OPENSHIFT_KUBE_PING_NAMESPACE undefined. Clustering will be unavailable. Please refer to the documentation for configuration.
2019-08-15T16:22:39.355854189+00:00 stderr F WARNING: No password defined for JGroups cluster. AUTH protocol will be disabled. Please define JGROUPS_CLUSTER_PASSWORD.
2019-08-15T16:22:39.366737257+00:00 stdout F Running jboss-eap-6/eap64-openshift image, version 1.3-13
2019-08-15T16:22:39.514824367+00:00 stdout F =========================================================================
2019-08-15T16:22:39.514824367+00:00 stdout F 
2019-08-15T16:22:39.515708194+00:00 stdout F   JBoss Bootstrap Environment
2019-08-15T16:22:39.515708194+00:00 stdout F 

with these tricky ESC (escape) characters in it...
2019-08-15T16:22:59.078473162+00:00 stdout F Jolokia: Agent started with URL https://10.131.0.44:8778/jolokia/
2019-08-15T16:22:59.287898160+00:00 stdout F ESC[0m12:22:59,281 INFO  [org.jboss.modules] (main) JBoss Modules version 1.3.7.Final-redhat-1
2019-08-15T16:22:59.287898160+00:00 stdout P ESC[0m
2019-08-15T16:22:59.475081880+00:00 stdout F ESC[0m12:22:59,472 INFO  [org.jboss.msc] (main) JBoss MSC version 1.1.5.Final-redhat-1
2019-08-15T16:22:59.475081880+00:00 stdout P ESC[0m
2019-08-15T16:22:59.544556517+00:00 stdout F ESC[0m12:22:59,544 INFO  [org.jboss.as] (MSC service thread 1-3) JBAS015899: JBoss EAP 6.4.4.GA (AS 7.5.4.Final-redhat-4) starting

then it starts logging this JSON-format test data...
2019-08-15T16:23:01.044318945+00:00 stdout P ESC[0m
2019-08-15T16:23:01.265916213+00:00 stdout F {"@version":1,"@timestamp":"2019-08-15T12:23:01.260-0400","sequence":12,"loggerClassName":"org.jboss.as.connector.logging.ConnectorLogger_$logger","loggerName":"org.jboss.as.connector.subsystems.datasources","level":"INFO","threadName":"ServerService Thread Pool -- 29","message":"JBAS010403: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3)","threadId":55,"mdc":{},"ndc":"","log-handler":"CONSOLE"}
2019-08-15T16:23:01.289948399+00:00 stdout F {"@version":1,"@timestamp":"2019-08-15T12:23:01.288-0400","sequence":13,"loggerClassName":"org.jboss.as.clustering.infinispan.InfinispanLogger_$logger","loggerName":"org.jboss.as.clustering.infinispan","level":"INFO","threadName":"ServerService Thread Pool -- 33","message":"JBAS010280: Activating Infinispan subsystem.","threadId":59,"mdc":{},"ndc":"","log-handler":"CONSOLE"}

Maybe MERGE_JSON_LOG=true is not appropriate for this JSON-format log?
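
For context, MERGE_JSON_LOG=true asks the collector to parse a JSON message body and merge its keys into the top level of the ViaQ record. A simplified, hypothetical illustration using a few fields from the lines above (real records carry many more fields, and exact handling depends on the CDM_* settings):

Without merging, the JSON stays inside the message string:
  {"level": "unknown", "message": "{\"level\":\"INFO\",\"loggerName\":\"stdout\",\"sequence\":12}"}
With MERGE_JSON_LOG=true, the parsed keys become top-level fields:
  {"level": "INFO", "loggerName": "stdout", "sequence": 12}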

Comment 13 Anping Li 2019-08-16 02:37:06 UTC
qitang pointed out that there is a configuration error, 'MERGE_JSON_LOG=true,' (note the trailing comma). After correcting it, ES can receive logs. Could we force the values to 'true' or 'false'?

cat /var/lib/rsyslog.pod/undefined.json 
{"CDM_USE_UNDEFINED" : false, "CDM_DEFAULT_KEEP_FIELDS" : "CEE,time,@timestamp,aushape,ci_job,collectd,docker,fedora-ci,file,foreman,geoip,hostname,ipaddr4,ipaddr6,kubernetes,level,message,namespace_name,namespace_uuid,offset,openstack,ovirt,pid,pipeline_metadata,rsyslog,service,systemd,tags,testcase,tlog,viaq_msg_id", "CDM_EXTRA_KEEP_FIELDS" : "", "CDM_UNDEFINED_NAME" : "undefined", "CDM_KEEP_EMPTY_FIELDS" : "message", "CDM_UNDEFINED_TO_STRING" : true, "CDM_UNDEFINED_DOT_REPLACE_CHAR" : "UNUSED", "CDM_UNDEFINED_MAX_NUM_FIELDS" : -1, "MERGE_JSON_LOG" : true,, "UNDEFINED_DEBUG" : false}

Comment 14 Rich Megginson 2019-08-16 02:48:41 UTC
(In reply to Anping Li from comment #13)
> qitang pointed out that there is a configuration error, 'MERGE_JSON_LOG=true,'
> (note the trailing comma). After correcting it, ES can receive logs. Could we
> force the values to 'true' or 'false'?

Sure.  File a bug to have both fluentd and rsyslog (and elasticsearch, kibana, curator) perform input validation on _all_ environment variable values.  That will take quite a bit of work . . .
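
A minimal sketch of the kind of input validation discussed here (a hypothetical startup-script snippet, not the actual collector code), restricting a boolean environment variable to 'true' or 'false':

case "${MERGE_JSON_LOG:-false}" in
  true|false) ;;  # accepted values
  *) echo "MERGE_JSON_LOG must be 'true' or 'false', got '${MERGE_JSON_LOG}'" >&2
     exit 1 ;;
esac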

