Bug 1808053 - [IPv6] Elasticsearch unable to perform oauth workflow because of hostname verification
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.5.0
Assignee: Periklis Tsirakidis
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks: 1808055
 
Reported: 2020-02-27 17:35 UTC by Periklis Tsirakidis
Modified: 2020-07-13 17:22 UTC (History)
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-13 17:21:52 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-logging-operator pull 393 0 None closed Bug 1808053: Grab only public IPv6 fluentd address from node 2021-02-15 12:34:32 UTC
Github openshift elasticsearch-operator pull 249 0 None closed Bug 1808053: Define default master url for fabric8io k8s client 2021-02-15 12:34:32 UTC
Red Hat Product Errata RHBA-2020:2409 0 None None None 2020-07-13 17:22:13 UTC
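
For context on the two fixes above: on an IPv6 cluster, any URL built from a raw API address must bracket the address per RFC 2732, otherwise URL parsing and TLS hostname verification operate on a truncated host. Below is a minimal Go sketch of how a default master URL can be assembled from the standard in-cluster environment variables — an illustration of the shape of the fix in elasticsearch-operator PR 249, not its actual code:

package main

import (
	"fmt"
	"net"
	"os"
)

// defaultMasterURL assembles the in-cluster API endpoint.
// net.JoinHostPort adds the brackets an IPv6 literal requires,
// producing "https://[fd02::1]:443" rather than the unparsable
// "https://fd02::1:443".
func defaultMasterURL() string {
	host := os.Getenv("KUBERNETES_SERVICE_HOST") // e.g. "fd02::1" on IPv6 clusters
	port := os.Getenv("KUBERNETES_SERVICE_PORT") // e.g. "443"
	return "https://" + net.JoinHostPort(host, port)
}

func main() {
	fmt.Println(defaultMasterURL())
}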

Description Periklis Tsirakidis 2020-02-27 17:35:05 UTC
This bug was initially created as a copy of Bug #1806995

I am copying this bug because: 



Description of problem:
There is no default index pattern in Kibana for either kubeadmin or non-kubeadmin users.

Version-Release number of selected component (if applicable):
Bare-metal cluster deployed with IPv6
4.3.0-0.nightly-2020-02-17-205936-ipv6.1
clusterlogging.4.3.4-202002241021
elasticsearch-operator.4.3.4-202002241021


How reproducible:
Always

Steps to Reproduce:
1. Deploy cluster logging on an IPv6 cluster.
2. Log in to Kibana as kubeadmin.
3. Log in to Kibana as a non-kubeadmin user.

Actual results:
No default index pattern in Kibana.

Expected results:
Pod logs can be displayed in Kibana.

Additional info:

Comment 3 Anping Li 2020-03-04 14:36:52 UTC
Tested with openshift/ose-elasticsearch-operator:202003021217.

1. Fluentd logs rejection errors:
2020-03-04 14:22:18 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="400 - Rejected by Elasticsearch" location=nil tag="journal.system" time=2020-03-02 17:56:28.847640000 +0000 record={"_STREAM_ID"=>"3e8efe6791964d27b4c05630c6f804d5", "_SYSTEMD_INVOCATION_ID"=>"d97d864ae0e142158a8d4f227ff5c9ec", "systemd"=>{"t"=>{"BOOT_ID"=>"d519df282bb14d66939a9bcd540048fc", "CAP_EFFECTIVE"=>"3fffffffff", "CMDLINE"=>"/usr/bin/hyperkube kubelet --config=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig --kubeconfig=/var/lib/kubelet/kubeconfig --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --node-labels=node-role.kubernetes.io/master,node.openshift.io/os_id=rhcos --node-ip=fd2e:6f44:5dd8:c956::114 --address=fd2e:6f44:5dd8:c956::114 --minimum-container-ttl-duration=6m0s --cloud-provider= --volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --v=3", "COMM"=>"hyperkube", "EXE"=>"/usr/bin/hyperkube", "GID"=>"0", "MACHINE_ID"=>"c44bc23a7e6a4668915dc7f7bf59ee44", "PID"=>"3139265", "SELINUX_CONTEXT"=>"system_u:system_r:unconfined_service_t:s0", "STREAM_ID"=>"3e8efe6791964d27b4c05630c6f804d5", "SYSTEMD_CGROUP"=>"/system.slice/kubelet.service", "SYSTEMD_INVOCATION_ID"=>"d97d864ae0e142158a8d4f227ff5c9ec", "SYSTEMD_SLICE"=>"system.slice", "SYSTEMD_UNIT"=>"kubelet.service", "TRANSPORT"=>"stdout", "UID"=>"0"}, "u"=>{"SYSLOG_FACILITY"=>"3", "SYSLOG_IDENTIFIER"=>"hyperkube"}}, "level"=>"info", "message"=>"MASTER_IP=\"fd2e:6f44:5dd8:c956::114\"", "hostname"=>"master-2.ocp-edge-cluster.qe.lab.redhat.com", "pipeline_metadata"=>{"collector"=>{"ipaddr4"=>"fd2e:6f44:5dd8:c956::114", "ipaddr6"=>"fd01::3:90be:d3ff:fe00:a6\nfe80::6cb7:28ff:fef5:2378", "inputname"=>"fluent-plugin-systemd", "name"=>"fluentd", "received_at"=>"2020-03-04T10:59:20.457506+00:00", "version"=>"1.7.4 1.6.0"}}, "@timestamp"=>"2020-03-02T17:56:28.847640+00:00", "viaq_index_name"=>".operations.2020.03.02", "viaq_msg_id"=>"NDU2MWMzNDctZWE3Mi00MzlmLWEyM2QtY2JmYzMzZDRkZjgy"}
2020-03-04 14:22:18 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="400 - Rejected by Elasticsearch" location=nil tag="journal.system" time=2020-03-02 17:56:28.847640000 +0000 record={"_STREAM_ID"=>"3e8efe6791964d27b4c05630c6f804d5", "_SYSTEMD_INVOCATION_ID"=>"d97d864ae0e142158a8d4f227ff5c9ec", "systemd"=>{"t"=>{"BOOT_ID"=>"d519df282bb14d66939a9bcd540048fc", "CAP_EFFECTIVE"=>"3fffffffff", "CMDLINE"=>"/usr/bin/hyperkube kubelet --config=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig --kubeconfig=/var/lib/kubelet/kubeconfig --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --node-labels=node-role.kubernetes.io/master,node.openshift.io/os_id=rhcos --node-ip=fd2e:6f44:5dd8:c956::114 --address=fd2e:6f44:5dd8:c956::114 --minimum-container-ttl-duration=6m0s --cloud-provider= --volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --v=3", "COMM"=>"hyperkube", "EXE"=>"/usr/bin/hyperkube", "GID"=>"0", "MACHINE_ID"=>"c44bc23a7e6a4668915dc7f7bf59ee44", "PID"=>"3139265", "SELINUX_CONTEXT"=>"system_u:system_r:unconfined_service_t:s0", "STREAM_ID"=>"3e8efe6791964d27b4c05630c6f804d5", "SYSTEMD_CGROUP"=>"/system.slice/kubelet.service", "SYSTEMD_INVOCATION_ID"=>"d97d864ae0e142158a8d4f227ff5c9ec", "SYSTEMD_SLICE"=>"system.slice", "SYSTEMD_UNIT"=>"kubelet.service", "TRANSPORT"=>"stdout", "UID"=>"0"}, "u"=>{"SYSLOG_FACILITY"=>"3", "SYSLOG_IDENTIFIER"=>"hyperkube"}}, "level"=>"info", "message"=>"if [[ \"${K8S_NODE_IP}\" == \"${MASTER_IP}\" ]]; then", "hostname"=>"master-2.ocp-edge-cluster.qe.lab.redhat.com", "pipeline_metadata"=>{"collector"=>{"ipaddr4"=>"fd2e:6f44:5dd8:c956::114", "ipaddr6"=>"fd01::3:90be:d3ff:fe00:a6\nfe80::6cb7:28ff:fef5:2378", "inputname"=>"fluent-plugin-systemd", "name"=>"fluentd", "received_at"=>"2020-03-04T10:59:20.458968+00:00", "version"=>"1.7.4 1.6.0"}}, "@timestamp"=>"2020-03-02T17:56:28.847640+00:00", "viaq_index_name"=>".operations.2020.03.02", "viaq_msg_id"=>"NjAwZDAzOTMtYzA0Ny00NWM2LWFkOGMtN2VkMWU4YWZmMjYw"}
2. Elasticsearch rejects the collector's ipaddr6 field because the newline-joined value 'fd01::6:90be:d3ff:fe00:aa\nfe80::a81a:90ff:fe1c:9ee2' is not an IP string literal (see the sketch after this list):

[2020-03-04T14:29:38,415][DEBUG][o.e.a.b.TransportShardBulkAction] [elasticsearch-cdm-ammnz9zb-1] [.operations.2020.03.03][0] failed to execute bulk item (create) BulkShardRequest [[.operations.2020.03.03][0]] containing [4155] requests
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [pipeline_metadata.collector.ipaddr6]
	at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:298) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:468) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.mapper.DocumentParser.parseValue(DocumentParser.java:591) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:396) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:373) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:465) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:484) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:383) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:373) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:465) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:484) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:383) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:373) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.mapper.DocumentParser.internalParseDocument(DocumentParser.java:93) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:66) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:277) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:530) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.shard.IndexShard.prepareIndexOnPrimary(IndexShard.java:507) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.prepareIndexOperationOnPrimary(TransportShardBulkAction.java:458) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:466) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:145) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:114) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:69) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:975) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:944) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:345) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:270) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:924) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:921) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:151) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1659) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:933) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.action.support.replication.TransportReplicationAction.access$500(TransportReplicationAction.java:92) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:291) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:266) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:248) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at com.floragunn.searchguard.ssl.transport.SearchGuardSSLRequestHandler.messageReceivedDecorate(SearchGuardSSLRequestHandler.java:178) [search-guard-ssl-5.6.16.23-redhat-1.jar:5.6.16.23-redhat-1]
	at com.floragunn.searchguard.transport.SearchGuardRequestHandler.messageReceivedDecorate(SearchGuardRequestHandler.java:107) [search-guard-5-5.6.16.19-3-redhat-1.jar:?]
	at com.floragunn.searchguard.ssl.transport.SearchGuardSSLRequestHandler.messageReceived(SearchGuardSSLRequestHandler.java:92) [search-guard-ssl-5.6.16.23-redhat-1.jar:5.6.16.23-redhat-1]
	at com.floragunn.searchguard.SearchGuardPlugin$5$1.messageReceived(SearchGuardPlugin.java:493) [search-guard-5-5.6.16.19-3-redhat-1.jar:?]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:662) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:675) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_242]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_242]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_242]
Caused by: java.lang.IllegalArgumentException: 'fd01::6:90be:d3ff:fe00:aa
fe80::a81a:90ff:fe1c:9ee2' is not an IP string literal.
	at org.elasticsearch.common.network.InetAddresses.forString(InetAddresses.java:333) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.mapper.IpFieldMapper.parseCreateField(IpFieldMapper.java:377) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:287) ~[elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	... 48 more

3. The Kibana plugin status is red.
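
The ipaddr6 parse failure in item 2 happens because every node address, including the link-local fe80:: one, is joined with a newline into a single field. cluster-logging-operator PR 393 restricts the collector to the public address only. A minimal Go sketch of that selection logic (an illustrative assumption, not the operator's code):

package main

import (
	"fmt"
	"net"
)

// publicIPv6 returns the first global-unicast IPv6 address,
// skipping link-local (fe80::/10), loopback, and IPv4 entries.
func publicIPv6(addrs []string) (string, bool) {
	for _, a := range addrs {
		ip := net.ParseIP(a)
		if ip == nil || ip.To4() != nil {
			continue // not an IPv6 literal
		}
		if ip.IsLinkLocalUnicast() || ip.IsLoopback() {
			continue
		}
		if ip.IsGlobalUnicast() {
			return a, true
		}
	}
	return "", false
}

func main() {
	// The two addresses seen in the rejected record above.
	addrs := []string{"fe80::a81a:90ff:fe1c:9ee2", "fd01::6:90be:d3ff:fe00:aa"}
	if ip, ok := publicIPv6(addrs); ok {
		fmt.Println(ip) // fd01::6:90be:d3ff:fe00:aa — a single, parseable value
	}
}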

Comment 6 Anping Li 2020-03-26 16:13:09 UTC
Getting a 500 error:

$ oc logs kibana-686f885f48-6hd5j -c kibana
#The following values dynamically added from environment variable overrides:
Using NODE_OPTIONS: '--max_old_space_size=368' Memory setting is in MB
{"type":"log","@timestamp":"2020-03-26T15:29:46Z","tags":["error","elasticsearch","admin"],"pid":164,"message":"Request error, retrying\nHEAD https://elasticsearch.openshift-logging.svc.cluster.local:9200/ => getaddrinfo ENOTFOUND elasticsearch.openshift-logging.svc.cluster.local elasticsearch.openshift-logging.svc.cluster.local:9200"}
{"type":"log","@timestamp":"2020-03-26T15:29:46Z","tags":["status","plugin:elasticsearch.1","error"],"pid":164,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-03-26T15:29:59Z","tags":["status","plugin:elasticsearch.1","error"],"pid":164,"state":"red","message":"Status changed from red to red - Request Timeout after 3000ms","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2020-03-26T15:30:02Z","tags":["status","plugin:elasticsearch.1","error"],"pid":164,"state":"red","message":"Status changed from red to red - Unable to connect to Elasticsearch.","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
{"type":"log","@timestamp":"2020-03-26T15:30:19Z","tags":["status","plugin:elasticsearch.1","error"],"pid":164,"state":"red","message":"Status changed from red to red - Request Timeout after 3000ms","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2020-03-26T15:30:27Z","tags":["status","plugin:elasticsearch.1","error"],"pid":164,"state":"red","message":"Status changed from red to red - Unable to connect to Elasticsearch.","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
{"type":"log","@timestamp":"2020-03-26T15:30:33Z","tags":["status","plugin:elasticsearch.1","error"],"pid":164,"state":"red","message":"Status changed from red to red - Request Timeout after 3000ms","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2020-03-26T15:31:00Z","tags":["listening","info"],"pid":164,"message":"Server running at http://localhost:5601"}
{"type":"error","@timestamp":"2020-03-26T16:05:38Z","tags":[],"pid":164,"level":"error","error":{"message":"Internal Server Error","name":"Error","stack":"Internal Server Error :: {\"path\":\"/.kibana/doc/config%3A6.8.1\",\"query\":{},\"statusCode\":500,\"response\":\"{\\\"code\\\":500,\\\"message\\\":\\\"Internal Error\\\",\\\"error\\\":{}}\\n\"}\n    at respond (/opt/app-root/src/node_modules/elasticsearch/src/lib/transport.js:308:15)\n    at checkRespForFailure (/opt/app-root/src/node_modules/elasticsearch/src/lib/transport.js:267:7)\n    at HttpConnector.<anonymous> (/opt/app-root/src/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n    at IncomingMessage.wrapper (/opt/app-root/src/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n    at IncomingMessage.emit (events.js:203:15)\n    at endReadableNT (_stream_readable.js:1145:12)\n    at process._tickCallback (internal/process/next_tick.js:63:19)"},"url":{"protocol":null,"slashes":null,"auth":null,"host":null,"port":null,"hostname":null,"hash":null,"search":null,"query":{},"pathname":"/app/kibana","path":"/app/kibana","href":"/app/kibana"},"message":"Internal Server Error"}
{"type":"error","@timestamp":"2020-03-26T16:08:52Z","tags":[],"pid":164,"level":"error","error":{"message":"Internal Server Error","name":"Error","stack":"Internal Server Error :: {\"path\":\"/.kibana/doc/config%3A6.8.1\",\"query\":{},\"statusCode\":500,\"response\":\"{\\\"code\\\":500,\\\"message\\\":\\\"Internal Error\\\",\\\"error\\\":{}}\\n\"}\n    at respond (/opt/app-root/src/node_modules/elasticsearch/src/lib/transport.js:308:15)\n    at checkRespForFailure (/opt/app-root/src/node_modules/elasticsearch/src/lib/transport.js:267:7)\n    at HttpConnector.<anonymous> (/opt/app-root/src/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n    at IncomingMessage.wrapper (/opt/app-root/src/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n    at IncomingMessage.emit (events.js:203:15)\n    at endReadableNT (_stream_readable.js:1145:12)\n    at process._tickCallback (internal/process/next_tick.js:63:19)"},"url":{"protocol":null,"slashes":null,"auth":null,"host":null,"port":null,"hostname":null,"hash":null,"search":null,"query":{},"pathname":"/app/kibana","path":"/app/kibana","href":"/app/kibana"},"message":"Internal Server Error"}

$ oc logs kibana-686f885f48-6hd5j -c kibana-proxy
2020/03/26 15:29:42 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:5601/"
2020/03/26 15:29:42 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-logging:kibana
2020/03/26 15:29:42 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled
2020/03/26 15:29:42 http.go:61: HTTP: listening on 127.0.0.1:4180
2020/03/26 15:29:42 http.go:107: HTTPS: listening on [::]:3000
I0326 15:29:42.347541       1 dynamic_serving_content.go:129] Starting serving::/secret/server-cert::/secret/server-key
2020/03/26 16:02:07 provider.go:574: Performing OAuth discovery against https://[fd02::1]/.well-known/oauth-authorization-server
2020/03/26 16:02:07 provider.go:614: 200 GET https://[fd02::1]/.well-known/oauth-authorization-server {
  "issuer": "https://oauth-openshift.apps.ocp-edge-cluster.qe.lab.redhat.com",
  "authorization_endpoint": "https://oauth-openshift.apps.ocp-edge-cluster.qe.lab.redhat.com/oauth/authorize",
  "token_endpoint": "https://oauth-openshift.apps.ocp-edge-cluster.qe.lab.redhat.com/oauth/token",
  "scopes_supported": [
    "user:check-access",
    "user:full",
    "user:info",
    "user:list-projects",
    "user:list-scoped-projects"
  ],
  "response_types_supported": [
    "code",
    "token"
  ],
  "grant_types_supported": [
    "authorization_code",
    "implicit"
  ],
  "code_challenge_methods_supported": [
    "plain",
    "S256"
  ]
}
2020/03/26 16:04:23 provider.go:574: Performing OAuth discovery against https://[fd02::1]/.well-known/oauth-authorization-server
2020/03/26 16:04:23 provider.go:614: 200 GET https://[fd02::1]/.well-known/oauth-authorization-server {
  "issuer": "https://oauth-openshift.apps.ocp-edge-cluster.qe.lab.redhat.com",
  "authorization_endpoint": "https://oauth-openshift.apps.ocp-edge-cluster.qe.lab.redhat.com/oauth/authorize",
  "token_endpoint": "https://oauth-openshift.apps.ocp-edge-cluster.qe.lab.redhat.com/oauth/token",
  "scopes_supported": [
    "user:check-access",
    "user:full",
    "user:info",
    "user:list-projects",
    "user:list-scoped-projects"
  ],
  "response_types_supported": [
    "code",
    "token"
  ],
  "grant_types_supported": [
    "authorization_code",
    "implicit"
  ],
  "code_challenge_methods_supported": [
    "plain",
    "S256"
  ]
}
2020/03/26 16:05:36 provider.go:574: Performing OAuth discovery against https://[fd02::1]/.well-known/oauth-authorization-server
2020/03/26 16:05:37 provider.go:614: 200 GET https://[fd02::1]/.well-known/oauth-authorization-server {
  "issuer": "https://oauth-openshift.apps.ocp-edge-cluster.qe.lab.redhat.com",
  "authorization_endpoint": "https://oauth-openshift.apps.ocp-edge-cluster.qe.lab.redhat.com/oauth/authorize",
  "token_endpoint": "https://oauth-openshift.apps.ocp-edge-cluster.qe.lab.redhat.com/oauth/token",
  "scopes_supported": [
    "user:check-access",
    "user:full",
    "user:info",
    "user:list-projects",
    "user:list-scoped-projects"
  ],
  "response_types_supported": [
    "code",
    "token"
  ],
  "grant_types_supported": [
    "authorization_code",
    "implicit"
  ],
  "code_challenge_methods_supported": [
    "plain",
    "S256"
  ]
}
2020/03/26 16:05:37 provider.go:614: 200 GET https://[fd02::1]/apis/user.openshift.io/v1/users/~ {"kind":"User","apiVersion":"user.openshift.io/v1","metadata":{"name":"kube:admin","selfLink":"/apis/user.openshift.io/v1/users/kube%3Aadmin","creationTimestamp":null},"identities":null,"groups":["system:authenticated","system:cluster-admins"]}
2020/03/26 16:05:37 oauthproxy.go:675: [fd01:0:0:5::2]:49156 authentication complete Session{kube:admin token:true}
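
Note that the proxy only reaches the API because the IPv6 address is bracketed (https://[fd02::1]). Without brackets, Go's URL parser reads the trailing colon-digits as a port and the hostname gets mangled, so certificate hostname verification checks the wrong host. A standalone Go sketch of the difference (illustrative only):

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Unbracketed: parsing succeeds, but ":1" is taken as a port,
	// leaving a mangled hostname for TLS verification.
	bad, _ := url.Parse("https://fd02::1/.well-known/oauth-authorization-server")
	fmt.Println(bad.Hostname(), bad.Port()) // "fd02:" "1"

	// Bracketed (RFC 2732), as in the proxy log: the whole address is
	// the host, which is what certificate IP SANs are checked against.
	good, _ := url.Parse("https://[fd02::1]/.well-known/oauth-authorization-server")
	fmt.Println(good.Hostname()) // "fd02::1"
}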

Comment 8 Jeff Cantrill 2020-04-06 13:51:09 UTC
@anli:

Moving this back to MODIFIED for verification, as the issue in #c6 is not related to ingestion of an IPv6 field:

{"type":"error","@timestamp":"2020-03-26T16:05:38Z","tags":[],"pid":164,"level":"error","error":{"message":"Internal Server Error","name":"Error","stack":"Internal Server Error :: {\"path\":\"/.kibana/doc/config%3A6.8.1\",\"query\":{},\"statusCode\":500,\"response\":\"{\\\"code\\\":500,\\\"message\\\":\\\"Internal Error\\\",\\\"error\\\":{}}\\n\"}\n    at respond (/opt/app-root/src/node_modules/elasticsearch/src/lib/transport.js:308:15)\n    at checkRespForFailure (/opt/app-root/src/node_modules/elasticsearch/src/lib/transport.js:267:7)\n    at HttpConnector.<anonymous> (/opt/app-root/src/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n    at IncomingMessage.wrapper (/opt/app-root/src/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n    at IncomingMessage.emit (events.js:203:15)\n    at endReadableNT (_stream_readable.js:1145:12)\n    at process._tickCallback (internal/process/next_tick.js:63:19)"},"url":{"protocol":null,"slashes":null,"auth":null,"host":null,"port":null,"hostname":null,"hash":null,"search":null,"query":{},"pathname":"/app/kibana","path":"/app/kibana","href":"/app/kibana"},"message":"Internal Server Error"}

but is related to Kibana usage and querying. We are in the process of finalizing permission issues related to using Kibana. Please verify the original issue so that we can unblock https://bugzilla.redhat.com/show_bug.cgi?id=1808055

Comment 10 Anping Li 2020-04-07 14:42:44 UTC
Moving to VERIFIED per comment 8.

Comment 12 errata-xmlrpc 2020-07-13 17:21:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

