Bug 1791231 - [IPv6] Applications should serve on address [::]:port
Summary: [IPv6] Applications should serve on address [::]:port
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 4.5.0
Assignee: Jeff Cantrill
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks: 1814851
 
Reported: 2020-01-15 09:43 UTC by Anping Li
Modified: 2020-05-04 11:24 UTC (History)
1 user

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1794801 1814851
Environment:
Last Closed: 2020-05-04 11:24:10 UTC
Target Upstream Version:


Attachments (Terms of Use)
The cluster elasticsearch couldn't be started (47.84 KB, application/gzip)
2020-01-15 16:23 UTC, Anping Li
Logs and resources (23.21 KB, application/gzip)
2020-01-30 09:15 UTC, Anping Li
elasticsearch.log (14.15 KB, text/plain)
2020-02-03 16:17 UTC, Anping Li
The elasticsearch logs when host=_local_ and _global_ (9.03 KB, application/gzip)
2020-03-16 11:56 UTC, Anping Li


Links
System ID Private Priority Status Summary Last Updated
Github openshift elasticsearch-operator pull 219 0 None closed Bug 1791231: Make addresses ipv6 compatible 2021-02-04 09:20:33 UTC
Github openshift elasticsearch-operator pull 232 0 None closed Bug 1791231: Set network.host to _site_ and let ES bind it 2021-02-04 09:20:33 UTC
Github openshift elasticsearch-operator pull 270 0 None closed Bug 1811867: Fix binding to ipv6 2021-02-04 09:20:33 UTC
Github openshift origin-aggregated-logging pull 1824 0 None closed Bug 1791231: Use hostname for utils in support of ipv6 2021-02-04 09:20:33 UTC
Red Hat Product Errata RHBA-2020:0581 0 None None None 2020-05-04 11:24:40 UTC

Description Anping Li 2020-01-15 09:43:04 UTC
Description of problem:

Some cluster logging applications serve on an IPv4 address. In IPv6 clusters, the applications should serve on an IPv6 address [xx::xx]:port.

For example:

$oc logs elasticsearch-cdm-b0tadi24-1-f86f8948d-8v257 -c proxy
2020/01/15 08:46:49 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-logging:elasticsearch
2020/01/15 08:46:49 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token
2020/01/15 08:46:49 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.
2020/01/15 08:46:49 oauthproxy.go:200: mapping path "/" => upstream "https://127.0.0.1:9200/"
2020/01/15 08:46:49 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-logging:elasticsearch
2020/01/15 08:46:49 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled
2020/01/15 08:46:49 oauthproxy.go:249: WARN: Configured to pass client specified bearer token upstream.
      Only use this option if you're sure that the upstream will not leak or abuse the given token.
      Bear in mind that the token could be a long lived token or hard to revoke.
2020/01/15 08:46:49 http.go:58: HTTP: listening on 127.0.0.1:4180
2020/01/15 08:46:49 http.go:96: HTTPS: listening on [::]:60000


Version-Release number of selected component (if applicable):

Cluster Version:  4.3.0-0.nightly-2020-01-09-234847-ipv6.

image: quay.io/openshift/origin-logging-elasticsearch5:latest
imageID: quay.io/openshift/origin-logging-elasticsearch5@sha256:13cec4582e1b359b52980a76e682418aa91c8379e376b3e4b94653accf3429ee
image: quay.io/openshift/origin-oauth-proxy:latest
imageID: quay.io/openshift/origin-oauth-proxy@sha256:9bc48eb62fe6cc492ffdd28d874dbe7d0ebe9d24a53e230864644393fdb759b9



Steps to Reproduce:
1. Deploy the Elasticsearch pod on an IPv6 cluster.
2. Deploy catalogsource
3. Deploy clusterlogging

Actual results:
2020/01/15 08:46:49 oauthproxy.go:200: mapping path "/" => upstream "https://127.0.0.1:9200/"
2020/01/15 08:46:49 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-logging:elasticsearch
2020/01/15 08:46:49 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled
2020/01/15 08:46:49 oauthproxy.go:249: WARN: Configured to pass client specified bearer token upstream.
      Only use this option if you're sure that the upstream will not leak or abuse the given token.
      Bear in mind that the token could be a long lived token or hard to revoke.
2020/01/15 08:46:49 http.go:58: HTTP: listening on 127.0.0.1:4180
2020/01/15 08:46:49 http.go:96: HTTPS: listening on [::]:60000


Expected results:
https://[::]:9200
2020/01/15 08:46:49 http.go:58: HTTP: listening on [::]:4180
2020/01/15 08:46:49 http.go:96: HTTPS: listening on [::]:60000

Comment 2 Anping Li 2020-01-15 16:23:58 UTC
Created attachment 1652494 [details]
The cluster elasticsearch couldn't be started

If the Elasticsearch node count is greater than 1, Elasticsearch cannot start.
From the logs, this appears to be caused by an IP address issue.

[2020-01-15T08:39:16,300][WARN ][o.e.d.z.ZenDiscovery     ] [elasticsearch-cdm-af2oqcim-1] failed to connect to master [{elasticsearch-cdm-af2oqcim-2}{UztA0h6fTb2N8ypq0SoffA}{TFpDhDfdQbeocFqtvD2RCQ}{127.0.0.1}{127.0.0.1:9300}], retrying...
org.elasticsearch.transport.ConnectTransportException: [elasticsearch-cdm-af2oqcim-2][127.0.0.1:9300] handshake failed. unexpected remote node {elasticsearch-cdm-af2oqcim-1}{t5KWT3aBR3aQ3fjgUJehnw}{uLNB84XRSmahu1M1a-73-g}{127.0.0.1}{127.0.0.1:9300}

Comment 4 Anping Li 2020-01-30 09:15:43 UTC
Created attachment 1656452 [details]
Logs and resources

Comment 6 Jeff Cantrill 2020-01-30 21:46:48 UTC
@anping,

If you put the cluster into the unmanaged state and then edit the elasticsearch configmap to set "network.host: _global_", the ES cluster will form.  Can you please investigate why the init is failing? I think this is what is keeping it from becoming 'ready'.
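For reference, the workaround amounts to a fragment like the following in the operator-managed elasticsearch.yml (a sketch only; the operator generates the real file, and _local_/_site_/_global_ are Elasticsearch's special network.host values for loopback, site-local, and globally routable addresses respectively):

```yaml
# elasticsearch.yml fragment (illustrative)
network:
  host: _global_   # bind to a globally routable address instead of loopback
```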

Comment 8 Anping Li 2020-02-03 16:17:00 UTC
Created attachment 1657374 [details]
elasticsearch.log

Comment 9 Jeff Cantrill 2020-02-12 16:06:39 UTC
Moving to ON_QA as it looks like it's already serving on IPv6:


```
[2020-02-12T16:01:03,240][INFO ][o.e.t.TransportService   ] [elasticsearch-cdm-i2wrrcxr-1] publish_address {10.129.2.14:9300}, bound_addresses {[::]:9300}
[2020-02-12T16:01:06,404][INFO ][c.f.s.h.SearchGuardHttpServerTransport] [elasticsearch-cdm-i2wrrcxr-1] publish_address {10.129.2.14:9200}, bound_addresses {[::]:9200}
```

Comment 10 Anping Li 2020-02-13 02:09:32 UTC
Verified using upstream image

Comment 11 Anping Li 2020-03-10 03:10:17 UTC
Similar to https://bugzilla.redhat.com/show_bug.cgi?id=1794801#c3, the error wasn't fixed. Re-opening it.

Comment 12 Jeff Cantrill 2020-03-10 19:14:27 UTC
The fact that the ES cluster is not ready is a separate issue from the one identified here.  Viewing the logs, I see the following, which indicates it was bound to an IPv6 address for intercluster communication and the REST service:

[2020-02-24T14:18:45,010][INFO ][o.e.t.TransportService   ] [elasticsearch-cdm-0e3iv7u6-1] publish_address {127.0.0.1:9300}, bound_addresses {[::]:9300}
[2020-02-24T14:19:15,067][INFO ][c.f.s.h.SearchGuardHttpServerTransport] [elasticsearch-cdm-0e3iv7u6-1] publish_address {127.0.0.1:9200}, bound_addresses {[::]:9200}

This means IPv6 at least works.  The logs from the other BZ do indicate that a master could not be elected and that seeding failed.  Can you at least post the Elasticsearch YAML? I'm wondering if the server's network.host is correct.

Comment 13 Anping Li 2020-03-16 11:36:28 UTC
There isn't any special configuration in elasticsearch.yaml; everything is set by the operators. I will paste the results with different network.host values soon.

Comment 14 Anping Li 2020-03-16 11:56:54 UTC
Created attachment 1670517 [details]
The elasticsearch logs when host=_local_ and _global_

Couldn't find the master when network.host = 0.0.0.0, _local_, or _global_.

TLS error when network.host = _site_:

oc logs -c elasticsearch elasticsearch-cdm-ghg9nt7q-1-6b899d65dc-7crb5
Error from server: Get https://[fd2e:6f44:5dd8:c956::121]:10250/containerLogs/openshift-logging/elasticsearch-cdm-ghg9nt7q-1-6b899d65dc-7crb5/elasticsearch: remote error: tls: internal error

Comment 15 Anping Li 2020-03-16 13:03:11 UTC
Built images with net-tools; we can see that all expected ports are LISTENing and that connections have been ESTABLISHED on 9300 and 9200. (In test 1, one node is broken; the ES pod reports a TLS error.) It is interesting that the master still couldn't be elected.

network.host: 0.0.0.0

Error:

[2020-03-16T12:52:24,246][INFO ][i.f.e.p.OpenshiftRequestContextFactory] Using kibanaIndexMode: 'shared_ops'
[2020-03-16T12:52:24,361][INFO ][c.f.s.SearchGuardPlugin  ] FLS/DLS valve not bound (noop) due to java.lang.ClassNotFoundException: com.floragunn.searchguard.configuration.DlsFlsValveImpl
[2020-03-16T12:52:24,362][INFO ][c.f.s.SearchGuardPlugin  ] Auditlog not available due to java.lang.ClassNotFoundException: com.floragunn.searchguard.auditlog.impl.AuditLogImpl
[2020-03-16T12:52:24,364][INFO ][c.f.s.SearchGuardPlugin  ] Privileges interceptor not bound (noop) due to java.lang.ClassNotFoundException: com.floragunn.searchguard.configuration.PrivilegesInterceptorImpl
[2020-03-16T12:52:24,459][INFO ][o.e.d.DiscoveryModule    ] [elasticsearch-cdm-lsunfsza-1] using discovery type [zen]
[2020-03-16T12:52:25,042][INFO ][o.e.n.Node               ] [elasticsearch-cdm-lsunfsza-1] initialized
[2020-03-16T12:52:25,042][INFO ][o.e.n.Node               ] [elasticsearch-cdm-lsunfsza-1] starting ...
[2020-03-16T12:52:25,260][INFO ][o.e.t.TransportService   ] [elasticsearch-cdm-lsunfsza-1] publish_address {127.0.0.1:9300}, bound_addresses {[::]:9300}
[2020-03-16T12:52:25,273][INFO ][o.e.b.BootstrapChecks    ] [elasticsearch-cdm-lsunfsza-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2020-03-16T12:52:25,280][INFO ][c.f.s.c.IndexBaseConfigurationRepository] Check if .searchguard index exists ...
[2020-03-16T12:52:25,288][DEBUG][o.e.a.a.i.e.i.TransportIndicesExistsAction] [elasticsearch-cdm-lsunfsza-1] no known master node, scheduling a retry
[2020-03-16T12:52:55,308][WARN ][o.e.n.Node               ] [elasticsearch-cdm-lsunfsza-1] timed out while waiting for initial discovery state - timeout: 30s
[2020-03-16T12:52:55,320][INFO ][c.f.s.h.SearchGuardHttpServerTransport] [elasticsearch-cdm-lsunfsza-1] publish_address {127.0.0.1:9200}, bound_addresses {[::]:9200}
[2020-03-16T12:52:55,320][INFO ][o.e.n.Node               ] [elasticsearch-cdm-lsunfsza-1] started
[2020-03-16T12:53:01,341][WARN ][o.e.d.z.ZenDiscovery     ] [elasticsearch-cdm-lsunfsza-1] not enough master nodes discovered during pinging (found [[Candidate{node={elasticsearch-cdm-lsunfsza-1}{365oza9eQFG062wpi8LL0Q}{5-mv6CA2QqSJhRlDM4RCsQ}{127.0.0.1}{127.0.0.1:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2020-03-16T12:53:08,439][ERROR][c.f.s.a.BackendRegistry  ] Not yet initialized (you may need to run sgadmin)
[2020-03-16T12:53:21,499][ERROR][c.f.s.a.BackendRegistry  ] Not yet initialized (you may need to run sgadmin)
[2020-03-16T12:53:25,296][DEBUG][o.e.a.a.i.e.i.TransportIndicesExistsAction] [elasticsearch-cdm-lsunfsza-1] timed out while retrying [indices:admin/exists] after failure (timeout [1m])
[2020-03-16T12:53:25,297][ERROR][c.f.s.c.IndexBaseConfigurationRepository] Failure while checking MasterNotDiscoveredException[null] index .searchguard
org.elasticsearch.discovery.MasterNotDiscoveredException: null
	at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:209) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:311) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:238) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.cluster.service.ClusterService$NotifyTimeout.run(ClusterService.java:1056) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:576) [elasticsearch-5.6.16.redhat-2.jar:5.6.16.redhat-2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_242]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_242]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_242]
[2020-03-16T12:53:25,304][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [elasticsearch-cdm-lsunfsza-1] no known master node, scheduling a retry
[2020-03-16T12:53:38,404][ERROR][c.f.s.a.BackendRegistry  ] Not yet initialized (you may need to run sgadmin)
[2020-03-16T12:53:51,499][ERROR][c.f.s.a.BackendRegistry  ] Not yet initialized (you may need to run sgadmin)
[2020-03-16T12:53:55,305][DEBUG][o.e.a.a.c.h.TransportClusterHealthAction] [elasticsearch-cdm-lsunfsza-1] timed out while retrying [cluster:monitor/health] after failure (timeout [30s])
[2020-03-16T12:53:55,306][WARN ][c.f.s.c.IndexBaseConfigurationRepository] index '.searchguard' not healthy yet, we try again ... (Reason: no response)
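The "needed [2]" in the ZenDiscovery warning above corresponds to the ES 5.x quorum setting for a three-master cluster; a sketch of the settings involved (the service name is illustrative, and the operator generates the real values):

```yaml
# ES 5.x zen discovery fragment (illustrative)
discovery:
  zen:
    # nodes must resolve and dial each other's addresses; a 127.0.0.1
    # publish_address makes every peer look like the local node
    ping.unicast.hosts: elasticsearch-cluster
    minimum_master_nodes: 2   # quorum of a 3-master cluster
```

With every node publishing 127.0.0.1, pinging can never discover two distinct master candidates, which matches the "found [1], but needed [2]" loop in the log.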


Test 1:
oc get pods -o name --selector component=elasticsearch
pod/elasticsearch-cdm-lsunfsza-1-f5b49bd95-zgb2c
pod/elasticsearch-cdm-lsunfsza-2-5b948fdf46-rjx88
pod/elasticsearch-cdm-lsunfsza-3-754444b8c9-jrnqr

oc exec pod/elasticsearch-cdm-lsunfsza-1-f5b49bd95-zgb2c -- netstat -anp
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp6       0      0 :::9200                 :::*                    LISTEN      1/java              
tcp6       0      0 :::9300                 :::*                    LISTEN      1/java              
tcp6       0      0 :::4180                 :::*                    LISTEN      -                   
tcp6       0      0 :::60000                :::*                    LISTEN      -                   
tcp6       0      0 ::1:52510               ::1:9200                ESTABLISHED -                   
tcp6       0      0 fd01::3:9846:fcff:60000 fd01::4:9846:fcff:53208 ESTABLISHED -                   
tcp6       0      0 ::1:9200                ::1:52510               ESTABLISHED 1/java              
tcp6       0      1 fd01::3:9846:fcff:42702 fd02::1a7e:9300         SYN_SENT    1/java              
tcp6       0      0 fd01::3:9846:fcff:60000 fd01::2:9846:fcff:37802 ESTABLISHED -                   

oc exec pod/elasticsearch-cdm-lsunfsza-2-5b948fdf46-rjx88 -- netstat -anp
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp6       0      0 :::60000                :::*                    LISTEN      -                   
tcp6       0      0 :::9200                 :::*                    LISTEN      1/java              
tcp6       0      0 :::9300                 :::*                    LISTEN      1/java              
tcp6       0      0 :::4180                 :::*                    LISTEN      -                   
tcp6       0      0 fd01::2:9846:fcff:60000 fd01::4:9846:fcff:33088 ESTABLISHED -                   
tcp6       0      0 fd01::2:9846:fcff:60000 fd01::2:9846:fcff:59186 ESTABLISHED -                   
tcp6       0      0 fd01::2:9846:fcff::9300 fd01::3:9846:fcff:42760 ESTABLISHED 1/java              
tcp6       0      0 ::1:9200                ::1:39978               ESTABLISHED 1/java              
tcp6       0      0 ::1:39978               ::1:9200                ESTABLISHED -                   

oc exec pod/elasticsearch-cdm-lsunfsza-3-754444b8c9-jrnqr -- netstat -anp
Error from server: error dialing backend: remote error: tls: internal error



Test 2:

oc get pods --selector component=elasticsearch
elasticsearch-cdm-lsunfsza-1-f5b49bd95-wrd9n    1/2     Running   0          7m33s
elasticsearch-cdm-lsunfsza-2-5b948fdf46-fs7lx   1/2     Running   0          7m33s
elasticsearch-cdm-lsunfsza-3-754444b8c9-xv9vv   1/2     Running   0          7m33s

oc exec -c elasticsearch pod/elasticsearch-cdm-lsunfsza-1-f5b49bd95-wrd9n -- netstat -anp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp6       0      0 :::9300                 :::*                    LISTEN      1/java              
tcp6       0      0 :::4180                 :::*                    LISTEN      -                   
tcp6       0      0 :::60000                :::*                    LISTEN      -                   
tcp6       0      0 fd01::6:9846:fcff:58770 fd02::1:443             ESTABLISHED -                   
tcp6       0      0 fd01::6:9846:fcff::9300 fd01::1:9846:fcff:46394 ESTABLISHED 1/java              
tcp6       0      0 fd01::6:9846:fcff::9300 fd01::1:9846:fcff:46232 ESTABLISHED 1/java              
tcp6       0      0 fd01::6:9846:fcff:60000 fd01::4:9846:fcff:59938 ESTABLISHED -                   
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   PID/Program name     Path
unix  2      [ ]         STREAM     CONNECTED     82032718 1/java               
unix  2      [ ]         STREAM     CONNECTED     82029907 1/java               

oc exec -c elasticsearch pod/elasticsearch-cdm-lsunfsza-2-5b948fdf46-fs7lx -- netstat -anp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp6       0      0 :::9300                 :::*                    LISTEN      1/java              
tcp6       0      0 :::4180                 :::*                    LISTEN      -                   
tcp6       0      0 :::60000                :::*                    LISTEN      -                   
tcp6       0      0 fd01::1:9846:fcff:46506 fd02::1a7e:9300         ESTABLISHED 1/java              
tcp6       0      0 fd01::1:9846:fcff:38662 fd02::1:443             ESTABLISHED -                   
tcp6       0      0 fd01::1:9846:fcff:60000 fd01::2:9846:fcff:46412 ESTABLISHED -                   
tcp6       0      0 fd01::1:9846:fcff:60000 fd01::4:9846:fcff:57452 ESTABLISHED -                   
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   PID/Program name     Path
unix  2      [ ]         STREAM     CONNECTED     185357855 1/java               
unix  2      [ ]         STREAM     CONNECTED     185357570 1/java               

oc exec -c elasticsearch pod/elasticsearch-cdm-lsunfsza-3-754444b8c9-xv9vv -- netstat -anp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp6       0      0 :::9300                 :::*                    LISTEN      1/java              
tcp6       0      0 :::4180                 :::*                    LISTEN      -                   
tcp6       0      0 :::60000                :::*                    LISTEN      -                   
tcp6       0      0 fd01::7:9846:fcff:60000 fd01::2:9846:fcff:42628 ESTABLISHED -                   
tcp6       0      0 fd01::7:9846:fcff:60000 fd01::4:9846:fcff:33630 ESTABLISHED -                   
tcp6       0      0 fd01::7:9846:fcff:59204 fd02::1a7e:9300         ESTABLISHED 1/java              
tcp6       0      0 fd01::7:9846:fcff:38438 fd02::1:443             ESTABLISHED -                   
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   PID/Program name     Path
unix  2      [ ]         STREAM     CONNECTED     71280121 1/java               
unix  2      [ ]         STREAM     CONNECTED     71276944 1/java

Comment 16 Anping Li 2020-03-16 13:12:26 UTC
Scenario 2:

network.host: _share_


oc get pods --selector component=elasticsearch
elasticsearch-cdm-lsunfsza-1-f5b49bd95-ttcgv    1/2     CrashLoopBackOff   3          119s
elasticsearch-cdm-lsunfsza-2-5b948fdf46-hcntm   1/2     CrashLoopBackOff   3          119s
elasticsearch-cdm-lsunfsza-3-754444b8c9-fm4jt   1/2     CrashLoopBackOff   3          118s


[anli@preserve-docker-slave virt]$ oc logs elasticsearch-cdm-lsunfsza-1-f5b49bd95-ttcgv -c elasticsearch
[2020-03-16 13:08:48,643][INFO ][container.run            ] Begin Elasticsearch startup script
[2020-03-16 13:08:48,648][INFO ][container.run            ] Comparing the specified RAM to the maximum recommended for Elasticsearch...
[2020-03-16 13:08:48,649][INFO ][container.run            ] Inspecting the maximum RAM available...
[2020-03-16 13:08:48,652][INFO ][container.run            ] ES_JAVA_OPTS: ' -Xms1024m -Xmx1024m'
[2020-03-16 13:08:48,654][INFO ][container.run            ] Copying certs from /etc/openshift/elasticsearch/secret to /etc/elasticsearch/secret
[2020-03-16 13:08:48,660][INFO ][container.run            ] Building required jks files and truststore
Importing keystore /etc/elasticsearch/secret/admin.p12 to /etc/elasticsearch/secret/admin.jks...
Entry for alias 1 successfully imported.
Import command completed:  1 entries successfully imported, 0 entries failed or cancelled

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch/secret/logging-es.jks -destkeystore /etc/elasticsearch/secret/logging-es.jks -deststoretype pkcs12".
Certificate was added to keystore
Certificate was added to keystore
[2020-03-16 13:08:51,473][INFO ][container.run            ] Setting heap dump location /elasticsearch/persistent/heapdump.hprof
[2020-03-16 13:08:51,474][INFO ][container.run            ] Checking if Elasticsearch is ready
[2020-03-16 13:08:51,475][INFO ][container.run            ] ES_JAVA_OPTS: ' -Xms1024m -Xmx1024m -XX:HeapDumpPath=/elasticsearch/persistent/heapdump.hprof -Dsg.display_lic_none=false -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.type=unpooled'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N

### LICENSE NOTICE Search Guard ###

If you use one or more of the following features in production
make sure you have a valid Search Guard license
(See https://floragunn.com/searchguard-validate-license)

* Kibana Multitenancy
* LDAP authentication/authorization
* Active Directory authentication/authorization
* REST Management API
* JSON Web Token (JWT) authentication/authorization
* Kerberos authentication/authorization
* Document- and Fieldlevel Security (DLS/FLS)
* Auditlogging

In case of any doubt mail to <sales@floragunn.com>
###################################

### LICENSE NOTICE Search Guard ###

If you use one or more of the following features in production
make sure you have a valid Search Guard license
(See https://floragunn.com/searchguard-validate-license)

* Kibana Multitenancy
* LDAP authentication/authorization
* Active Directory authentication/authorization
* REST Management API
* JSON Web Token (JWT) authentication/authorization
* Kerberos authentication/authorization
* Document- and Fieldlevel Security (DLS/FLS)
* Auditlogging

In case of any doubt mail to <sales@floragunn.com>
###################################
Consider setting -Djdk.tls.rejectClientInitiatedRenegotiation=true to prevent DoS attacks through client side initiated TLS renegotiation.
Consider setting -Djdk.tls.rejectClientInitiatedRenegotiation=true to prevent DoS attacks through client side initiated TLS renegotiation.
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

Comment 17 Jeff Cantrill 2020-03-16 18:56:50 UTC
(In reply to Anping Li from comment #16)
> scenarios 2:
> 
> network.host: _share_
> 

The valid value is '_site_'

Comment 18 Jeff Cantrill 2020-03-16 19:00:14 UTC
Can you please advise which config values map to which outputs in #c15?

Comment 19 Anping Li 2020-03-17 01:56:45 UTC
Please ignore comment 16. What a mess my mind is!


Scenario 2:
1) Set network.host: _site_

2) oc get pods
NAME                                            READY   STATUS             RESTARTS   AGE
elasticsearch-cdm-lsunfsza-1-f5b49bd95-lbmg7    1/2     CrashLoopBackOff   5          6m17s
elasticsearch-cdm-lsunfsza-2-5b948fdf46-x2wcd   1/2     CrashLoopBackOff   5          6m17s
elasticsearch-cdm-lsunfsza-3-754444b8c9-bxs2f   1/2     CrashLoopBackOff   5          6m17s


$ oc logs elasticsearch-cdm-lsunfsza-1-f5b49bd95-lbmg7 -c proxy
2020/03/17 01:49:45 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-logging:elasticsearch
2020/03/17 01:49:45 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token
2020/03/17 01:49:45 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.
2020/03/17 01:49:45 oauthproxy.go:200: mapping path "/" => upstream "https://localhost:9200/"
2020/03/17 01:49:45 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-logging:elasticsearch
2020/03/17 01:49:45 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled
2020/03/17 01:49:45 oauthproxy.go:249: WARN: Configured to pass client specified bearer token upstream.
      Only use this option if you're sure that the upstream will not leak or abuse the given token.
      Bear in mind that the token could be a long lived token or hard to revoke.
2020/03/17 01:49:45 http.go:61: HTTP: listening on :4180
2020/03/17 01:49:45 http.go:107: HTTPS: listening on [::]:60000
I0317 01:49:45.734189       1 dynamic_serving_content.go:129] Starting serving::/etc/proxy/secrets/tls.crt::/etc/proxy/secrets/tls.key
2020/03/17 01:49:49 reverseproxy.go:437: http: proxy error: dial tcp [::1]:9200: connect: connection refused
2020/03/17 01:50:00 reverseproxy.go:437: http: proxy error: dial tcp [::1]:9200: connect: connection refused
2020/03/17 01:50:19 reverseproxy.go:437: http: proxy error: dial tcp [::1]:9200: connect: connection refused
2020/03/17 01:50:30 reverseproxy.go:437: http: proxy error: dial tcp [::1]:9200: connect: connection refused
2020/03/17 01:50:49 reverseproxy.go:437: http: proxy error: dial tcp [::1]:9200: connect: connection refused
2020/03/17 01:51:00 reverseproxy.go:437: http: proxy error: dial tcp [::1]:9200: connect: connection refused
2020/03/17 01:51:19 reverseproxy.go:437: http: proxy error: dial tcp [::1]:9200: connect: connection refused
2020/03/17 01:51:30 reverseproxy.go:437: http: proxy error: dial tcp [::1]:9200: connect: connection refused

$ oc logs elasticsearch-cdm-lsunfsza-1-f5b49bd95-lbmg7 -c elasticsearch
[2020-03-17 01:51:58,648][INFO ][container.run            ] Begin Elasticsearch startup script
[2020-03-17 01:51:58,652][INFO ][container.run            ] Comparing the specified RAM to the maximum recommended for Elasticsearch...
[2020-03-17 01:51:58,653][INFO ][container.run            ] Inspecting the maximum RAM available...
[2020-03-17 01:51:58,656][INFO ][container.run            ] ES_JAVA_OPTS: ' -Xms1024m -Xmx1024m'
[2020-03-17 01:51:58,658][INFO ][container.run            ] Copying certs from /etc/openshift/elasticsearch/secret to /etc/elasticsearch/secret
[2020-03-17 01:51:58,665][INFO ][container.run            ] Building required jks files and truststore
Importing keystore /etc/elasticsearch/secret/admin.p12 to /etc/elasticsearch/secret/admin.jks...
Entry for alias 1 successfully imported.
Import command completed:  1 entries successfully imported, 0 entries failed or cancelled

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch/secret/admin.jks -destkeystore /etc/elasticsearch/secret/admin.jks -deststoretype pkcs12".

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch/secret/admin.jks -destkeystore /etc/elasticsearch/secret/admin.jks -deststoretype pkcs12".
Certificate was added to keystore

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch/secret/admin.jks -destkeystore /etc/elasticsearch/secret/admin.jks -deststoretype pkcs12".
Importing keystore /etc/elasticsearch/secret/elasticsearch.p12 to /etc/elasticsearch/secret/elasticsearch.jks...
Entry for alias 1 successfully imported.
Import command completed:  1 entries successfully imported, 0 entries failed or cancelled

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch/secret/elasticsearch.jks -destkeystore /etc/elasticsearch/secret/elasticsearch.jks -deststoretype pkcs12".

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch/secret/elasticsearch.jks -destkeystore /etc/elasticsearch/secret/elasticsearch.jks -deststoretype pkcs12".
Certificate was added to keystore

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch/secret/elasticsearch.jks -destkeystore /etc/elasticsearch/secret/elasticsearch.jks -deststoretype pkcs12".
Importing keystore /etc/elasticsearch/secret/logging-es.p12 to /etc/elasticsearch/secret/logging-es.jks...
Entry for alias 1 successfully imported.
Import command completed:  1 entries successfully imported, 0 entries failed or cancelled

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch/secret/logging-es.jks -destkeystore /etc/elasticsearch/secret/logging-es.jks -deststoretype pkcs12".

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch/secret/logging-es.jks -destkeystore /etc/elasticsearch/secret/logging-es.jks -deststoretype pkcs12".
Certificate was added to keystore

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch/secret/logging-es.jks -destkeystore /etc/elasticsearch/secret/logging-es.jks -deststoretype pkcs12".
Certificate was added to keystore
Certificate was added to keystore
[2020-03-17 01:52:01,474][INFO ][container.run            ] Setting heap dump location /elasticsearch/persistent/heapdump.hprof
[2020-03-17 01:52:01,475][INFO ][container.run            ] Checking if Elasticsearch is ready
[2020-03-17 01:52:01,476][INFO ][container.run            ] ES_JAVA_OPTS: ' -Xms1024m -Xmx1024m -XX:HeapDumpPath=/elasticsearch/persistent/heapdump.hprof -Dsg.display_lic_none=false -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.type=unpooled'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N

### LICENSE NOTICE Search Guard ###

If you use one or more of the following features in production
make sure you have a valid Search Guard license
(See https://floragunn.com/searchguard-validate-license)

* Kibana Multitenancy
* LDAP authentication/authorization
* Active Directory authentication/authorization
* REST Management API
* JSON Web Token (JWT) authentication/authorization
* Kerberos authentication/authorization
* Document- and Fieldlevel Security (DLS/FLS)
* Auditlogging

In case of any doubt mail to <sales@floragunn.com>
###################################

### LICENSE NOTICE Search Guard ###

If you use one or more of the following features in production
make sure you have a valid Search Guard license
(See https://floragunn.com/searchguard-validate-license)

* Kibana Multitenancy
* LDAP authentication/authorization
* Active Directory authentication/authorization
* REST Management API
* JSON Web Token (JWT) authentication/authorization
* Kerberos authentication/authorization
* Document- and Fieldlevel Security (DLS/FLS)
* Auditlogging

In case of any doubt mail to <sales@floragunn.com>
###################################
Consider setting -Djdk.tls.rejectClientInitiatedRenegotiation=true to prevent DoS attacks through client side initiated TLS renegotiation.
Consider setting -Djdk.tls.rejectClientInitiatedRenegotiation=true to prevent DoS attacks through client side initiated TLS renegotiation.
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

Comment 20 Anping Li 2020-03-17 02:16:18 UTC
(In reply to Jeff Cantrill from comment #18)
> Can you please advise what config values map to which outputs in #c15

The 'error:' output is the Elasticsearch log from one Elasticsearch pod in test2. The elasticsearch.yml is as follows:


cluster:
  name: ${CLUSTER_NAME}

script:
  inline: true
  stored: true

node:
  name: ${DC_NAME}
  master: ${IS_MASTER}
  data: ${HAS_DATA}
  max_local_storage_nodes: 1

network:
  host: 0.0.0.0

discovery.zen:
  ping.unicast.hosts: elasticsearch-cluster.openshift-logging.svc
  minimum_master_nodes: 2

gateway:
  recover_after_nodes: 2
  expected_nodes: 3
  recover_after_time: ${RECOVER_AFTER_TIME}

io.fabric8.elasticsearch.kibana.mapping.app: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json
io.fabric8.elasticsearch.kibana.mapping.ops: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json
io.fabric8.elasticsearch.kibana.mapping.empty: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json

openshift.config:
  use_common_data_model: true
  project_index_prefix: "project"
  time_field_name: "@timestamp"

openshift.searchguard:
  keystore.path: /etc/elasticsearch/secret/admin.jks
  truststore.path: /etc/elasticsearch/secret/searchguard.truststore

openshift.kibana.index.mode: shared_ops

path:
  data: /elasticsearch/persistent/${CLUSTER_NAME}/data
  logs: /elasticsearch/persistent/${CLUSTER_NAME}/logs

searchguard:
  authcz.admin_dn:
  - CN=system.admin,OU=OpenShift,O=Logging
  config_index_name: ".searchguard"
  ssl:
    transport:
      enabled: true
      enforce_hostname_verification: false
      keystore_type: JKS
      keystore_filepath: /etc/elasticsearch/secret/searchguard.key
      keystore_password: kspass
      truststore_type: JKS
      truststore_filepath: /etc/elasticsearch/secret/searchguard.truststore
      truststore_password: tspass
    http:
      enabled: true
      keystore_type: JKS
      keystore_filepath: /etc/elasticsearch/secret/key
      keystore_password: kspass
      clientauth_mode: OPTIONAL
      truststore_type: JKS
      truststore_filepath: /etc/elasticsearch/secret/truststore
      truststore_password: tspass
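The network.host: 0.0.0.0 value in the config above binds only the IPv4 wildcard, which matches the reported symptom. Per the linked elasticsearch-operator PR #232 ("Set network.host to _site_ and let ES bind it"), the fix is to use one of Elasticsearch's special network.host values instead of a literal IPv4 address. A minimal sketch (the exact value chosen by the operator is _site_; the comments describe Elasticsearch's documented behavior for these values):

```yaml
network:
  # _site_ makes Elasticsearch bind a site-local address of the node,
  # which resolves to an IPv6 address on IPv6-only clusters.
  # Alternatively, the literal IPv6 wildcard "[::]" binds all addresses
  # (the [::]:port form this bug asks for).
  host: _site_
```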

Comment 21 Anping Li 2020-03-26 15:45:26 UTC
To unblock 4.4, verified using internal builds:

quay.io/openshift/origin-cluster-logging-operator:latest
quay.io/openshift/origin-elasticsearch-operator:latest
quay.io/openshift/origin-elasticsearch-proxy:latest
registry.svc.ci.openshift.org/origin/4.5:logging-curator5
registry.svc.ci.openshift.org/origin/4.5:logging-elasticsearch6
registry.svc.ci.openshift.org/origin/4.5:logging-fluentd
registry.svc.ci.openshift.org/origin/4.5:logging-kibana6
registry.svc.ci.openshift.org/origin/4.5:oauth-proxy

Comment 23 errata-xmlrpc 2020-05-04 11:24:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581
