Bug 1452807 - Kibana 3.6.0 in CrashLoopBackOff after install
Summary: Kibana 3.6.0 in CrashLoopBackOff after install
Keywords:
Status: CLOSED DUPLICATE of bug 1439451
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.6.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Jeff Cantrill
QA Contact: Xia Zhao
URL:
Whiteboard: aos-scalability-36
Duplicates: 1458652
Depends On:
Blocks:
 
Reported: 2017-05-19 16:53 UTC by Mike Fiedler
Modified: 2017-06-19 21:47 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-06-19 21:47:08 UTC
Target Upstream Version:
Embargoed:


Attachments
logging-kibana dc (4.57 KB, text/plain), 2017-05-22 15:20 UTC, Mike Fiedler

Description Mike Fiedler 2017-05-19 16:53:51 UTC
Description of problem:

With the fix to https://bugzilla.redhat.com/show_bug.cgi?id=1439451 I no longer see the message "Could not read TLS opts from secret/server-tls.json; error was: Error: ENOENT: no such file or directory, open 'secret/server-tls.json'" for kibana-proxy.

However, the kibana-proxy container is still failing.   The logs:

root@ip-172-31-35-145: ~ # oc logs logging-kibana-1-c2rrp -c kibana-proxy
Starting up the proxy with auth mode "oauth2" and proxy transform "user_header,token_header".

The events are included below under Additional info.


Version-Release number of selected component (if applicable):  OCP 3.6.75 and openshift-ansible 3.6.68.   Kibana is from registry.ops.openshift.com:

registry.ops.openshift.com/openshift3/logging-kibana              3.6.0               925583fe8c13        6 weeks ago         342.9 MB


How reproducible: Always - deploy logging 3.6.0 from registry.ops.openshift.com
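
A minimal sketch of the reproduction and triage steps, assuming the openshift-ansible playbook and inventory described later in comment 14; the angle-bracket placeholders are hypothetical:

# Deploy logging, then inspect the kibana pod and the crashing proxy container
ansible-playbook -i <inventory> openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yaml
oc get pods -n logging
oc describe pod <logging-kibana-pod> -n logging
# -p shows the logs of the previous (crashed) container instance
oc logs <logging-kibana-pod> -c kibana-proxy -n logging -p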


Additional info:

59m       59m       1         logging-kibana-1-422p9         Pod                                                              Normal    Scheduled                     default-scheduler                                      Successfully assigned logging-kibana-1-422p9 to ip-172-31-36-229.us-west-2.compute.internal
59m       59m       1         logging-kibana-1-422p9         Pod                     spec.containers{kibana}                  Normal    Pulling                       kubelet, ip-172-31-36-229.us-west-2.compute.internal   pulling image "registry.ops.openshift.com/openshift3/logging-kibana:3.6.0"
59m       59m       1         logging-kibana-1-422p9         Pod                     spec.containers{kibana}                  Normal    Pulled                        kubelet, ip-172-31-36-229.us-west-2.compute.internal   Successfully pulled image "registry.ops.openshift.com/openshift3/logging-kibana:3.6.0"
59m       59m       1         logging-kibana-1-422p9         Pod                     spec.containers{kibana}                  Normal    Created                       kubelet, ip-172-31-36-229.us-west-2.compute.internal   Created container with id 40290348dd2b177ea0a7eecd035db85ed6b22b5107ebb3550c96f99b95a34109
59m       59m       1         logging-kibana-1-422p9         Pod                     spec.containers{kibana}                  Normal    Started                       kubelet, ip-172-31-36-229.us-west-2.compute.internal   Started container with id 40290348dd2b177ea0a7eecd035db85ed6b22b5107ebb3550c96f99b95a34109
26m       59m       12        logging-kibana-1-422p9         Pod                     spec.containers{kibana-proxy}            Normal    Pulling                       kubelet, ip-172-31-36-229.us-west-2.compute.internal   pulling image "registry.ops.openshift.com/openshift3/logging-auth-proxy:3.6.0"
26m       59m       12        logging-kibana-1-422p9         Pod                     spec.containers{kibana-proxy}            Normal    Pulled                        kubelet, ip-172-31-36-229.us-west-2.compute.internal   Successfully pulled image "registry.ops.openshift.com/openshift3/logging-auth-proxy:3.6.0"
59m       59m       1         logging-kibana-1-422p9         Pod                     spec.containers{kibana-proxy}            Normal    Created                       kubelet, ip-172-31-36-229.us-west-2.compute.internal   Created container with id 7e4d5013a7cdc539d496cbee6fea4a15b454fe7f548dcd0a51fb9ceeb8e8aa3f
59m       59m       1         logging-kibana-1-422p9         Pod                     spec.containers{kibana-proxy}            Normal    Started                       kubelet, ip-172-31-36-229.us-west-2.compute.internal   Started container with id 7e4d5013a7cdc539d496cbee6fea4a15b454fe7f548dcd0a51fb9ceeb8e8aa3f
58m       58m       1         logging-kibana-1-422p9         Pod                     spec.containers{kibana-proxy}            Normal    Created                       kubelet, ip-172-31-36-229.us-west-2.compute.internal   Created container with id 231101e65d117a743589c36ecc6c9138a5fe74af23c8b75c19cc4b67e4e2892d
58m       58m       1         logging-kibana-1-422p9         Pod                     spec.containers{kibana-proxy}            Normal    Started                       kubelet, ip-172-31-36-229.us-west-2.compute.internal   Started container with id 231101e65d117a743589c36ecc6c9138a5fe74af23c8b75c19cc4b67e4e2892d
23m       58m       149       logging-kibana-1-422p9         Pod                     spec.containers{kibana-proxy}            Warning   BackOff                       kubelet, ip-172-31-36-229.us-west-2.compute.internal   Back-off restarting failed container
58m       58m       1         logging-kibana-1-422p9         Pod                                                              Warning   FailedSync                    kubelet, ip-172-31-36-229.us-west-2.compute.internal   Error syncing pod, skipping: failed to "StartContainer" for "kibana-proxy" with CrashLoopBackOff: "Back-off 10s restarting failed container=kibana-proxy pod=logging-kibana-1-422p9_logging(712034b2-3caa-11e7-a135-0209e26a0204)"
58m       58m       1         logging-kibana-1-422p9   Pod       spec.containers{kibana-proxy}   Normal    Created      kubelet, ip-172-31-36-229.us-west-2.compute.internal   Created container with id cd485a14e77e952399f87ef1d6bdef7d5e36cff97b94080149e93f3ab0378056
58m       58m       1         logging-kibana-1-422p9   Pod       spec.containers{kibana-proxy}   Normal    Started      kubelet, ip-172-31-36-229.us-west-2.compute.internal   Started container with id cd485a14e77e952399f87ef1d6bdef7d5e36cff97b94080149e93f3ab0378056
58m       58m       2         logging-kibana-1-422p9   Pod                                       Warning   FailedSync   kubelet, ip-172-31-36-229.us-west-2.compute.internal   Error syncing pod, skipping: failed to "StartContainer" for "kibana-proxy" with CrashLoopBackOff: "Back-off 20s restarting failed container=kibana-proxy pod=logging-kibana-1-422p9_logging(712034b2-3caa-11e7-a135-0209e26a0204)"
57m       57m       1         logging-kibana-1-422p9   Pod       spec.containers{kibana-proxy}   Normal    Created      kubelet, ip-172-31-36-229.us-west-2.compute.internal   Created container with id 2ee776654726c2456bae5fe46d92c96e0abed4f790b19329931c3339904f1c58
57m       57m       1         logging-kibana-1-422p9   Pod       spec.containers{kibana-proxy}   Normal    Started      kubelet, ip-172-31-36-229.us-west-2.compute.internal   Started container with id 2ee776654726c2456bae5fe46d92c96e0abed4f790b19329931c3339904f1c58
57m       57m       3         logging-kibana-1-422p9   Pod                                       Warning   FailedSync   kubelet, ip-172-31-36-229.us-west-2.compute.internal   Error syncing pod, skipping: failed to "StartContainer" for "kibana-proxy" with CrashLoopBackOff: "Back-off 40s restarting failed container=kibana-proxy pod=logging-kibana-1-422p9_logging(712034b2-3caa-11e7-a135-0209e26a0204)"

Comment 1 Mike Fiedler 2017-05-19 17:24:55 UTC
Retried with the latest kibana and hit the same issue.

registry.ops.openshift.com/openshift3/logging-kibana              v3.6.78             27651533218a        24 hours ago        342.4 MB

Comment 2 Jeff Cantrill 2017-05-22 14:26:24 UTC
Can you please attach:

1. Logs from the kibana-proxy container
2. The DC for Kibana.

I wonder if you are hitting the 'memory issue', where we now specify memory limits for the container but they are missing from the DC. We also recently fixed https://github.com/fabric8io/openshift-auth-proxy/pull/15, which needs to be brought in.
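
A minimal sketch of how the memory-limit hypothesis could be checked against the deployed DC, assuming the logging namespace and the DC name used elsewhere in this bug:

# Dump the kibana DC and look for a resources/limits block on kibana-proxy
oc get dc logging-kibana -n logging -o yaml > logging-kibana-dc.yaml
grep -A6 'resources:' logging-kibana-dc.yaml
# A missing or empty limits.memory here would match the 'memory issue' described above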

Comment 3 Mike Fiedler 2017-05-22 15:19:35 UTC
1.  kibana-proxy log:

oc logs -f logging-kibana-1-k6z7t -c kibana-proxy

Starting up the proxy with auth mode "oauth2" and proxy transform "user_header,token_header".

That's it

2.  logging-kibana DC is attached.

Comment 4 Mike Fiedler 2017-05-22 15:20:01 UTC
Created attachment 1281116 [details]
logging-kibana dc

Comment 5 Mike Fiedler 2017-05-22 15:37:04 UTC
The kibana-proxy container status in docker is Exited (139):

CONTAINER ID        IMAGE                                                                                                                              COMMAND                  CREATED             STATUS                       PORTS               NAMES
06a2397e27e5        registry.ops.openshift.com/openshift3/logging-auth-proxy@sha256:ebad0b5df67437be90273f93eba88e25ec7169646001aaf52b59244e62c1148c   "node /usr/lib/node_m"   3 minutes ago       Exited (139) 3 minutes ago                       k8s_kibana-proxy_logging-kibana-1-k6z7t_logging_5a59319c-3f01-11e7-899a-02b78a55d244_8
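
For reference, exit code 139 corresponds to 128 + 11, i.e. the process was killed by SIGSEGV. A sketch of how to distinguish a segfault from an OOM kill, using the container ID from the docker ps output above:

docker inspect --format '{{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}' 06a2397e27e5
# 137 with OOMKilled=true would point at the memory limit; 139 with
# OOMKilled=false points at a crash inside the node process itself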

Comment 9 Mike Fiedler 2017-06-02 15:05:18 UTC
Sorry for the delay. I tested with this image today but am still seeing the issue with the kibana-proxy container. Here is the log:

root@ip-172-31-11-214: ~ # oc logs logging-kibana-2-6dqdd -c kibana-proxy
+ BYTES_PER_MEG=1048576
+ BYTES_PER_GIG=1073741824
+ DEFAULT_MIN=67108864
+ export NODE_OPTIONS=
+ echo 100663296
+ grep -qE ^([[:digit:]]+)([GgMm])?i?$
+ echo 100663296
+ grep -oE ^[[:digit:]]+
+ num=100663296
+ echo 100663296
+ grep -oE [GgMm]
+ echo 
Using NODE_OPTIONS: '--max-old-space-size=96' Memory setting is in MB
Running from directory: '/opt/openshift-auth-proxy'
+ unit=
+ [  = G ]
+ [  = g ]
+ [  = M ]
+ [  = m ]
+ [ 100663296 -lt 67108864 ]
+ NODE_OPTIONS=--max-old-space-size=96
+ export NODE_OPTIONS
+ cd /opt/openshift-auth-proxy
+ echo Using NODE_OPTIONS: '--max-old-space-size=96' Memory setting is in MB
+ pwd
+ echo Running from directory: '/opt/openshift-auth-proxy'
+ exec node --max-old-space-size=96 /usr/local/bin/npm start
> openshift-auth-proxy.24 start /opt/openshift-auth-proxy
> node openshift-auth-proxy.js
Could not read TLS opts from /secret/server-tls.json; error was: Error: ENOENT, no such file or directory '/secret/server-tls.json'
Starting up the proxy with auth mode "oauth2" and proxy transform "user_header,token_header".



Let me know if you want me to try anything else.
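
For reference, the arithmetic in the trace above checks out; a small sketch reproducing it (this is not the container's actual startup script):

limit=100663296                # memory limit in bytes, as seen in the trace
echo $(( limit / 1048576 ))    # 96 -> NODE_OPTIONS=--max-old-space-size=96 (MB)
echo $(( limit < 67108864 ))   # 0, i.e. not below the 64 MiB DEFAULT_MIN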

Comment 10 Mike Fiedler 2017-06-02 15:09:00 UTC
The pod runs, but never goes ready and the route never accepts requests.
NAME                                      READY     STATUS    RESTARTS   AGE
logging-es-data-master-sd097zb6-1-08dmp   1/1       Running   0          1d
logging-fluentd-5791p                     1/1       Running   0          1d
logging-fluentd-9cz2c                     1/1       Running   0          1d
logging-fluentd-b16fr                     1/1       Running   0          1d
logging-fluentd-stb5v                     1/1       Running   0          1d
logging-kibana-2-6dqdd                    1/2       Running   0          6m
logging-kibana-2-deploy                   1/1       Running   0          6m

Comment 11 Jeff Cantrill 2017-06-02 17:14:23 UTC
I apologize for the back and forth, but by the nature of the error message it appears the secret is not mounted into the pod. Can you please review [1] and provide as much information as possible (a sketch of commands for checking the mount follows the questions below):

1. What is the content of the kibana secrets?
2. Are you able to exec into or debug the pod to see if there is anything actually mounted in the expected directory
3. How was logging installed?  I would expect openshift-ansible to generate all the resources you require.

[1] https://github.com/jcantrill/origin-aggregated-logging/blob/1bbef826cb432f2a8c37577d1bd7f8fa52589e3b/docs/issues.md
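
A minimal sketch of commands for the mount check in question 2, assuming the logging namespace; <logging-kibana-pod> is a placeholder:

# Volumes and mount points defined on the DC
oc get dc logging-kibana -n logging -o jsonpath='{.spec.template.spec.volumes[*].name}'
oc get dc logging-kibana -n logging -o yaml | grep -B2 -A4 volumeMounts
# What is actually mounted inside the running proxy container
oc rsh -n logging -c kibana-proxy <logging-kibana-pod> ls -l /secret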

Comment 12 Mike Fiedler 2017-06-02 17:28:36 UTC
I'll answer each question in its own comment

1. What is the content of the kibana secrets?

root@ip-172-31-11-214: ~ # oc get secret logging-kibana-proxy -o yaml
apiVersion: v1
data:
  oauth-secret: WE5qODBQeGFUUDlDa2psdXhKeDlpNmNJdHN2UjF3akVWME9OMWJBV3ZzUUkwcWV1VmFyWVhOd3FYRkNjaHlmSw==
  server-cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURWVENDQWoyZ0F3SUJBZ0lCQWpBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EVXpNVEUzTlRJMU5Gb1hEVEU1TURVek1URTNOVEkxTlZvdwpGakVVTUJJR0ExVUVBeE1MSUd0cFltRnVZUzF2Y0hNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3CmdnRUtBb0lCQVFEQ2NkNGVsa3UwRVQ4b2RqM0Z0WHhWazNYZFdxSTk2a1V0MHlXdzY2dzlnNTlmRTVmVk0xM2oKOU45TW8vVEZWbENpbUpJYTZwLzdCNUpYK29vRzVIamR3S3dqcjBSdEpWMzZPMVhWeEF2dFc1N3c3dUF3ZGd0UApHRGJWTWtpWmZScUZBcys5TGQ2Z2NQVklVMXkxOGdMQ1dKYzRQd3J2b1lTdnF6a0laZHBCVU9GY2U5ak5qYmZiCkRSYkJCVHFPVDZDNmZ2Wit3aS90VnU3Kys2RjhVWDhHaWZaQlgybnB4M3VZeXIvNW9nYmNLVmxzdXhSWDVFNHoKS0hiVisxdFBhV1VrMkpuc0cxb3ZQSEpRS3NVM0hhbmNud09XZVRjU0JUTTdKL1pKaEt3dTRxYS9KWUU3dHgwawoyNWtzMHgzdHFCUXo4LzhDQnU3YnRoRjdLZGI4T2JvakFnTUJBQUdqZ2FVd2dhSXdEZ1lEVlIwUEFRSC9CQVFECkFnV2dNQk1HQTFVZEpRUU1NQW9HQ0NzR0FRVUZCd01CTUF3R0ExVWRFd0VCL3dRQ01BQXdiUVlEVlIwUkJHWXcKWklJTElHdHBZbUZ1WVMxdmNIT0NMQ0JyYVdKaGJtRXRiM0J6TG5KdmRYUmxjaTVrWldaaGRXeDBMbk4yWXk1agpiSFZ6ZEdWeUxteHZZMkZzZ2g4Z2EybGlZVzVoTGpBMU16QXRkSE40TG5GbExuSm9ZMnh2ZFdRdVkyOXRnZ1pyCmFXSmhibUV3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQU1BZTd5cW1CMC91ZVN0NEg3QjlJeEhnQ0F4TWY2bjEKc3lWVmhaSHlqNW9NWDBsOXJRaVJhNlZmcEEyQjNaTGpocGprcS95UTB0ZDlCcElpUU02NElycEMxWnVVeCtyUwpNeEN0Zm9yTEFtUVh0c2FnNXJ6eGhPMFJac0RaYlVxMElSNllYczNxRGxlTnU4YlBabGV4SmN3WHRoN3J2TlRyCmYzODRDaE84Uk5PVm5UTGJuYUVDalJicjluOXJoSnJDdmxVcExodyt2RjZNUm8ySjVRZEd4dHlncVFWRlVVWGIKWnR0dVpidnZyQlp5UDMxQURBaENSa1FGTC9lK1ZhTWY0T21KSGxUc3lSTUNSNTZwNmNDVGVndUpKRVhGS29oOApiNXRuQWhuN1loWnFyL0loSGwxbEdvM0o1MzdnNjFIMHBYUjNnSHZZa3hIcVQ3ak5BbjNidkd3PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlDMmpDQ0FjS2dBd0lCQWdJQkFUQU5CZ2txaGtpRzl3MEJBUXNGQURBZU1Sd3dHZ1lEVlFRREV4TnNiMmRuCmFXNW5MWE5wWjI1bGNpMTBaWE4wTUI0WERURTNNRFV6TVRFM05USTFNRm9YRFRJeU1EVXpNREUzTlRJMU1Wb3cKSGpFY01Cb0dBMVVFQXhNVGJHOW5aMmx1WnkxemFXZHVaWEl0ZEdWemREQ0NBU0l3RFFZSktvWklodmNOQVFFQgpCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFNb05vbHNpSVdscE1zVS9DYS9GVDM5L3JxWGREcjhVMUxjWFExQldNUmViCkNWQ2hWQTlKWi8xVWVvc1J4WmFDekw5d3JHOWMyTmhibjlLR3g1b0RJcUFDRG83aXZWYXlKV2RwSDd1Vzh4Zi8KMjd6V0pYaDQvcWVYR3g5SDhNUkluY0Z6RjhiYk1YSzRocGZnSVdMMWpHb2NNMDhCeDFSTDZCMi9nRElxRXluUwpLc2EyUm0rZFV4Rkx0WXFHOERSNzV5YUVsZmxOZG1jUXUwVzg4ZVJ4eE5nR3VydUpGL1QwbGJDRFpmVEhwNFhvCk5lSnUzRWV3d2ZqRTF0TkpSZEtDdjhneDN6Sko4MXBIaSt0WWJMcFNEeE5sLy9WZllGUngvVU1HS2RvV2dvRDkKZktWaHprbVRmT2JnTFY1K2ZSYnZDUm1oZmUvNUdaVTdsMFpvT2Uzci8xMENBd0VBQWFNak1DRXdEZ1lEVlIwUApBUUgvQkFRREFnS2tNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRndTCjhScEJia3lwM29pTVh2TEJqeWFZNmJadFlmNkRjZjlaSUQzakQwdGgzd3hyV0hyN0N0WnJ5S2FLVC8vWktzM0IKTjl0dUluQklPRU1GRXBBNkNZOEhUbVgvUUp3c005M080VUhJeHVUQ1l6b0pRaHNCV1ZxMjNnNzBJTEZmV294Uwp0Rkd0cjErTnBBS2pSK0xHVERyNGdOdnNCUGc3cVpydDF2bnBadXpqZytmZjhCSVJ0eGFkM3pwQzRyMElnWjVhCmpRUy9YcHFqeWttQS8xMlFXNjUzNGpqa0JIZjRiSmFNUW9Kb2FhTEhZQ1JXSjRlMUFVK1pzMysybHNDcFJQY1gKSjNpblFsL3c4Q0Y4UDJ3Z2EwVlJsNi93WVJFbTA4YW94d3dCL2E1akJWNkJkVkRaZ0dvcWlEcXJITTN5dDVnUAo2RmErejRxNmV3MXZ1V2RlbEYwPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  server-key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBd25IZUhwWkx0QkUvS0hZOXhiVjhWWk4xM1ZxaVBlcEZMZE1sc091c1BZT2ZYeE9YCjFUTmQ0L1RmVEtQMHhWWlFvcGlTR3VxZit3ZVNWL3FLQnVSNDNjQ3NJNjlFYlNWZCtqdFYxY1FMN1Z1ZThPN2cKTUhZTFR4ZzIxVEpJbVgwYWhRTFB2UzNlb0hEMVNGTmN0ZklDd2xpWE9EOEs3NkdFcjZzNUNHWGFRVkRoWEh2WQp6WTIzMncwV3dRVTZqaytndW43MmZzSXY3VmJ1L3Z1aGZGRi9Cb24yUVY5cDZjZDdtTXEvK2FJRzNDbFpiTHNVClYrUk9NeWgyMWZ0YlQybGxKTmlaN0J0YUx6eHlVQ3JGTngycDNKOERsbmszRWdVek95ZjJTWVNzTHVLbXZ5V0IKTzdjZEpOdVpMTk1kN2FnVU0vUC9BZ2J1MjdZUmV5blcvRG02SXdJREFRQUJBb0lCQVFDdGw5VHl2OEYwVUJWdgp2U2htOHlDK2tiaWZWd1FUZkt3b1BpS2ZNYmdDN3hpQVhGQ29NWVM0Tit5SFVyVDYzSlYrby9HRWdFVTFhc3dYCktZRENxSVRUak9qaHJ0N0xCcHBCQldvYlB6eGF1dnBLSlNrWGVydWI4SVU3anZuTHRpblA1L09vOUdPV0gyS00KSUloYmsyVXROc1JDbmQzWWsvMk9pN0dPTXNoSW1Dc2ZiMjE4UWRVNkNkL1E1L3J1bGFnWFBaVXoxaW1BTjlmbQorUnhTSFlFVUcwTVg1WnNSZWs4d21zSzBxK29KUFkvWklmTFhpSG82QjhCUUp5ZUdGMURTVjJSVEtlK0tIanNUCi9hSzh0VHh6akdVS2o2dWZSVzdiY3FHakZOOXVWVXVhdzl5STdIZUw5SG95OHI4ejB0SFlOeUNDYVpoUHJGUUkKbXo1VVhWREpBb0dCQU5pMkxxYnQwZERMUERJWnJaSnBycGxpdEhpMUxGOCtmb3FnTDNsVmJ3ZDB0cFB4QlhycgpOUzI4TlpJY3RTdkhMNlo4UUVtOWNQRm1QaXprN3dhcTdYWHJFSythYUdaclRudlZYeVBWRU93VDRhYkpqMGlvCnRVN3docC9QYUhsdXY2a1dZTFNteU5LcEVyMjdQanVtMzY2Y3VPbGdYMXpiNUxBODhRdmFNU1h0QW9HQkFPV3kKUWRDVVphVFlXVUhqMHdzdGZKTUdCQ3AzQjFaelJkUTB5ZzBOMkZMMVFVMUZtMDFZVnY5RExueTlvQlBmdHloWApZSEJETHBZeTlERjNUenRhQmp0N1NkUzh2TkRPUXNkcjIrenh6KytQUUlhSWZQM3RRWm5scnUrb0RjOUlwVldMCkRhdUg3VmtpaCtMamhHRVdDVk1rMkUrbjFRbEh0MGxKdmJXamFsNVBBb0dBQk9GMWc0VHZxTWdxL3VYZEp1TUMKYjZudGJwcUYrVThyQW14QkpYWnJIYnZmTU0zSTFjL2VUcjFpWjN3R0NJcGY1RndBQnFraGxnNDdjRDluc3JxKwp4bDBZN3h1SEptZGNTU1d4RXRtRm5BdUdsWDhNbnhKTm93MS91ckd0Sks3OTJnMEsrSWFaRjBWL2lvNWhCRzdwCnNzRU0yUlMya1J6U3RiVnBxRjZ0cExFQ2dZQktGNW9MUWhNWGZZSXRNdVFjc3V1QU1XeWVsZzZUNEZNaUJIVTQKaU1MQzM4SFV2eU05YThXRVNaTnhRV21sZjRDQlRzNFk2RkxhdUV6MHQ5dWk5WU1WSk12SUI2bVFZVGhCUTVXSgpkT2J5QzI5dzlnMzdpdENpWitocC9mZVdhWVNMZDNOTlpXYzJYV0VmMnV3VXRSc0U2dG1ydUNPTC9zb3NwZERBCkNwcUJHUUtCZ1FDZXgwTU5qL0RjVkhhdEE3NDgvUDVPNzZIYjV4aUlramxYZitOL1MvakZ6V1J5ejF3ZC83UzcKSUxwSC9hQi9rSTJsQUZBNHJndnlIN0NqdTM5UGpiai8vQ29ROEd1WU9mVExOQW1MQmVuY1hVeFFudGVQdDd6NQpyNVlhV0llUjBLT2xBamdicHkwWVZ3S0tZb3VBUVZoakJDS0w1dlg0SS93UVlKSlhSNU1zclE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
  server-tls: Ly8gU2VlIGZvciBhdmFpbGFibGUgb3B0aW9uczogaHR0cHM6Ly9ub2RlanMub3JnL2FwaS90bHMuaHRtbCN0bHNfdGxzX2NyZWF0ZXNlcnZlcl9vcHRpb25zX3NlY3VyZWNvbm5lY3Rpb25saXN0ZW5lcgp0bHNfb3B0aW9ucyA9IHsKCWNpcGhlcnM6ICdrRUVDREg6K2tFRUNESCtTSEE6a0VESDora0VESCtTSEE6K2tFREgrQ0FNRUxMSUE6a0VDREg6K2tFQ0RIK1NIQTprUlNBOitrUlNBK1NIQTora1JTQStDQU1FTExJQTohYU5VTEw6IWVOVUxMOiFTU0x2MjohUkM0OiFERVM6IUVYUDohU0VFRDohSURFQTorM0RFUycsCglob25vckNpcGhlck9yZGVyOiB0cnVlCn0K
  session-secret: cUxqY2daWExLSW1XVUppUEJYd2p1WEFNcG4yUUQ3d0ZiTlFaWThNZ0pPcmN1ZGd5VnpQUGVhaWZkSnZGNU51bUxIblZJd0hGOG96NjFWQ0l3ZW51dlJOMGtSc1hHSk1aSkZ0MWpoQ2NxS2FIMFV3ZVdEU0xnTndFWWQ0WmpJd01keWtEdjBBUVJ1M3BkSFFkWmQ1SjRDMmp6OTJUYnlnVmRuTjRtbEdocGZWZTN6Y1RScHM3andpeGxOWWE3Qm9KRUFnbEQ4WEg=
kind: Secret
metadata:
  creationTimestamp: 2017-05-31T17:53:41Z
  name: logging-kibana-proxy
  namespace: logging
  resourceVersion: "41659"
  selfLink: /api/v1/namespaces/logging/secrets/logging-kibana-proxy
  uid: 15450164-462a-11e7-9eee-02992b36d57c


root@ip-172-31-11-214: ~ # oc get secret logging-kibana -o yaml
apiVersion: v1
data:
  ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EVXpNVEUzTlRJMU1Gb1hEVEl5TURVek1ERTNOVEkxTVZvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU1vTm9sc2lJV2xwTXNVL0NhL0ZUMzkvcnFYZERyOFUxTGNYUTFCV01SZWIKQ1ZDaFZBOUpaLzFVZW9zUnhaYUN6TDl3ckc5YzJOaGJuOUtHeDVvRElxQUNEbzdpdlZheUpXZHBIN3VXOHhmLwoyN3pXSlhoNC9xZVhHeDlIOE1SSW5jRnpGOGJiTVhLNGhwZmdJV0wxakdvY00wOEJ4MVJMNkIyL2dESXFFeW5TCktzYTJSbStkVXhGTHRZcUc4RFI3NXlhRWxmbE5kbWNRdTBXODhlUnh4TmdHdXJ1SkYvVDBsYkNEWmZUSHA0WG8KTmVKdTNFZXd3ZmpFMXROSlJkS0N2OGd4M3pKSjgxcEhpK3RZYkxwU0R4TmwvL1ZmWUZSeC9VTUdLZG9XZ29EOQpmS1ZoemttVGZPYmdMVjUrZlJidkNSbWhmZS81R1pVN2wwWm9PZTNyLzEwQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGd1MKOFJwQmJreXAzb2lNWHZMQmp5YVk2Ylp0WWY2RGNmOVpJRDNqRDB0aDN3eHJXSHI3Q3RacnlLYUtULy9aS3MzQgpOOXR1SW5CSU9FTUZFcEE2Q1k4SFRtWC9RSndzTTkzTzRVSEl4dVRDWXpvSlFoc0JXVnEyM2c3MElMRmZXb3hTCnRGR3RyMStOcEFLalIrTEdURHI0Z052c0JQZzdxWnJ0MXZucFp1empnK2ZmOEJJUnR4YWQzenBDNHIwSWdaNWEKalFTL1hwcWp5a21BLzEyUVc2NTM0amprQkhmNGJKYU1Rb0pvYWFMSFlDUldKNGUxQVUrWnMzKzJsc0NwUlBjWApKM2luUWwvdzhDRjhQMndnYTBWUmw2L3dZUkVtMDhhb3h3d0IvYTVqQlY2QmRWRFpnR29xaURxckhNM3l0NWdQCjZGYSt6NHE2ZXcxdnVXZGVsRjA9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURSVENDQWkyZ0F3SUJBZ0lCQXpBTkJna3Foa2lHOXcwQkFRVUZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EVXpNVEUzTlRNd01Gb1hEVEU1TURVek1URTNOVE13TUZvdwpSakVRTUE0R0ExVUVDZ3dIVEc5bloybHVaekVTTUJBR0ExVUVDd3dKVDNCbGJsTm9hV1owTVI0d0hBWURWUVFECkRCVnplWE4wWlcwdWJHOW5aMmx1Wnk1cmFXSmhibUV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXcKZ2dFS0FvSUJBUUMvYmF0aXVWVXllVU4xZlNEbGhlYXMvS0FlazBwVUJZY2FKZ00zL2JCSTUrZmNFTXhoakxZegpNR2dpVE1Ud3hrNzFuS2NWT1Q1UUthZklaOXhMeWtxQVlnNitwRjVQTDNKNTFLRnlxUzM2RERjNHUzOWhkcUtnCi9TbHhwYmVnYjBGdTFsRHR0UmFRTkxIZ2t4dEk0MHNwenp5ck5SNStwWXRDTHREUnVoV1M5ZjMrZnNVM2FaTUcKVlVYZ3VyQytGRlpvM0ZSL3ZuR0Z5NWhJd04yWG4vRVVsUVVDK0tXdko5QU5oWHhCeHVMRjNMUFR3cFBaK3ZudwprdzhEcnNVYUppRDJyQThZOHdmOUZ0VkYyMnFJeUF5Ky9FbUtOTGhWTG9wYnFqa3p2UWZPeWc1OUpDY0JEQUdXClQvYTNweUlOMVYxcFBFY0M5S1VMNEZFMVpMQjc0eEtuQWdNQkFBR2paakJrTUE0R0ExVWREd0VCL3dRRUF3SUYKb0RBSkJnTlZIUk1FQWpBQU1CMEdBMVVkSlFRV01CUUdDQ3NHQVFVRkJ3TUJCZ2dyQmdFRkJRY0RBakFkQmdOVgpIUTRFRmdRVVdqaDBpbEd5ZW4vaTQwTlpVVVArVlhLeTJONHdDUVlEVlIwakJBSXdBREFOQmdrcWhraUc5dzBCCkFRVUZBQU9DQVFFQWFiNWl1SWlNdUdJc0R5aVhVK3A3dTVqRTlZQkQySUJsVE5nREszejBYSXlpVGl0d2ZoRnkKUVZxRC9xOC85OW53dWI1RkcrWmNqcFlOTnFtNTJETVYrdWVTS0FrcStRR2cyOWhkK0ZtQ05MMnltYm5MMFR5TQo4eHJLT3d0elFBclFMaitHMzA3cHVGNFBwRFRBaTRHQW5OODNkMFdBV3p3TURUdEM2UTlqVUpCUDVHMlp3YXpGCjc1R0lLTU5aWTloWTNkaWRFWFpMVGs0YlpmZ1hkb1JKVzc2UExYbGc4d2RhdWZDUkpRa2JtNmZ2b0NpcDFvZjIKUWNMWVVGNDR6MldBbDNTZVpYSXpSZm13TWowbi9HTlVEamJGSVZlSkozNmVHZm0wbXpiSGdPVmhpU3FTaThZMAp4cm5JcERITFl1SGZ0TmtSMWt3eWp4b1Rtckl1eWk1ZzRnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2QUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktZd2dnU2lBZ0VBQW9JQkFRQy9iYXRpdVZVeWVVTjEKZlNEbGhlYXMvS0FlazBwVUJZY2FKZ00zL2JCSTUrZmNFTXhoakxZek1HZ2lUTVR3eGs3MW5LY1ZPVDVRS2FmSQpaOXhMeWtxQVlnNitwRjVQTDNKNTFLRnlxUzM2RERjNHUzOWhkcUtnL1NseHBiZWdiMEZ1MWxEdHRSYVFOTEhnCmt4dEk0MHNwenp5ck5SNStwWXRDTHREUnVoV1M5ZjMrZnNVM2FaTUdWVVhndXJDK0ZGWm8zRlIvdm5HRnk1aEkKd04yWG4vRVVsUVVDK0tXdko5QU5oWHhCeHVMRjNMUFR3cFBaK3Zud2t3OERyc1VhSmlEMnJBOFk4d2Y5RnRWRgoyMnFJeUF5Ky9FbUtOTGhWTG9wYnFqa3p2UWZPeWc1OUpDY0JEQUdXVC9hM3B5SU4xVjFwUEVjQzlLVUw0RkUxClpMQjc0eEtuQWdNQkFBRUNnZ0VBUERIQU1zc1VmMHFpYTg5dENMK1NTZE1taG5iS2FLRlVXbVNabm9HbmJVVi8KSXpRbEVJZXV3Mm4xVk5QUEdlZEI3UG5Wa0ZidndVVlgvU3lybVNtRFE5dVJ4MkRvUnY0a2dTcmJtYktaUW9lVQoxY0lmekFZQ0haMTk2cjZ4ZjBGODBkMlNsU3pjYTN3bWN2ZlBISnhjaGtra1NySHBaT21wWUtaWUE1c0FMYldoCmIvYk5mcGRFbGN5MHNjQkJQWnZxdzMxT1JKUFdHU1JxY2lLdnhyRExQNjVLSUhFTUJ2M1d2bnhPdmVqdmRSRXkKSTd5TUFwMjErT0RzVzBUS3Jad2xNWXFDUmdKYW4wZGMrdnhwVHNrNlV5Z0dRbnM3eXYxaU9vRE95YnZjaVhRVgpyNkJMb2N2RkJLcHcyZTlCWnZQNzFETHlvbWxvVjNGczhFR0JkS1ZvQVFLQmdRRGRiZC8vdGthYW4yNlZ5MmFaCkxVZlNOMW93aWcwUnJsVzZwb2VDM1VPSWFqUlF1dkIwSHUyNnJBUUFRa2xQTUtrQ2M1YWNtMWsySERITGRVS00KaE54LzJ6d1JqVTdjdEdHc0d4MHNmQ2RmMFNyM0V1REZMT3pKa0gxV1IwNE1aMXMrU2NzQkFwdjNqQVE1VGtQLwp1Q2IxZmxIYXhKTHpGRFBieGtNcmhTY0ZtUUtCZ1FEZFVMZHk4L1dkTldUM3A4Qmd2Q25VR2ZSbmc1UnNBTjdECjZaMGJhQkZ6L1NRbzk4czRCY1RZTllyY1JNTDV4T1ZLOUVNNVBoMFdPM01XU1hDRkV6Y2NLcjdaa0ZxZThOVUMKTG1jaUZZcE5MRExMays0MUVVa2tSZmhNaUNDWDlZbDFINWpQVHoxNFdHb1V6NHdpMnBWVDhMSGtCeTBXN1FTbgpHa2ZidnBtQ1B3S0JnRFhuVTRwYWd5R05Ba3l3OFU4RXVPRXgzR0RJbXBuZFNMMWhZTWU0dVlIeDZMNW1ZN2JBCitMcGl1YTZlZEY0MHlFL3lkNDIwTzZseWY2UzU3UE5zUElsYmcybjZibUpIL3liNGlzZVRpYnBIbngvNmxvRXAKaUpNZys0SVBaYTZiVXBqOU9kQUxKSkRFb3hxWU5QR0JrT3BlVCtyanc2b3RGdHEvandaL0tacXhBb0dBV3hScgoydkFSaGJoQ3JEVXFVK1U4SmFEazEzRHNOU0tLaXYvcWV5dngrdFVUKzVRMjJ3QnN5VG9Id3F5OXZRTE9CbkhOCjlKSGVjSmJZdnpSTURVZ2lKd0prZHE4VXpGSjZweUlucVh4SjVZYXFCT1FGWld1T3VWSGVaTWlrK1VUQVpDWXoKd2lWdk84YlBLVzljMGI4NU0wbGNQR2JEcEtxNGZuaXZWL3p4dWdzQ2dZQW9rUFR3WmpYK0ErQS9xZ0t3MlNSTwpaUVZmR3VCRVBhYUNJQmw3M2luTS8wY1ROZXAwN1RMTFdLS2RBL3BERHNkRXVDaHp1Wi9LRW5nZ1BDYnNhVEtTCmV5OWtOMjVvT25iVFhERi9HUERNSXBEb2dHNTVSdStyRkxPazlzVXQ4V1d3ZnB6cUdtUGs4bkVMcmhQR1o1YjIKWDkyblpWcEpjbnYzM1ROQXo0Zlh4UT09Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
kind: Secret
metadata:
  creationTimestamp: 2017-05-31T17:53:41Z
  name: logging-kibana
  namespace: logging
  resourceVersion: "41658"
  selfLink: /api/v1/namespaces/logging/secrets/logging-kibana
  uid: 14d3af76-462a-11e7-9eee-02992b36d57c
type: Opaque
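
For readability, the base64 values above can also be decoded directly instead of being read out of the YAML; a small sketch, assuming the same secret name and namespace:

# Inspect the proxy's serving certificate and the raw TLS options blob
oc get secret logging-kibana-proxy -n logging -o jsonpath="{.data['server-cert']}" | base64 -d | openssl x509 -noout -subject -dates
oc get secret logging-kibana-proxy -n logging -o jsonpath="{.data['server-tls']}" | base64 -d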

Comment 13 Mike Fiedler 2017-06-02 17:35:03 UTC
2. Are you able to exec into or debug the pod to see if there is anything actually mounted in the expected directory

If I oc rsh into the kibana-proxy container and look in /secret, it contains:

$ ls -lrt
total 0
lrwxrwxrwx. 1 root root 21 Jun  2 17:30 session-secret -> ..data/session-secret
lrwxrwxrwx. 1 root root 17 Jun  2 17:30 server-tls -> ..data/server-tls
lrwxrwxrwx. 1 root root 17 Jun  2 17:30 server-key -> ..data/server-key
lrwxrwxrwx. 1 root root 18 Jun  2 17:30 server-cert -> ..data/server-cert
lrwxrwxrwx. 1 root root 19 Jun  2 17:30 oauth-secret -> ..data/oauth-secret

Note there is no server-tls.json, as referenced in the error message; there is only server-tls.
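
Given that mismatch, a purely hypothetical workaround sketch (not the fix that eventually landed for bug 1439451) would be to copy the existing key under the file name the proxy is looking for and roll out a fresh pod:

# Copy the server-tls key to server-tls.json in the same secret (hypothetical)
tls=$(oc get secret logging-kibana-proxy -n logging -o jsonpath="{.data['server-tls']}")
oc patch secret logging-kibana-proxy -n logging --type=json \
  -p "[{\"op\":\"add\",\"path\":\"/data/server-tls.json\",\"value\":\"${tls}\"}]"
# Force a new pod so it definitely starts with the updated secret volume
oc rollout latest dc/logging-kibana -n logging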

Comment 14 Mike Fiedler 2017-06-02 17:37:32 UTC
3. How was logging installed?  I would expect openshift-ansible to generate all the resources you require.

I installed with openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yaml using the following inventory:

[oo_first_master]
ip-172-31-11-214

[oo_first_master:vars]
openshift_deployment_type=openshift-enterprise
openshift_release=v3.6.0

openshift_logging_install_logging=true
openshift_logging_use_ops=false
openshift_logging_master_url=https://ec2-54-245-33-64.us-west-2.compute.amazonaws.com:8443
openshift_logging_master_public_url=https://ec2-54-245-33-64.us-west-2.compute.amazonaws.com:8443
openshift_logging_kibana_hostname=kibana.0530-tsx.qe.rhcloud.com
openshift_logging_namespace=logging
openshift_logging_image_prefix=registry.ops.openshift.com/openshift3/
openshift_logging_image_version=v3.6.79
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_size=25Gi
openshift_logging_fluentd_use_journal=true

Then, I swapped the kibana-proxy container with the one in comment 7

Comment 16 Xiaoli Tian 2017-06-14 08:59:14 UTC
*** Bug 1458652 has been marked as a duplicate of this bug. ***

Comment 17 Jan Wozniak 2017-06-14 11:12:32 UTC
This could possibly be relevant information: https://bugzilla.redhat.com/show_bug.cgi?id=1439451#c57

Comment 18 Jeff Cantrill 2017-06-19 21:47:08 UTC
Closing as a duplicate. Please reopen if you feel otherwise.

*** This bug has been marked as a duplicate of bug 1439451 ***

