Bug 1813209
| Summary: | Kibana console isn't accessible when https_proxy is enabled in the cluster. | |||
|---|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Qiaoling Tang <qitang> | |
| Component: | Logging | Assignee: | ewolinet | |
| Status: | CLOSED ERRATA | QA Contact: | Qiaoling Tang <qitang> | |
| Severity: | high | Docs Contact: | ||
| Priority: | medium | |||
| Version: | 4.4 | CC: | anli, aos-bugs, jcantril, mburke, periklis | |
| Target Milestone: | --- | |||
| Target Release: | 4.5.0 | |||
| Hardware: | Unspecified | |||
| OS: | Unspecified | |||
| Whiteboard: | ||||
| Fixed In Version: | Doc Type: | If docs needed, set a value | ||
| Doc Text: | Story Points: | --- | ||
| Clone Of: | ||||
| : | 1818650 1818651 | Environment: ||
| Last Closed: | 2020-07-13 17:19:58 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | Category: | --- | ||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
| Embargoed: | ||||
| Bug Depends On: | 1823606, 1826793 | |||
| Bug Blocks: | 1818650, 1818651 | |||
Comment 7
Michael Burke
2020-03-30 01:33:42 UTC
I did not have to redeploy cluster logging. I updated proxy/cluster to add some proxy information and could not get into Kibana. I then edited proxy/cluster to add `.apps.<cluster_name>.<base_domain>` and was able to access Kibana. Is that how this should work?

Eric, how do we inject the trusted CA into the Kibana pod?

I tried deleting the deploy/kibana, then waited for the new deploy/kibana to be created; `.apps.<cluster_name>.<base_domain>` is added to the no_proxy and NO_PROXY env variables in the new deployment/kibana.

The old deploy/kibana: http://pastebin.test.redhat.com/850462
The new deploy/kibana: http://pastebin.test.redhat.com/850465

> The trusted CA should be injected into a configmap for the kibana pod to consume and use.

How does the user do this?
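For context, the edit described above is a change to the cluster-wide Proxy object. A minimal sketch (the `noProxy` entry is the one added by hand; the proxy URLs and placeholder values are illustrative, not taken from this cluster):

```yaml
# Sketch of the cluster-wide proxy object (config.openshift.io/v1, name "cluster").
# The .apps.<cluster_name>.<base_domain> entry is the one added manually above;
# httpProxy/httpsProxy values are placeholders.
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://<proxy_host>:<port>
  httpsProxy: http://<proxy_host>:<port>
  noProxy: .apps.<cluster_name>.<base_domain>
```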
> I tried to delete the deploy/kibana, then wait for the new deploy/kibana to be created, the `.apps.<cluster_name>.<base_domain>` is added to the no_proxy and NO_PROXY env variables in the new deployment/kibana.

Right, the deployment needs to be updated. Was the operator not doing this in time? It did on my own local cluster...

> How does the user do this?

The user doesn't do that directly. They would be updating the additional trust bundle for the proxy/cluster object as described here [1].

[1] https://docs.openshift.com/container-platform/4.3/networking/configuring-a-custom-pki.html#nw-proxy-configure-object_configuring-a-custom-pki
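As a sketch of what the custom-PKI procedure referenced above describes (the configmap name `user-ca-bundle` is the conventional example from the docs, not something specific to this bug, and the certificate body is a placeholder):

```yaml
# ConfigMap holding the additional trust bundle, created in openshift-config.
# The key must be ca-bundle.crt.
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-ca-bundle
  namespace: openshift-config
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    <your proxy CA certificate>
    -----END CERTIFICATE-----
---
# The cluster-wide proxy object then references it via spec.trustedCA.
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  trustedCA:
    name: user-ca-bundle
```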
> >I tried to delete the deploy/kibana, then wait for the new deploy/kibana to be created, the `.apps.<cluster_name>.<base_domain>` is added to the no_proxy and NO_PROXY env variables in the new deployment/kibana.
>
> Right, the deployment needs to be updated. Was the operator not doing this
> in time? It did on my own local cluster...
>
Yes, in my cluster, the CLO didn't update the deploy/kibana.
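For reference, once the operator does reconcile the deployment, the proxy env vars in deployment/kibana should look roughly like the fragment below (a hedged sketch; the exact values depend on the cluster's proxy settings, and the extra NO_PROXY entries are illustrative):

```yaml
# Fragment of the kibana container spec after reconciliation (illustrative values).
env:
  - name: HTTP_PROXY
    value: http://<proxy_host>:<port>
  - name: HTTPS_PROXY
    value: http://<proxy_host>:<port>
  - name: NO_PROXY
    value: .apps.<cluster_name>.<base_domain>,.cluster.local,.svc,localhost,127.0.0.1
  - name: no_proxy
    value: .apps.<cluster_name>.<base_domain>,.cluster.local,.svc,localhost,127.0.0.1
```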
Michael, we should not be advising users to delete deployment objects.

Qiaoling, were there any errors in your CLO pod? Anything to indicate something was stalled/stuck? The updating of env vars is the same as any other update the operator would make to the deployment (resources, image name, etc.).

Hi Eric, I opened a new bug to track the issue that the CLO couldn't update the deploy/kibana after the cluster proxy changed: https://bugzilla.redhat.com/show_bug.cgi?id=1823606

Blocked by https://bugzilla.redhat.com/show_bug.cgi?id=1826793 because the PR needs cherry-pick approval.

It looks like the prior blockers have been verified; can we please retest this?

I'll test it when I can launch a cluster with https_proxy; currently, my jobs are failing. I'll update the status when I finish my testing.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409