https://github.com/openshift/origin-web-console/pull/2832
https://github.com/openshift/openshift-ansible/pull/7234
Deployed logging with a non-ops cluster. Clicking the "View Archive" link of a pod under the default project navigates to "https://kibana-ops.apps.0301-746.qe.rhcloud.com/auth/token", which is not right. See the attached picture.

# oc get po
NAME                                      READY     STATUS    RESTARTS   AGE
logging-curator-1-86nj9                   1/1       Running   0          11m
logging-es-data-master-ld1ud705-1-ljhth   2/2       Running   0          11m
logging-fluentd-5dvwd                     1/1       Running   0          11m
logging-fluentd-gwzh6                     1/1       Running   0          11m
logging-kibana-1-sztbk                    2/2       Running   0          11m

# oc get route
NAME             HOST/PORT                             PATH   SERVICES         PORT    TERMINATION          WILDCARD
logging-kibana   kibana.apps.0301-746.qe.rhcloud.com          logging-kibana   <all>   reencrypt/Redirect   None

Env:
# openshift version
openshift v3.7.23
kubernetes v1.7.6+a08f5eeb62
etcd 3.2.8

Logging component version: v3.7.35-1
Created attachment 1402340 [details] Click "View Archive" link in non-ops cluster, wrongly navigated to kibana-ops UI
Please provide the project annotations: `oc get project default -o yaml`. The annotations provide hints to the web console to properly build the URL. If the new image was deployed without running ansible, you may not have executed the code that annotates the project. Additionally, please provide the version of the web console image so we can confirm it also has the code that consumes the annotation. The change should be available in:
* v3.7.32-1
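For reference, a quick way to surface only the logging hints on the project (a minimal check; the openshift.io/logging.* annotation names are the ones that appear in the project output later in this bug):

# oc get project default -o yaml | grep 'openshift.io/logging'
    openshift.io/logging.data.prefix: .operations
    openshift.io/logging.ui.hostname: kibana-ops.<apps domain>

The second line is only expected when logging was deployed with the ops cluster enabled.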
Deployed logging with the ops cluster enabled and with it disabled; there is no exception output in the Kibana UI when using the "View Archive" link in the pod log for the default project.

# openshift version
openshift v3.7.42
kubernetes v1.7.6+a08f5eeb62
etcd 3.2.8

Images:
logging-curator/images/v3.7.42-2
logging-elasticsearch/images/v3.7.42-2
logging-kibana/images/v3.7.42-2
logging-fluentd/images/v3.7.42-2
logging-auth-proxy/images/v3.7.42-2

******************** oc get project default -o yaml ********************

Disabled ops cluster:
# oc get project default -o yaml
apiVersion: v1
kind: Project
metadata:
  annotations:
    openshift.io/logging.data.prefix: .operations
    openshift.io/node-selector: ""
    openshift.io/sa.initialized-roles: "true"
    openshift.io/sa.scc.mcs: s0:c1,c0
    openshift.io/sa.scc.supplemental-groups: 1000000000/10000
    openshift.io/sa.scc.uid-range: 1000000000/10000
  creationTimestamp: 2018-03-30T02:38:36Z
  name: default
  resourceVersion: "24066"
  selfLink: /oapi/v1/projects/default
  uid: 721944b6-33c3-11e8-a059-fa163e674f25
spec:
  finalizers:
  - kubernetes
  - openshift.io/origin
status:
  phase: Active

Enabled ops cluster:
# oc get project default -o yaml
apiVersion: v1
kind: Project
metadata:
  annotations:
    openshift.io/logging.data.prefix: .operations
    openshift.io/logging.ui.hostname: kibana-ops.apps.0329-z7y.qe.rhcloud.com
    openshift.io/node-selector: ""
    openshift.io/sa.initialized-roles: "true"
    openshift.io/sa.scc.mcs: s0:c1,c0
    openshift.io/sa.scc.supplemental-groups: 1000000000/10000
    openshift.io/sa.scc.uid-range: 1000000000/10000
  creationTimestamp: 2018-03-30T02:38:36Z
  name: default
  resourceVersion: "29423"
  selfLink: /oapi/v1/projects/default
  uid: 721944b6-33c3-11e8-a059-fa163e674f25
spec:
  finalizers:
  - kubernetes
  - openshift.io/origin
status:
  phase: Active

******************** oc get project default -o yaml ********************
There is a problem with the following scenario:

1. Deploy logging with the ops cluster enabled; logging annotations are added to the default project, such as:
   metadata:
     annotations:
       openshift.io/logging.data.prefix: .operations
       openshift.io/logging.ui.hostname: kibana-ops.apps.0403-rh2.qe.rhcloud.com

2. Undeploy logging; the logging annotations are kept on the default project:
   metadata:
     annotations:
       openshift.io/logging.data.prefix: .operations
       openshift.io/logging.ui.hostname: kibana-ops.apps.0403-rh2.qe.rhcloud.com

3. Deploy logging with a non-ops cluster; the logging annotations are still:
   metadata:
     annotations:
       openshift.io/logging.data.prefix: .operations
       openshift.io/logging.ui.hostname: kibana-ops.apps.0403-rh2.qe.rhcloud.com

If we then click the "View Archive" link in the pod log for the default project, it navigates to the kibana-ops UI, but there is no such service, so an error is thrown indicating the application is not available. There is no such issue if we deploy logging with a non-ops cluster first and then deploy logging with the ops cluster enabled.

I think there should be a fix on the openshift-ansible side: all logging annotations should be deleted after undeployment. A possible manual workaround is sketched below.
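As a stopgap until openshift-ansible cleans these annotations up on uninstall (a sketch only; the trailing '-' in `oc annotate` removes the named annotation):

# oc annotate project default openshift.io/logging.ui.hostname-
# oc annotate project default openshift.io/logging.data.prefix-

With the stale openshift.io/logging.ui.hostname annotation removed, the web console should no longer be pointed at the missing kibana-ops route when building the "View Archive" link.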
Set to VERIFIED. The issue in Comment 11 is tracked in https://bugzilla.redhat.com/show_bug.cgi?id=1563490
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0636