Bug 1323689 - kibana-ops not working as expected
Summary: kibana-ops not working as expected
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.1.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Luke Meyer
QA Contact: chunchen
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-04 12:38 UTC by Jaspreet Kaur
Modified: 2019-10-10 11:46 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-04-14 13:11:16 UTC
Target Upstream Version:



Description Jaspreet Kaur 2016-04-04 12:38:47 UTC
Description of problem: kibana-ops is expected to show operations logs, but it does not. It is also unclear how operations logs and project logs can be accessed separately.

The kibana-ops UI does not show the .operations.* index, and no operations logs appear in it.


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Followed the steps in the documentation below:

https://docs.openshift.com/enterprise/3.1/install_config/aggregate_logging.html#overview

oc project logging

2. mkdir /etc/origin/logging

# oadm ca create-server-cert --signer-cert=/etc/origin/master/ca.crt --signer-key=/etc/origin/master/ca.key  --signer-serial=/etc/origin/master/ca.serial.txt --hostnames='kibana-ops.apps.example.com' --cert=/etc/origin/logging/kibana-ops.crt --key=/etc/origin/logging/kibana-ops.key

oadm ca create-server-cert --signer-cert=/etc/origin/master/ca.crt --signer-key=/etc/origin/master/ca.key  --signer-serial=/etc/origin/master/ca.serial.txt --hostnames='kibana.apps.example.com' --cert=/etc/origin/logging/kibana.crt --key=/etc/origin/logging/kibana.key

# oc secrets new logging-deployer kibana.crt=/etc/origin/logging/kibana.crt kibana.key=/etc/origin/logging/kibana.key kibana-ops.crt=/etc/origin/logging/kibana-ops.crt  kibana-ops.key=/etc/origin/logging/kibana-ops.key


3. $ oc create -f - <<API
apiVersion: v1
kind: ServiceAccount
metadata:
  name: logging-deployer
secrets:
- name: logging-deployer
API

4. $ oc policy add-role-to-user edit \
            system:serviceaccount:logging:logging-deployer

5. $ oadm policy add-scc-to-user  \
    privileged system:serviceaccount:logging:aggregated-logging-fluentd

6. $ oadm policy add-cluster-role-to-user cluster-reader \
    system:serviceaccount:logging:aggregated-logging-fluentd

7. 
 oc process logging-deployer-template -n openshift \
           -v KIBANA_HOSTNAME=kibana.apps.example.com,ES_CLUSTER_SIZE=1,PUBLIC_MASTER_URL=https://master1.example.com:8443,ENABLE_OPS_CLUSTER=true,KIBANA_OPS_HOSTNAME=kibana-ops.apps.example.com\
           | oc create -f -
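The ops cluster is only deployed when ENABLE_OPS_CLUSTER=true actually reaches the template; a stray space or missing comma in the comma-separated `-v` list can silently drop parameters. A minimal local sanity check on the parameter string from step 7 (the `PARAMS` variable is illustrative, not part of the deployer):

```shell
# The exact parameter string handed to `oc process -v` in step 7.
PARAMS='KIBANA_HOSTNAME=kibana.apps.example.com,ES_CLUSTER_SIZE=1,PUBLIC_MASTER_URL=https://master1.example.com:8443,ENABLE_OPS_CLUSTER=true,KIBANA_OPS_HOSTNAME=kibana-ops.apps.example.com'

# Split on commas and verify the ops flag survives intact.
echo "$PARAMS" | tr ',' '\n' | grep -x 'ENABLE_OPS_CLUSTER=true' \
  && echo "ops cluster enabled" \
  || echo "ops cluster NOT enabled"
```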

oc get pods
NAME                          READY     STATUS      RESTARTS   AGE
logging-deployer-31jks        0/1       Completed   0          47m
logging-es-b4iatq7j-1-pjx8q   1/1       Running     0          41m
logging-fluentd-1-m020i       1/1       Running     0          41m
logging-fluentd-1-p94g2       1/1       Running     0          41m
logging-kibana-1-rfgzm        2/2       Running     0          40m
[root@master1 ~]# 


 oc get route
NAME         HOST/PORT                     PATH      SERVICE              LABELS                                                       INSECURE POLICY   TLS TERMINATION
kibana       kibana.apps.example.com                 logging-kibana       component=support,logging-infra=support,provider=openshift                     passthrough
kibana-ops   kibana-ops.apps.example.com             logging-kibana-ops   component=support,logging-infra=support,provider=openshift                     passthrough
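Both Kibana routes are created with passthrough TLS termination. A quick check against the route listing pasted above (here condensed into a `ROUTES` variable for illustration; on a live cluster you would parse `oc get route` directly):

```shell
# Condensed copy of the `oc get route` output: name, host, service, termination.
ROUTES='kibana       kibana.apps.example.com       logging-kibana       passthrough
kibana-ops   kibana-ops.apps.example.com   logging-kibana-ops   passthrough'

# Flag any route whose TLS termination is not passthrough.
echo "$ROUTES" | awk '$NF != "passthrough" { bad=1; print $1 " is not passthrough" }
                      END { if (!bad) print "both routes passthrough" }'
# → both routes passthrough
```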





Actual results: Accessed the route kibana-ops.apps.example.com as a user with cluster-admin rights. It is always empty and does not show .operations.*


Expected results: It should show operations logs.


Additional info:

Comment 1 Rich Megginson 2016-04-05 15:42:25 UTC
The output looks like there was some problem deploying with the OPS cluster:

oc get pods
NAME                          READY     STATUS      RESTARTS   AGE
logging-deployer-31jks        0/1       Completed   0          47m
logging-es-b4iatq7j-1-pjx8q   1/1       Running     0          41m
logging-fluentd-1-m020i       1/1       Running     0          41m
logging-fluentd-1-p94g2       1/1       Running     0          41m
logging-kibana-1-rfgzm        2/2       Running     0          40m

If you were using the OPS cluster you would have logging-es-ops-xxx and logging-kibana-ops-xxx pods running.
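The missing pods can be confirmed mechanically. Below is a sketch that filters the pod listing captured in the report for ops pods (the `PODS` variable reproduces that listing; on a live cluster this would be `oc get pods | grep -- '-ops-'`):

```shell
# Pod names captured from the report; a working ops deployment would add
# logging-es-ops-* and logging-kibana-ops-* entries to this list.
PODS='logging-deployer-31jks
logging-es-b4iatq7j-1-pjx8q
logging-fluentd-1-m020i
logging-fluentd-1-p94g2
logging-kibana-1-rfgzm'

# `--` keeps grep from treating the leading dash in the pattern as an option.
echo "$PODS" | grep -- '-ops-' || echo "no ops pods found"
# → no ops pods found
```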

Let's take a look at the logs from the deployer:

# oc logs logging-deployer-31jks

Comment 2 Luke Meyer 2016-04-05 16:56:37 UTC
The deployer pod is probably gone by now. But yes, attach a log from the deployer if you can reproduce this. You should definitely be seeing -ops pods in the list. Otherwise I'd expect the ops service/route to give you a nice big 503 error, not a blank screen.

In addition to deployer logs, logs from fluentd would be helpful. Do you see anything when visiting the non-ops kibana?

One other question: is this a containerized install or an RPM-based install?

Comment 14 Jeff Cantrill 2016-04-14 13:11:16 UTC
Closing per https://bugzilla.redhat.com/show_bug.cgi?id=1323689#c13

