Bug 1323689

Summary: kibana-ops not working as expected
Product: OpenShift Container Platform
Reporter: Jaspreet Kaur <jkaur>
Component: Logging
Assignee: Luke Meyer <lmeyer>
Status: CLOSED NOTABUG
QA Contact: chunchen <chunchen>
Severity: high
Docs Contact:
Priority: high
Version: 3.1.0
CC: aos-bugs, ewolinet, jcantril, jkaur, jnordell, lmeyer, misalunk, rmeggins, wsun
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-04-14 13:11:16 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Jaspreet Kaur 2016-04-04 12:38:47 UTC
Description of problem: kibana-ops is expected to show operations logs, but it does not. It is also unclear how to access operations logs and project logs separately.

The kibana-ops UI does not show the .operations.* indices, and no operations logs appear in it.


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Follow the steps in the documentation:

https://docs.openshift.com/enterprise/3.1/install_config/aggregate_logging.html#overview

oc project logging

2. mkdir /etc/origin/logging

# oadm ca create-server-cert --signer-cert=/etc/origin/master/ca.crt --signer-key=/etc/origin/master/ca.key  --signer-serial=/etc/origin/master/ca.serial.txt --hostnames='kibana-ops.apps.example.com' --cert=/etc/origin/logging/kibana-ops.crt --key=/etc/origin/logging/kibana-ops.key

oadm ca create-server-cert --signer-cert=/etc/origin/master/ca.crt --signer-key=/etc/origin/master/ca.key  --signer-serial=/etc/origin/master/ca.serial.txt --hostnames='kibana.apps.example.com' --cert=/etc/origin/logging/kibana.crt --key=/etc/origin/logging/kibana.key

# oc secrets new logging-deployer kibana.crt=/etc/origin/logging/kibana.crt kibana.key=/etc/origin/logging/kibana.key kibana-ops.crt=/etc/origin/logging/kibana-ops.crt  kibana-ops.key=/etc/origin/logging/kibana-ops.key


3. $ oc create -f - <<API
apiVersion: v1
kind: ServiceAccount
metadata:
  name: logging-deployer
secrets:
- name: logging-deployer
API

4. $ oc policy add-role-to-user edit \
            system:serviceaccount:logging:logging-deployer

5. $ oadm policy add-scc-to-user  \
    privileged system:serviceaccount:logging:aggregated-logging-fluentd

6. $ oadm policy add-cluster-role-to-user cluster-reader \
    system:serviceaccount:logging:aggregated-logging-fluentd

7. $ oc process logging-deployer-template -n openshift \
           -v KIBANA_HOSTNAME=kibana.apps.example.com,ES_CLUSTER_SIZE=1,PUBLIC_MASTER_URL=https://master1.example.com:8443,ENABLE_OPS_CLUSTER=true,KIBANA_OPS_HOSTNAME=kibana-ops.apps.example.com \
           | oc create -f -
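One thing worth double-checking in step 7: the deployer template takes all of its parameters as a single comma-separated string after `-v`, and a stray space or missing comma can silently drop a parameter such as ENABLE_OPS_CLUSTER. A quick local sanity check (the string below is copied from the command above; no cluster access needed):

```shell
# The exact parameter string passed to `oc process -v` in step 7.
params='KIBANA_HOSTNAME=kibana.apps.example.com,ES_CLUSTER_SIZE=1,PUBLIC_MASTER_URL=https://master1.example.com:8443,ENABLE_OPS_CLUSTER=true,KIBANA_OPS_HOSTNAME=kibana-ops.apps.example.com'

# Split on commas and print one KEY=VALUE pair per line to eyeball the list.
printf '%s\n' "$params" | tr ',' '\n'

# Confirm the ops cluster flag survived the split exactly as intended.
printf '%s\n' "$params" | tr ',' '\n' | grep -c '^ENABLE_OPS_CLUSTER=true$'
```

If the final grep prints 1, the flag was passed intact; if it prints 0, the parameter string was mangled and the ops cluster would never be requested.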

oc get pods
NAME                          READY     STATUS      RESTARTS   AGE
logging-deployer-31jks        0/1       Completed   0          47m
logging-es-b4iatq7j-1-pjx8q   1/1       Running     0          41m
logging-fluentd-1-m020i       1/1       Running     0          41m
logging-fluentd-1-p94g2       1/1       Running     0          41m
logging-kibana-1-rfgzm        2/2       Running     0          40m


 oc get route
NAME         HOST/PORT                     PATH      SERVICE              LABELS                                                       INSECURE POLICY   TLS TERMINATION
kibana       kibana.apps.example.com                 logging-kibana       component=support,logging-infra=support,provider=openshift                     passthrough
kibana-ops   kibana-ops.apps.example.com             logging-kibana-ops   component=support,logging-infra=support,provider=openshift                     passthrough





Actual results: Accessing the route kibana-ops.apps.example.com with a user that has cluster-admin rights always shows an empty Kibana with no .operations.* indices.


Expected results: It should show operational logs.


Additional info:

Comment 1 Rich Megginson 2016-04-05 15:42:25 UTC
The output looks like there was some problem deploying with the OPS cluster:

oc get pods
NAME                          READY     STATUS      RESTARTS   AGE
logging-deployer-31jks        0/1       Completed   0          47m
logging-es-b4iatq7j-1-pjx8q   1/1       Running     0          41m
logging-fluentd-1-m020i       1/1       Running     0          41m
logging-fluentd-1-p94g2       1/1       Running     0          41m
logging-kibana-1-rfgzm        2/2       Running     0          40m

If you were using the OPS cluster you would have logging-es-ops-xxx and logging-kibana-ops-xxx pods running.
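The missing pods can be confirmed mechanically from the paste itself; a minimal sketch, using the pod names exactly as listed in the report:

```shell
# Pod names as pasted in the bug report. With ENABLE_OPS_CLUSTER=true we would
# also expect logging-es-ops-* and logging-kibana-ops-* entries in this list.
pods='logging-deployer-31jks
logging-es-b4iatq7j-1-pjx8q
logging-fluentd-1-m020i
logging-fluentd-1-p94g2
logging-kibana-1-rfgzm'

# Count pods whose name contains "-ops"; `|| true` keeps the pipeline from
# failing when grep finds no matches (grep -c prints 0 but exits nonzero).
ops_count=$(printf '%s\n' "$pods" | grep -c -- '-ops' || true)
echo "$ops_count"   # 0 here, i.e. the ops cluster never came up
```

A count of 0 matches the symptom: the kibana-ops route exists, but there is no ops Elasticsearch or Kibana behind it.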

Let's take a look at the logs from the deployer:

# oc logs logging-deployer-31jks

Comment 2 Luke Meyer 2016-04-05 16:56:37 UTC
The deployer pod is probably gone by now. But yes, attach a log from the deployer if you can reproduce this. You should definitely be seeing -ops pods in the list. Otherwise I'd expect the ops service/route to give you a nice big 503 error, not a blank screen.

In addition to deployer logs, logs from fluentd would be helpful. Do you see anything when visiting the non-ops kibana?

One other question: is this a containerized install or an RPM-based install?

Comment 14 Jeff Cantrill 2016-04-14 13:11:16 UTC
Closing per https://bugzilla.redhat.com/show_bug.cgi?id=1323689#c13