Bug 1326565 - logging-support-template is not automatically processed by deployer
Summary: logging-support-template is not automatically processed by deployer
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OKD
Classification: Red Hat
Component: Logging
Version: 3.x
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Luke Meyer
QA Contact: chunchen
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-13 05:32 UTC by Xia Zhao
Modified: 2016-09-30 02:17 UTC
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-09-19 13:50:10 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
deployer_pod_log (26.56 KB, text/plain)
2016-04-13 05:32 UTC, Xia Zhao

Description Xia Zhao 2016-04-13 05:32:32 UTC
Created attachment 1146698 [details]
deployer_pod_log

Problem description: 
When logging is deployed with ENABLE_OPS_CLUSTER=true, the logging-support-template is not processed by the deployer. The workaround is to manually run "oc process logging-support-template | oc create -f -".
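As a sketch, the workaround can be scripted as below (guarded so it is a no-op where the oc client is not installed; the -n openshift variant is an assumption for the case where the template was loaded into the shared namespace rather than the current project):

```shell
# Workaround sketch: process the support template manually and create
# the resulting objects. Guarded so the snippet does nothing when the
# oc client is unavailable.
TEMPLATE=logging-support-template
if command -v oc >/dev/null 2>&1; then
  oc process "$TEMPLATE" | oc create -f -
  # If the template lives in the shared namespace instead (an
  # assumption), add -n openshift to the oc process call.
fi
```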

Version-Release number of selected component (if applicable):
openshift/origin-logging-curator    07e013fb74a4
openshift/origin-logging-fluentd    f841fe531e98
openshift/origin-logging-elasticsearch    80ccbf3e9509
openshift/origin-logging-deployment    ff9e087ade6e
openshift/origin-logging-kibana    eda6efd4df85
openshift/origin-logging-auth-proxy    2f0fc5db512e

How reproducible:
Always

Steps to Reproduce:
1. Create the logging project:
# oc login <openshift-master>
# oc new-project logging

2. Make sure the logging deployer templates exist in your OpenShift cluster:
# oc get template -n openshift | grep logging
logging-deployer-account-template   Template for creating the deployer account and roles needed for the aggregate...   0 (all set)     3
logging-deployer-template           Template for running the aggregated logging deployer in a pod. Requires empow...   27 (16 blank)   1

3. Create an empty deployer secret:
# oc secrets new logging-deployer nothing=/dev/null

4. Create supporting ServiceAccounts:
Run on master:
# oadm policy add-cluster-role-to-user cluster-admin <oc-client-user>         
# oadm policy add-scc-to-user privileged system:serviceaccount:logging:aggregated-logging-fluentd
# oadm policy add-cluster-role-to-user cluster-reader system:serviceaccount:logging:aggregated-logging-fluentd
# oc label node --all logging-infra-fluentd=true                      

Run on oc client in project logging:
# oc process -n openshift logging-deployer-account-template | oc create -f - 
# oc policy add-role-to-user edit --serviceaccount logging-deployer
# oc policy add-role-to-user daemonset-admin --serviceaccount logging-deployer
# oadm policy add-cluster-role-to-user oauth-editor system:serviceaccount:logging:logging-deployer

5. Run the deployer with the ops cluster enabled:
# oc process -n openshift logging-deployer-template -v ENABLE_OPS_CLUSTER=true,\
    IMAGE_PREFIX=openshift/origin-,\
    KIBANA_HOSTNAME={kibana.router},\
    KIBANA_OPS_HOSTNAME={kibana-ops.router},\
    PUBLIC_MASTER_URL=https://{master-domain}:8443,\
    ES_INSTANCE_RAM=1024M,\
    ES_CLUSTER_SIZE=1,\
    IMAGE_VERSION=latest,\
    MASTER_URL=https://{master-domain}:8443 \
    | oc create -f -

6. Wait a while, then check whether the CEFK pods are running
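For step 6, a minimal check might look like the following sketch (the pod names are the defaults the deployer creates; the logging namespace is assumed):

```shell
# Check sketch: list pods in the logging project and count how many
# logging components have reached Running state. Guarded so the
# snippet does nothing when the oc client is unavailable.
PROJECT=logging
if command -v oc >/dev/null 2>&1; then
  oc get pods -n "$PROJECT"
  # Expect curator, es (elasticsearch), fluentd, and kibana pods,
  # plus their -ops counterparts when ENABLE_OPS_CLUSTER=true.
  oc get pods -n "$PROJECT" | grep -c Running
fi
```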

Actual Result:
The curator/es/kibana pods are not running
$ oc get po
NAME                     READY     STATUS      RESTARTS   AGE
logging-deployer-5sy0l   0/1       Completed   0          1h
logging-fluentd-f3tyg    1/1       Running     0          1h

Expected Result:
All the CEFK pods should be running

Additional info:
1. The issue did not reproduce when ENABLE_OPS_CLUSTER was set to false
2. Deployer log attached
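To capture a deployer log like the one attached, something along these lines should work; the pod name is looked up rather than hard-coded, since the deployer pod gets a random suffix (the grep on the logging-deployer naming convention is an assumption):

```shell
# Log-collection sketch: find the deployer pod in the logging project
# and dump its log to a file. Guarded so the snippet does nothing
# when the oc client is unavailable.
LOGFILE=deployer_pod_log
if command -v oc >/dev/null 2>&1; then
  POD=$(oc get pods -n logging -o name | grep logging-deployer | head -n 1)
  [ -n "$POD" ] && oc logs -n logging "$POD" > "$LOGFILE"
fi
```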

Comment 1 ewolinet 2016-04-13 16:35:42 UTC
Can you please provide the logs for the deployer when ENABLE_OPS_CLUSTER is false?

Comment 2 Xia Zhao 2016-04-14 03:40:01 UTC
I found it is also reproducible when ENABLE_OPS_CLUSTER is false, so I changed the bug title. I'm not sure whether this is a doc issue for https://github.com/openshift/origin-aggregated-logging/tree/master/deployment?

Comment 3 Luke Meyer 2016-04-14 12:20:45 UTC
Have we pushed out a deployer image with the latest updates? The change to have all user steps up front and the deployer finish with a working deployment is fairly recent...

Guess I can just pull the deployer from dockerhub and see.

Comment 4 Luke Meyer 2016-04-14 13:23:14 UTC
It's definitely an old image. The install.sh script hasn't been extracted from the run.sh script yet.

Comment 5 Luke Meyer 2016-04-14 14:10:40 UTC
New Origin deployer image has been pushed.

Comment 6 Xia Zhao 2016-04-18 05:57:17 UTC
Verified with the latest deployer image; the issue is fixed and the CEFK pods are running.
Images tested:
openshift/origin-logging-deployment    b89dba477f35
openshift/origin-logging-curator    07e013fb74a4
openshift/origin-logging-fluentd    f841fe531e98
openshift/origin-logging-elasticsearch    80ccbf3e9509
openshift/origin-logging-kibana    eda6efd4df85
openshift/origin-logging-auth-proxy    2f0fc5db512e

