Hi Eric,

Running "$ oc get templates --all-namespaces" found the expected templates, but in the default namespace:

NAMESPACE   NAME                           DESCRIPTION                                                                       PARAMETERS        OBJECTS
default     logging-es-template            Template for deploying ElasticSearch with proxy/plugin for storing and retrie...  6 (1 generated)   1
default     logging-fluentd-template       Template for logging fluentd deployment. Currently uses terrible kludge to re...  0 (all set)       1
default     logging-kibana-template        Template for deploying log viewer Kibana connecting to ElasticSearch to visua...  0 (all set)       1
default     logging-support-pre-template   Template for deploying logging services and service accounts.                     0 (all set)       9
default     logging-support-template       Template for deploying logging support entities: oauth, service accounts, ser...  0 (all set)       7

So the question is why the deployer created the templates in the default namespace while running inside the logging project. Please feel free to let me know if there is anything else you need me to provide. Thanks, Xia
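For anyone reproducing this, the per-namespace check is roughly the following (a sketch; the namespace names are the ones used in this report):

```shell
# Templates were expected in the "logging" project but landed in "default".
# Compare the two namespaces directly:
oc get templates -n logging
oc get templates -n default
```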
Set severity to high since this blocks OSE 3.2 logging testing.
@lmeyer The reason I gave the logging-deployer serviceaccount the "cluster-admin" role instead of "edit" is https://bugzilla.redhat.com/show_bug.cgi?id=1321855; I worked around the error there as described in the last paragraph of comment #2: <----quoted comment start----> After doing "oadm policy add-role-to-user cluster-admin system:serviceaccount:logging:logging-deployer" on master machine, the logging deployer can complete successfully. <----quoted comment end----> Please let me know if this workaround is not appropriate for OSE; if so, I should reopen that bug.
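For reference, the least-privilege binding (rather than cluster-admin) would look something like this — a sketch assuming the project is named "logging" and run by a user with admin rights on that project:

```shell
# Grant only the project-scoped "edit" role to the deployer service account,
# instead of the cluster-wide cluster-admin role.
oc policy add-role-to-user edit \
    system:serviceaccount:logging:logging-deployer -n logging
```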
We should never be adding cluster-admin to any service account; it's a security hazard. Sorry that suggestion was made. https://bugzilla.redhat.com/show_bug.cgi?id=1321855 looks to me like much the same issue as here so I'd prefer to focus on this one. I reproduced the problem using your test environment and some different projects. The problem is that the latest 3.1 image deletes the kubeconfig it creates before running the oc commands it's intended for. I'm not sure where it's getting its kubeconfig from but perhaps it's smart enough to use the service account secret. In any case, after its kubeconfig is destroyed it defaults to the "default" project. I'll work on the fix.
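The failure mode described above can be sketched like this (hypothetical paths and steps, not the deployer's actual script):

```shell
# The deployer writes its own kubeconfig, then removes it before the
# oc commands that need it actually run. oc then falls back to another
# config (likely the service account credentials), whose current project
# is "default", so objects are created there instead of in "logging".
export KUBECONFIG=/tmp/deployer.kubeconfig   # hypothetical path
# ... deployer populates $KUBECONFIG pointed at the "logging" project ...
rm "$KUBECONFIG"                             # bug: deleted too early
oc create -f templates/                      # falls back, lands in "default"
```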
Should be fixed with 3.1.1-12 (I tested and it ran fine)
Verified with deployer image 3.1.1-12; the issue is fixed. Thanks for your effort here, Luke.
Closing this bug as it was fixed before release.