Description of problem:
Although it is documented that parameters for an EFK deployment can be specified in a ConfigMap [1], the values defined in the configmap are ignored and only those passed as --param arguments and those defined in the logging-deployer template are used.

[1] https://docs.openshift.com/container-platform/3.3/install_config/aggregate_logging.html#aggregate-logging-specifying-deployer-parameters

Version-Release number of selected component (if applicable):
oc v3.3.0.32
kubernetes v1.3.0+52492b4

How reproducible:
Always

Steps to Reproduce:
1. Create the configmap:

oc create -n logging configmap logging-deployer --from-literal es-cluster-size=3 --from-literal es-pvc-size=1G --from-literal es-instance-ram=512M

2. Create the app:

oc new-app --dry-run=true -n logging logging-deployer-template --param IMAGE_VERSION=3.3.0 --param MODE=install

Actual results:

--> Deploying template "logging-deployer-template" in project "openshift"

     logging-deployer-template
     ---------
     Template for running the aggregated logging deployer in a pod. Requires empowered 'logging-deployer' service account.
     * With parameters:
        * MODE=install
        * IMAGE_PREFIX=registry.access.redhat.com/openshift3/
        * IMAGE_VERSION=3.3.0
        * IMAGE_PULL_SECRET=
        * INSECURE_REGISTRY=false
        * ENABLE_OPS_CLUSTER=false
        * KIBANA_HOSTNAME=kibana.example.com
        * KIBANA_OPS_HOSTNAME=kibana-ops.example.com
        * PUBLIC_MASTER_URL=https://localhost:8443
        * MASTER_URL=https://kubernetes.default.svc.cluster.local
        * ES_CLUSTER_SIZE=1
        * ES_INSTANCE_RAM=8G
        * ES_PVC_SIZE=
        * ES_PVC_PREFIX=logging-es-
        * ES_PVC_DYNAMIC=
        * ES_NODE_QUORUM=
        * ES_RECOVER_AFTER_NODES=
        * ES_RECOVER_EXPECTED_NODES=
        * ES_RECOVER_AFTER_TIME=5m
        * ES_OPS_CLUSTER_SIZE=
        * ES_OPS_INSTANCE_RAM=8G
        * ES_OPS_PVC_SIZE=
        * ES_OPS_PVC_PREFIX=logging-es-ops-
        * ES_OPS_PVC_DYNAMIC=
        * ES_OPS_NODE_QUORUM=
        * ES_OPS_RECOVER_AFTER_NODES=
        * ES_OPS_RECOVER_EXPECTED_NODES=
        * ES_OPS_RECOVER_AFTER_TIME=5m
        * FLUENTD_NODESELECTOR=logging-infra-fluentd=true
        * ES_NODESELECTOR=
        * ES_OPS_NODESELECTOR=
        * KIBANA_NODESELECTOR=
        * KIBANA_OPS_NODESELECTOR=
        * CURATOR_NODESELECTOR=
        * CURATOR_OPS_NODESELECTOR=

--> Creating resources with label app=logging-deployer-template ...
    pod "" created
--> Success (DRY RUN)

Expected results:
The values defined in the configmap should be used (es-cluster-size=3, es-pvc-size=1G, es-instance-ram=512M)

Additional info:
It sounds like the deployer pod should be giving priority to the ConfigMap values.
I think the confusion comes from the fact that the output from oc new-app prints out the values provided by the deployer template as defaults. Within the actual deployer pod, the install script gives preference to the configmap values. Is there a way to have a template pull default values from a configmap, so that oc new-app with --dry-run would print out the anticipated values?
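The precedence described here can be sketched in shell. This is a hypothetical simplification, not the install script's actual code; the variable names are made up for illustration:

```shell
# Sketch of the precedence the deployer's install script applies:
# a value read from the mounted configmap, when present, wins over
# the template parameter passed in through the pod environment.
ES_CLUSTER_SIZE=1        # template default, as shown in the new-app output
configmap_value=3        # value the pod reads from the mounted configmap

# Use the configmap value when set, otherwise fall back to the parameter.
es_cluster_size=${configmap_value:-$ES_CLUSTER_SIZE}
echo "es_cluster_size=$es_cluster_size"
```

With the configmap value present, the script ends up using 3 even though new-app printed the template default of 1.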
Eric, currently it's not possible. I do see how this would be helpful though. Copying Ben to get his thoughts.
I want to make sure we understand the issue here.

Is the problem that the configmap values are not being used? If so, as Cesar noted, that is a problem w/ the deployer pod not giving priority to the configmap values.

or

Is the problem that it's confusing for new-app to print out the template parameter values when they may not be relevant (because values from a configmap are overriding them)? If so, there's not much we can do (we have no control over how template parameter values are/aren't used by the things the template defines).

I agree that there is potential for an RFE to allow new-app to take a configmap name from which it extracts template parameter values, and that would avoid the confusion, assuming users knew to pass that configmap when instantiating this particular template. But before we accept that RFE, I want to understand the issue we hit in this case, because while it's a valid RFE, I don't know that I'd prioritize it very highly, especially now that template parameter values can be supplied via a file.
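Supplying template parameter values via a file could look like the following sketch. The file name is arbitrary, and the oc invocation is commented out since it needs a cluster and an oc client recent enough to support --param-file:

```shell
# Collect the parameter values in a key=value file instead of
# repeating --param flags (file name is arbitrary).
cat > logging-params.env <<'EOF'
IMAGE_VERSION=3.3.0
MODE=install
ES_CLUSTER_SIZE=3
EOF

# Requires a cluster and --param-file support, so shown commented out:
# oc new-app logging-deployer-template --param-file=logging-params.env

grep '^ES_CLUSTER_SIZE=' logging-params.env
```

This keeps all parameter values in one reviewable place rather than scattered across flags.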
The issue is with the new-app output. The confusion is that the new-app output does reflect the values passed to it as --param arguments but does not reflect the values defined in the configmap, even though those are used correctly in the end. The issue is the inconsistency, and the suggestion would be to have the new-app output reflect all values that have been defined by the user, whether via configmap, new-app arguments, or elsewhere.

For example:

oc create configmap logging-deployer --from-literal kibana-hostname=ignorethis.lab --from-literal public-master-url=https://master.lab:8443 --from-literal es-cluster-size=1 --from-literal es-instance-ram=512M
configmap "logging-deployer" created

oc new-app logging-deployer-template --param IMAGE_VERSION=3.3.0 --param MODE=install
--> Deploying template "logging-deployer-template" in project "openshift"

     logging-deployer-template
     ---------
     Template for running the aggregated logging deployer in a pod. Requires empowered 'logging-deployer' service account.

     * With parameters:
        * MODE=install
        * IMAGE_PREFIX=registry.access.redhat.com/openshift3/
        * IMAGE_VERSION=3.3.0
        * IMAGE_PULL_SECRET=
        * INSECURE_REGISTRY=false
        * ENABLE_OPS_CLUSTER=false
        * KIBANA_HOSTNAME=kibana.example.com
        * KIBANA_OPS_HOSTNAME=kibana-ops.example.com
        * PUBLIC_MASTER_URL=https://localhost:8443
        * MASTER_URL=https://kubernetes.default.svc.cluster.local
        * ES_CLUSTER_SIZE=1
        * ES_INSTANCE_RAM=8G
...

As expected, IMAGE_VERSION is shown as 3.3.0 because it was set with --param IMAGE_VERSION=3.3.0. It would also be expected that KIBANA_HOSTNAME, PUBLIC_MASTER_URL, and ES_INSTANCE_RAM would match the values given in the configmap, but they show the values provided by the template. From the logs, however, you can see that the values defined in the configmap are what the deployer actually uses:

oc logs logging-deployer-jlvr9
...
+ hostname=ignorethis.lab
+ public_master_url=https://master.lab:8443
+ es_instance_ram=512M
...
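The deployer log lines above (es_instance_ram=512M and so on) suggest the install script turns the dashed configmap key names into underscore shell variable names. A minimal illustration of that mapping follows; the loop itself is hypothetical, not the script's actual code:

```shell
# Illustrative only: dashed configmap key names cannot be shell variable
# names, so dashes are translated to underscores, matching the variable
# names seen in the deployer pod's log output.
for key in kibana-hostname public-master-url es-instance-ram; do
    var=$(echo "$key" | tr '-' '_')
    echo "$key -> $var"
done
```

So the configmap key es-instance-ram surfaces in the pod logs as the variable es_instance_ram.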
Yeah, so that's really not possible. There's no way for new-app to know that it should look for a particular configmap in your project, or that particular values provided in that configmap are going to be used by the application the template is deploying instead of the template parameter values. The logging-deployer template isn't even creating the configmap, so new-app has no way to know that:

1) some configmap exists in your project
2) the keys from that configmap are relevant to the pods this template is going to create
3) the keys from that configmap supersede particular parameters defined in the template

The only thing that can be done here is possibly better doc (either in the instructions, or in the template's description/message) making it clear that the parameter values may be superseded by configmap values if you've defined a configmap named FOO with keys named BAR.

That being the case, Eric is still the right owner, since integration services owns the template in question.
The documentation changes look good to me. Set to verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:0289