Red Hat Bugzilla – Bug 1476713
nodeSelector lost in logging-es deploymentConfigs after update
Last modified: 2017-08-06 03:16:47 EDT
Description of problem:
After an upgrade using the Ansible playbooks (v3.5), the nodeSelector configuration in the logging-es-xxxx deploymentConfigs is lost, forcing the customer to restore it manually.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Install logging with the default configuration; a single Elasticsearch node is enough
2. Edit the deploymentConfig and set a nodeSelector
3. Run the playbook again
Actual results:
The nodeSelector attribute is lost
Expected results:
The nodeSelector should be kept
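Step 2 above can be sketched with `oc patch`; the node label, node name, and DC name suffix below are illustrative assumptions, not taken from the report:

```sh
# Hypothetical example: label a node, then pin the ES deploymentConfig to it.
oc label node node1.example.com logging-es-node=1

# Set the nodeSelector on the pod template of the (assumed) ES DC.
oc patch dc/logging-es-abc123 -p \
  '{"spec":{"template":{"spec":{"nodeSelector":{"logging-es-node":"1"}}}}}'
```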
Update your inventory file to set the node selector to use for the ES deploymentConfig: https://github.com/openshift/openshift-ansible/blob/release-1.5/roles/openshift_logging/defaults/main.yml#L92
@Jeff, as far as I know, this is valid when the nodeSelector is the same for all the generated DeploymentConfigs. In this case, we are trying to define a different nodeSelector for each of the DeploymentConfigs.
The purpose is to have each pod deployed on specific nodes; if no nodeSelector is provided in the inventory, the existing one should not be changed.
The install/upgrade process functions more as a replace than a patch: it does not currently provide the ability to retain customized data. This is an enhancement we would need to look at, and it is likely not to be necessary once daemonsets are in play.
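The "replace rather than patch" behavior can be illustrated with a small Python sketch. The dict shapes below are simplified stand-ins for the real deploymentConfig pod templates, not openshift-ansible code:

```python
# Simplified stand-ins for a DC pod template; not real openshift-ansible code.

def replace_template(current, generated):
    """Replace semantics: the regenerated template wins wholesale, so any
    field the generator does not set (e.g. nodeSelector) is lost."""
    return dict(generated)

def merge_template(current, generated):
    """Patch/merge semantics: start from the live object and overlay only
    the keys the generator actually produced."""
    merged = dict(current)
    merged.update(generated)
    return merged

live = {"image": "logging-elasticsearch:3.5", "nodeSelector": {"zone": "es-1"}}
regenerated = {"image": "logging-elasticsearch:3.5"}  # playbook output: no nodeSelector

print(replace_template(live, regenerated))  # the customized nodeSelector is dropped
print(merge_template(live, regenerated))    # the customized nodeSelector survives
```

The playbooks behave like `replace_template`, which is why any nodeSelector set only on the live object disappears on the next run.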
The intention is not to retain data via a fact, but for Ansible to make the state of logging match the intent defined in the inventory file. This means you, as the deployer, should be setting the node selector in the inventory. This is not a bug, as a user can define the node selector for each individual logging component.
It can be provided as a straight hash value, or as a comma-delimited list (hash is preferred):
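For example, in the inventory (the variable name comes from the defaults file linked above; the selector key/value `logging-infra-es=true` is an illustrative assumption):

```ini
# Hash form (preferred):
openshift_logging_es_nodeselector={"logging-infra-es": "true"}

# Comma-delimited form:
# openshift_logging_es_nodeselector=logging-infra-es=true
```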
As @Jeff stated previously and during our discussion on IRC, providing a per-deploymentConfig nodeSelector is not really necessary; the use case I described can be covered by defining a group of nodes with the hostPath configured, independently of which DC targets which node.
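The workaround described above could look like the following; the label and node names are illustrative assumptions:

```sh
# Label every node that has the hostPath storage prepared with one common label,
# then point all ES DCs at that group via a single shared selector.
oc label node node1.example.com node2.example.com logging-es-storage=true

# In the inventory, all ES DCs then share:
#   openshift_logging_es_nodeselector={"logging-es-storage": "true"}
```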
From our side this BZ can be closed.
Thanks for your feedback