Bug 1476713 - nodeSelector lost in logging-es deploymentConfigs after update
Summary: nodeSelector lost in logging-es deploymentConfigs after update
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.5.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.7.0
Assignee: Jeff Cantrill
QA Contact: Xia Zhao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-07-31 09:23 UTC by Ruben Romero Montes
Modified: 2017-08-06 07:16 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-06 07:16:47 UTC
Target Upstream Version:
Embargoed:



Description Ruben Romero Montes 2017-07-31 09:23:55 UTC
Description of problem:
After an upgrade using the Ansible playbooks (v3.5), the nodeSelector configuration present in the logging-es-xxxx deploymentConfigs is lost, forcing the customer to intervene manually to restore it.

Version-Release number of selected component (if applicable):
openshift-ansible-3.5.101-1
OCP 3.5.5.31

How reproducible:
Always

Steps to Reproduce:
1. Install logging with the default configuration; a single Elasticsearch node is enough
2. Edit the logging-es deploymentConfig and set a nodeSelector (see the sketch after these steps):
      nodeSelector:
        kubernetes.io/hostname: node-0.01892111.quicklab.pnq2.cee.redhat.com
3. Run the playbook again
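
For reference, a minimal sketch of where the selector from step 2 sits in the deploymentConfig (only the relevant fields are shown; the hostname is the example value above):

      spec:
        template:
          spec:
            nodeSelector:
              kubernetes.io/hostname: node-0.01892111.quicklab.pnq2.cee.redhat.com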

Actual results:
The nodeSelector attribute is lost

Expected results:
The nodeSelector should be kept

Additional info:

Comment 1 Jeff Cantrill 2017-07-31 18:45:30 UTC
Update your inventory file to set the node selector to use for the ES deploymentConfig: https://github.com/openshift/openshift-ansible/blob/release-1.5/roles/openshift_logging/defaults/main.yml#L92

Comment 2 Ruben Romero Montes 2017-08-02 11:52:55 UTC
@Jeff, as far as I know this is valid when the nodeSelector is the same for all the generated DeploymentConfigs. In this case we are trying to define a different nodeSelector for each of the DeploymentConfigs.

The purpose is to have each pod deployed on a specific node, and if no nodeSelector is provided in the inventory, the existing one should not be changed.

Comment 3 Paul Weil 2017-08-02 12:08:26 UTC
The install/upgrade process functions more as a replace than a patch. It does not currently provide the ability to retain customized data. This is an enhancement we would need to look at, and it is likely not to be needed once daemonsets are in play.

Comment 4 Jeff Cantrill 2017-08-02 14:19:52 UTC
The intention is not to retain data via a fact, but for Ansible to make the state of logging match the intent defined in the inventory file. This means that you, as the deployer, should set the node selector in the inventory. This is not a bug, as a user can define the node selector for each individual logging component [1].

It can be provided as a straight hash value, like:

openshift_logging_es_nodeselector={"node":"infra","region":"west"}

or as a comma-delimited list (the hash form is preferred):

openshift_logging_es_nodeselector=node=infra,region=west


[1] https://github.com/openshift/openshift-ansible/tree/release-1.5/roles/openshift_logging
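
As a sketch, per-component selectors could be set in the inventory along these lines (variable names per the openshift_logging role defaults referenced in [1]; the label values are only examples):

openshift_logging_es_nodeselector={"region":"infra"}
openshift_logging_kibana_nodeselector={"region":"infra"}
openshift_logging_curator_nodeselector={"region":"infra"}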

Comment 5 Ruben Romero Montes 2017-08-06 07:16:47 UTC
As @Jeff stated previously and during our discussion on IRC, the possibility of providing a per-deploymentConfig nodeSelector is not really necessary; the use case I described can be addressed by defining a group of nodes with the hostPath configured, independently of which DC targets which node.
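
For example (a hedged sketch; the label name is arbitrary), the ES-capable nodes that have the hostPath storage prepared could all carry a common label such as logging-es=true, and the inventory would then only need a single shared selector:

openshift_logging_es_nodeselector={"logging-es":"true"}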

From our side this BZ can be closed.

Thanks for your feedback

