Bug 1476713 - nodeSelector lost in logging-es deploymentConfigs after update
Status: CLOSED NOTABUG
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.5.1
Hardware: Unspecified  OS: Unspecified
Priority: medium  Severity: medium
Target Milestone: ---
Target Release: 3.7.0
Assigned To: Jeff Cantrill
QA Contact: Xia Zhao
: Reopened
Depends On:
Blocks:
Reported: 2017-07-31 05:23 EDT by Ruben Romero Montes
Modified: 2017-08-06 03:16 EDT
CC: 3 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-08-06 03:16:47 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Ruben Romero Montes 2017-07-31 05:23:55 EDT
Description of problem:
After an upgrade using the ansible playbooks (v3.5), the nodeSelector configuration that existed in the logging-es-xxxx deploymentConfigs is lost, forcing the customer to restore it through manual intervention.

Version-Release number of selected component (if applicable):
openshift-ansible-3.5.101-1
OCP 3.5.5.31

How reproducible:
Always

Steps to Reproduce:
1. Install logging with the default configuration; a single elasticsearch node is enough
2. Edit the deploymentConfig and set a nodeSelector:
      nodeSelector:
        kubernetes.io/hostname: node-0.01892111.quicklab.pnq2.cee.redhat.com
3. Run the playbook again
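For context, the nodeSelector set in step 2 sits under spec.template.spec of the deploymentConfig. A minimal sketch of the edited section (surrounding fields elided; the hostname is just this reproducer's example value):

```yaml
# Fragment of the logging-es deploymentConfig after step 2.
# Only the nodeSelector stanza is added; everything else is as generated
# by the logging installer.
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: node-0.01892111.quicklab.pnq2.cee.redhat.com
```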

Actual results:
The nodeSelector attribute is lost

Expected results:
The nodeSelector should be kept

Additional info:
Comment 1 Jeff Cantrill 2017-07-31 14:45:30 EDT
Update your inventory file to set the node selector to use for the ES deploymentconfig: https://github.com/openshift/openshift-ansible/blob/release-1.5/roles/openshift_logging/defaults/main.yml#L92
Comment 2 Ruben Romero Montes 2017-08-02 07:52:55 EDT
@Jeff as far as I know, this is only valid when the nodeSelector is the same for all the generated DeploymentConfigs. In this case we are trying to define a different nodeSelector for each DeploymentConfig.

The purpose is to have each pod deployed on specific nodes, and if no nodeSelector is provided in the inventory, the existing one should not be changed.
Comment 3 Paul Weil 2017-08-02 08:08:26 EDT
The install/upgrade process functions more as a replace than a patch.  It does not currently provide the ability to retain customized data.  This is an enhancement we'd need to take a look at, and it is likely not to be necessary once daemonsets are in play.
Comment 4 Jeff Cantrill 2017-08-02 10:19:52 EDT
The intention is not to retain data via a fact but for ansible to make the state of logging match the intention defined in the inventory file.  This means you as the deployer should be setting the node selector in the inventory.  This is not a bug as a user can define the node selector for each individual logging component [1].  

It can be provided as a straight hash value like:

openshift_logging_es_nodeselector={"node":"infra","region":"west"}

or as a comma-delimited list (the hash form is preferred):

openshift_logging_es_nodeselector=node=infra,region=west


[1] https://github.com/openshift/openshift-ansible/tree/release-1.5/roles/openshift_logging
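To illustrate the "per individual logging component" point from [1]: the role exposes a separate selector variable for each component, so different components can target different node groups even though all ES deploymentConfigs share one selector. A hedged inventory sketch, with variable names taken from the release-1.5 role defaults and purely illustrative values:

```ini
# Ansible inventory fragment; one nodeSelector variable per logging component.
# Label keys/values here are examples only.
[OSEv3:vars]
openshift_logging_es_nodeselector={"region":"infra"}
openshift_logging_kibana_nodeselector={"region":"infra"}
openshift_logging_curator_nodeselector={"region":"infra"}
openshift_logging_fluentd_nodeselector={"logging-infra-fluentd":"true"}
```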
Comment 5 Ruben Romero Montes 2017-08-06 03:16:47 EDT
As stated by @Jeff previously, and during our discussion on IRC, the possibility of providing a per-deploymentConfig nodeSelector is not really necessary; the use case I described can instead be addressed by defining a group of nodes with the hostPath configured, independently of which DC targets which node.

From our side this BZ can be closed.

Thanks for your feedback
