Bug 1810982 - Add additional info to edit the namespace with node selector
Summary: Add additional info to edit the namespace with node selector
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.5.0
Assignee: Periklis Tsirakidis
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-03-06 10:49 UTC by Maciej Szulik
Modified: 2020-04-23 07:45 UTC (History)
7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1810841
Environment:
Last Closed: 2020-04-23 07:45:26 UTC
Target Upstream Version:


Attachments

Description Maciej Szulik 2020-03-06 10:49:49 UTC
+++ This bug was initially created as a clone of Bug #1810841 +++

Description of problem:


This is our official documentation for placing infra workloads on infra nodes: 
https://docs.openshift.com/container-platform/4.3/machine_management/creating-infrastructure-machinesets.html#infrastructure-moving-monitoring_creating-infrastructure-machinesets

Its instructions won't work in many cases, because they assume the cluster's default node selector (in the scheduler/cluster resource) is unset; in practice it can be set to something other than the default worker selector.

For example: 

spec:
  defaultNodeSelector: node-role.kubernetes.io/application=

Assuming it's unset is unrealistic. This is the first thing a customer who wishes to avoid apps starting on infra nodes will set after they build their infra nodes.
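For context, the defaultNodeSelector snippet above belongs to the cluster-scoped Scheduler resource; a fuller sketch looks like this (the "application" role label is an example value a customer might choose, not a fixed name):

```yaml
# Cluster-scoped Scheduler resource. Setting a defaultNodeSelector here keeps
# ordinary workloads off infra nodes, but it also applies to every namespace
# that does not override it.
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  defaultNodeSelector: node-role.kubernetes.io/application=
```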

Every infra component then needs to override this setting by adding the annotation openshift.io/node-selector: "" to its namespace.

Most infra components already have this done automatically by their operator (openshift-monitoring, openshift-ingress-operator, openshift-image-registry). Two do not: openshift-ingress and openshift-logging.

Can we have this done by the operator itself, i.e., adding a blank override?

Or can this be handled as a documentation change, by adding the following workaround?
 
For the ingress router section, please add a blank override (openshift.io/node-selector: "") to the openshift-ingress namespace so that it looks like this:
$ oc get ns/openshift-ingress -o yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-ingress
  annotations:
    openshift.io/node-selector: ""

For the logging section, please add a blank override (openshift.io/node-selector: "") to the openshift-logging namespace so that it looks like this:
$ oc get ns openshift-logging -o yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    openshift.io/node-selector: ""
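If editing the namespace YAML directly is undesirable, the same blank override can be applied in place; a sketch, assuming a live cluster and sufficient (cluster-admin) rights:

```shell
# Add the blank node-selector annotation to each affected namespace.
# --overwrite is needed if the annotation is already present with another value.
oc annotate namespace openshift-ingress openshift.io/node-selector="" --overwrite
oc annotate namespace openshift-logging openshift.io/node-selector="" --overwrite
```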

Version-Release number of selected component (if applicable):

OCP 4.3

How reproducible:


Steps to Reproduce:
1. Schedule any components other than app workloads to infra nodes, following the documentation.
2. The default instructions in the documentation do not work in all scenarios.


Actual results:

Pods are not scheduled as per the requirement.

Expected results:

The corresponding operator should be able to handle setting the label at the namespace level.


Additional info:

Please suggest whether just a documentation change is enough or whether an operator-level change is needed as well.

Comment 1 Maciej Szulik 2020-03-06 10:51:55 UTC
Moving this one to the logging team; I left the other one (1810841) for the networking team.

Comment 2 Periklis Tsirakidis 2020-04-17 16:05:01 UTC
(In reply to Maciej Szulik from comment #1)
> Moving this one for logging team and I left the other 1810841 for the
> networking team.

Since cluster-logging-operator is an OLM operator, we already instruct users to create the openshift-logging namespace with the empty node-selector annotation. This is documented at [1] and [2].

@Maciej Szulik: Are you fine with this approach?

[1] https://docs.openshift.com/container-platform/4.3/logging/cluster-logging-deploying.html#cluster-logging-deploy-clo_cluster-logging-deploying
[2] https://docs.openshift.com/container-platform/4.3/logging/cluster-logging-deploying.html#cluster-logging-deploy-eo-cli_cluster-logging-deploying
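The approach in [1] and [2] amounts to having users create a namespace manifest of roughly this shape before installing the operator (a sketch from those docs; treat the label as illustrative):

```yaml
# Namespace created ahead of installing cluster-logging via OLM. The blank
# openshift.io/node-selector annotation overrides any cluster-wide
# defaultNodeSelector for pods in this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
```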

Comment 3 Maciej Szulik 2020-04-20 09:16:18 UTC
I will defer to logging team to confirm the approach.

Comment 4 Periklis Tsirakidis 2020-04-23 07:45:26 UTC
@jcantrill

Since we already address this issue in the docs, I will close it.

