Bug 1619293 - [free-int] installer should error if non-openshift-* namespace is configured for logging and priorityClass is going to be set
Summary: [free-int] installer should error if non-openshift-* namespace is configured ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 3.11.0
Assignee: ewolinet
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-08-20 14:23 UTC by Justin Pierce
Modified: 2018-10-11 07:25 UTC (History)
4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: If logging was not in a namespace beginning with 'openshift-', Fluentd was not able to use the "system-cluster-critical" priority class.
Consequence: Fluentd would not be able to start up.
Fix: We create a priority class for Cluster Logging and configure Fluentd to use that instead.
Result: Fluentd is able to start up, even if not installed to an 'openshift-*' namespace.
Clone Of:
Environment:
Last Closed: 2018-10-11 07:25:25 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2018:2652 (last updated 2018-10-11 07:25:43 UTC)

Description Justin Pierce 2018-08-20 14:23:38 UTC
Description of problem:
In v3.11.0-0.16.0, a standard logging upgrade can fail with parameters that would have worked in 3.10.

After running an upgrade, fluentd pods fail to launch with the error:

  creating: pods "logging-fluentd-" is forbidden: pods with system-cluster-critical priorityClass is not permitted in logging namespace

This happens because the fluentd pods have "priorityClassName: system-cluster-critical" set but do not run in a privileged openshift-* namespace.
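For illustration, the rejected pod spec looks roughly like the fragment below. The namespace, labels, and image are placeholders; the point is the interaction between a non-openshift-* namespace and the priorityClassName field:

```yaml
# Illustrative fragment of the logging-fluentd DaemonSet pod template.
# Namespace, labels, and image are placeholders, not the actual template.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logging-fluentd
  namespace: logging                       # not an openshift-* namespace
spec:
  selector:
    matchLabels:
      component: fluentd
  template:
    metadata:
      labels:
        component: fluentd
    spec:
      # Rejected by admission control outside kube-system / openshift-*:
      priorityClassName: system-cluster-critical
      containers:
      - name: fluentd
        image: registry.example.com/logging-fluentd   # placeholder image
```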


Version-Release number of selected component (if applicable):
v3.11.0-0.16.0

How reproducible:
100%

Steps to Reproduce:
1. Configure logging to target the 'logging' namespace (anything other than openshift-*)
2. Install logging
3. Observe that fluentd will not start due to this feature of k8s: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/  -> https://bugzilla.redhat.com/show_bug.cgi?id=1616171#c5
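The reproduction amounts to a logging install targeting a non-openshift-* namespace. A minimal inventory sketch, assuming the standard openshift-ansible logging variables:

```ini
# Minimal inventory fragment (variable names per the openshift-ansible
# logging role; other required host/group settings omitted)
[OSEv3:vars]
openshift_logging_install_logging=true
# anything other than an openshift-* namespace triggers the failure
openshift_logging_namespace=logging
```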

Actual results:
Fluentd pods fail to start. Logging will not function. 

Expected results:
The installer should fail if incompatible settings are selected for logging. Valid combinations:
1) An openshift-* namespace configured and priorityClassName enabled.
2) A non-openshift-* namespace configured with priorityClassName disabled.

With any other combination, the installer should error out before it unintentionally breaks logging.

Additional info:
If an inventory setting exists that lets the user prevent priorityClassName from being set, its impact should be documented.


Comment 1 ewolinet 2018-08-20 22:04:18 UTC
Instead of failing with an error, we will be creating a priority class for fluentd to use instead.

https://github.com/openshift/openshift-ansible/pull/9686
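Sketching the approach: a dedicated PriorityClass is created and fluentd is pointed at it instead of system-cluster-critical. The class name and value below are assumptions for illustration, not taken from the PR:

```yaml
# Hypothetical sketch of a dedicated priority class for logging; the
# name "cluster-logging" and the value are assumptions, not the PR's
# actual template.
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: cluster-logging
value: 1000000
globalDefault: false
description: "Priority class for cluster logging components"
```

The fluentd pod template would then set priorityClassName: cluster-logging, which, unlike system-cluster-critical, is usable from any namespace.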

Comment 3 Anping Li 2018-09-10 08:27:57 UTC
The bug has been fixed in v3.11.0-0.28.0.0.

Comment 5 errata-xmlrpc 2018-10-11 07:25:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2652

