Bug 1639739 - OpenShift Cluster monitoring Operator needs option to specify storageclass
Summary: OpenShift Cluster monitoring Operator needs option to specify storageclass
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 3.11.z
Assignee: Frederic Branczyk
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-10-16 13:24 UTC by Wolfgang Kulhanek
Modified: 2019-04-11 05:38 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-04-11 05:38:23 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2019:0636 (last updated 2019-04-11 05:38:38 UTC)

Description Wolfgang Kulhanek 2018-10-16 13:24:07 UTC
Description of problem:

In 3.11.16 there is no option to define the storage class for the Cluster Monitoring Operator (Prometheus/Alertmanager) storage.

The only options available (that I could find by looking through the playbooks, forget the docs...) are:

openshift_cluster_monitoring_operator_install=True
openshift_cluster_monitoring_operator_node_selector={"node-role.kubernetes.io/infra":"true"}

openshift_cluster_monitoring_operator_prometheus_storage_capacity=20Gi
openshift_cluster_monitoring_operator_alertmanager_storage_capacity=2Gi
openshift_cluster_monitoring_operator_prometheus_storage_enabled=True
openshift_cluster_monitoring_operator_alertmanager_storage_enabled=True

With OCS this creates PVs from the file-based glusterfs storage class rather than the required glusterfs-block storage class.
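
For reference, a quick way to see which storage class the monitoring PVCs actually ended up with (assuming the stack runs in the default openshift-monitoring namespace) is:

# oc get pvc -n openshift-monitoring -o custom-columns=NAME:.metadata.name,STORAGECLASS:.spec.storageClassName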

In 3.10 there were options:

openshift_prometheus_storage_class='glusterfs-storage-block'
openshift_prometheus_alertmanager_storage_class='glusterfs-storage-block'
openshift_prometheus_alertbuffer_storage_volume_name=prometheus-alertbuffer

Why do we keep losing foundational settings??

Comment 1 Frederic Branczyk 2018-10-16 15:06:43 UTC
There is already a pull request out to fix this: https://github.com/openshift/openshift-ansible/pull/10387

This is an entirely new stack; it has nothing to do with the old tech preview stack.

Comment 2 Frederic Branczyk 2018-10-24 20:26:51 UTC
This has now been merged into master and the 3.11 release.

Comment 3 Junqi Zhao 2018-10-25 06:32:08 UTC
Issue is fixed.
openshift_cluster_monitoring_operator_prometheus_storage_class_name and
openshift_cluster_monitoring_operator_alertmanager_storage_class_name are added.
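
For example, with OCS an inventory could now set these alongside the existing storage variables (glusterfs-storage-block here is just the block storage class name from the 3.10 example in the description; substitute whatever storage class exists in the cluster):

openshift_cluster_monitoring_operator_prometheus_storage_enabled=True
openshift_cluster_monitoring_operator_alertmanager_storage_enabled=True
openshift_cluster_monitoring_operator_prometheus_storage_class_name=glusterfs-storage-block
openshift_cluster_monitoring_operator_alertmanager_storage_class_name=glusterfs-storage-block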

# rpm -qa | grep ansible
openshift-ansible-playbooks-3.11.31-1.git.0.d4b5614.el7.noarch
openshift-ansible-docs-3.11.31-1.git.0.d4b5614.el7.noarch
openshift-ansible-roles-3.11.31-1.git.0.d4b5614.el7.noarch
openshift-ansible-3.11.31-1.git.0.d4b5614.el7.noarch
ansible-2.6.6-1.el7ae.noarch

Please change the status to ON_QA.

Comment 4 Daein Park 2018-12-02 11:41:14 UTC
I've opened a PR for the related documentation here: https://github.com/openshift/openshift-docs/pull/13001

Comment 6 Frederic Branczyk 2019-01-23 15:51:47 UTC
Setting to MODIFIED as the PR was merged.

Comment 8 Junqi Zhao 2019-03-01 01:58:29 UTC
As per Comment 3, openshift_cluster_monitoring_operator_prometheus_storage_class_name and openshift_cluster_monitoring_operator_alertmanager_storage_class_name were added.

The 3.11 docs have been updated; no issues found.

Comment 10 errata-xmlrpc 2019-04-11 05:38:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0636

