Bug 1645358 - [DOCS] Need storage support policy for monitoring stack (prometheus)
Summary: [DOCS] Need storage support policy for monitoring stack (prometheus)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Documentation
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Target Milestone: ---
Assignee: Jason Boxman
QA Contact: Junqi Zhao
Docs Contact: Vikram Goyal
URL:
Whiteboard:
Duplicates: 1658563
Depends On:
Blocks:
 
Reported: 2018-11-02 03:24 UTC by Kenjiro Nakayama
Modified: 2023-03-24 14:20 UTC (History)
28 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-08-19 22:08:46 UTC
Target Upstream Version:
jboxman: needinfo-




Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 3680111 0 None None None 2018-11-07 07:17:37 UTC
Red Hat Knowledge Base (Solution) 3760121 0 None None None 2018-12-18 01:21:28 UTC

Description Kenjiro Nakayama 2018-11-02 03:24:19 UTC
Document URL:

- https://docs.openshift.com/container-platform/3.11/scaling_performance/optimizing_storage.html
  or
- https://docs.openshift.com/container-platform/3.11/scaling_performance/scaling_cluster_monitoring.html

Section Number and Name:

N/A (we are requesting the new doc section)

Describe the issue: 

- We would like you to add storage recommendation to Monitoring Stack - here https://docs.openshift.com/container-platform/3.11/scaling_performance/optimizing_storage.html
- There is already one section https://docs.openshift.com/container-platform/3.11/scaling_performance/scaling_cluster_monitoring.html, but we cannot find the support policy when we use block storage, NFS and so on.

Suggestions for improvement:

- Please write the storage support policy docs for monitoring stack as Block/File/Object are supported or not.

Additional information: 

- Upstream said https://github.com/prometheus/prometheus/issues/3534#issuecomment-348598966 . We also need to know why RH can support GlusterFS(non-POSIX).

Comment 2 Christian Heidenreich 2018-11-02 09:51:33 UTC
Currently, only block storage is supported. NFS isn't, because it is not supported by Prometheus. Historically, the Prometheus team had a lot of problems with inconsistent behaviour on NFS. At least that's my understanding.

> We also need to know why RH can support GlusterFS(non-POSIX).

Do you mean "why we can" or "why we cannot"?

Comment 3 Kenjiro Nakayama 2018-11-02 09:57:27 UTC
> Do you mean "why we can" or "why we cannot"?

I mean "why we can". Please refer to following docs:

https://docs.openshift.com/container-platform/3.11/scaling_performance/scaling_cluster_monitoring.html
  Recommendations for OpenShift Container Platform
  * Use GlusterFS as storage on top of NVMe drives.

RH recommends GlusterFS, but GlusterFS should have the same problem with NFS since it is not POSIX compliant.

Comment 4 Kenjiro Nakayama 2018-12-06 07:50:32 UTC
@Christian I was asked by another customer whether GlusterFS is available for Prometheus and Alertmanager. Could you please confirm whether we can really say that GlusterFS is available for Prometheus?

Comment 5 Christian Heidenreich 2018-12-06 08:19:43 UTC
I am requesting additional information from Tushar. The Scaling team came up with the GlusterFS recommendation.

As I said before, only block storage is officially supported for Prometheus. Tushar could probably shed some light on GlusterFS.

Comment 6 Kenjiro Nakayama 2018-12-09 23:54:08 UTC
@Tushar, can we get your update?

Comment 7 Kenjiro Nakayama 2018-12-11 05:58:29 UTC
bump @tkatarki -  if recommended storage (GlusterFS) is not supported actually, this would be a big impact for some customers.

Comment 20 Christian Heidenreich 2019-02-06 09:51:01 UTC
I am updating this ticket with what has been discussed offline across different threads.

Tushar (Storage PM) recommends `hostPath` as a persistent volume in general. I know that `hostPath` is a critically discussed option, but it has been supported in OpenShift for a long time and has already been proven to work with other components, e.g. Elasticsearch. At least, that's the recommended/supported option as of today.

That said, this statement can, and definitely will, change in the future. First, the PM team needs to understand what the demand looks like: what do your customers want to use? We can take that as a base, put in place a process to properly test the different options, and discuss with the support team what is possible for them to support.

Therefore, please let us know what storage options should be supported in the future, based on your customers' demands. We will handle the rest.
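
For reference, a `hostPath`-backed persistent volume along the lines recommended above might look like the sketch below. The volume name, path, and capacity are illustrative assumptions, not taken from this BZ:

```yaml
# Hypothetical hostPath PersistentVolume for Prometheus data.
# Name, path, and capacity are placeholders; adjust for your environment.
# Note that hostPath ties the data to one specific node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-data
spec:
  capacity:
    storage: 40Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/lib/prometheus-data
```

With `hostPath`, the Prometheus pod must also be pinned (e.g. via a node selector) to the node that holds the data.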

Comment 22 Christian Heidenreich 2019-02-06 12:38:57 UTC
Adding an additional statement from another thread:

OpenShift does indeed support different storage plugins[1], but not all of them are POSIX-compliant, and not all have currently been validated with the new cluster monitoring stack in 3.11. We would need to do that, and also give the support team a chance to jump on that train, before we can make a sound decision and raise customers' expectations. With the current level of information, and what you might have read in this BZ ticket, Tushar recommends using `hostPath` with the cluster monitoring stack running on infra nodes in OpenShift. Any additional options need verification first. I am working with Tushar on that at the moment.

[1] https://docs.openshift.com/container-platform/3.11/architecture/additional_concepts/storage.html#types-of-persistent-volumes

Comment 25 Christian Heidenreich 2019-02-11 09:02:22 UTC
Update:

As we understand the current concerns, we are working with different groups to compile a list of supported storage options for Prometheus in 3.11. We hope to update this ticket again by the end of this week.

Comment 42 Junqi Zhao 2019-02-14 00:48:26 UTC
(In reply to Kenjiro Nakayama from comment #3)
> > Do you mean "why we can" or "why we cannot"?
> 
> I mean "why we can". Please refer to following docs:
> 
> https://docs.openshift.com/container-platform/3.11/scaling_performance/
> scaling_cluster_monitoring.html
>   Recommendations for OpenShift Container Platform
>   * Use GlusterFS as storage on top of NVMe drives.
> 
> RH recommends GlusterFS, but GlusterFS should have the same problem with NFS
> since it is not POSIX compliant.

Since monitoring only supports block storage, and GlusterFS is file storage, monitoring does not support it. That is a doc error; we should change the doc.

WDYT?
@Christian

Comment 43 Christian Heidenreich 2019-02-14 07:58:46 UTC
GlusterFS does actually provide a POSIX-compliant filesystem[1]. I have asked Xiaoli to investigate it as a potential option.

Nevertheless, the docs will need some update after we have a final list signed off by Support, PM, Eng, QE, and Docs. 

[1] https://redhatstorage.redhat.com/products/glusterfs/

Comment 44 Kenjiro Nakayama 2019-02-14 08:09:40 UTC
(In reply to Christian Vogel from comment #43)
> gluster-fs does actually provide a posix complaint filesystem[1]. I have
> asked Xiaoli to investigate it as a potential option.
> 
> [1] https://redhatstorage.redhat.com/products/glusterfs/

(internal) http://post-office.corp.redhat.com/archives/sme-storage/2019-February/msg00125.html
(external) https://bugzilla.redhat.com/show_bug.cgi?id=1464315

Comment 45 Junqi Zhao 2019-02-14 08:16:59 UTC
(In reply to Christian Vogel from comment #43)
> gluster-fs does actually provide a posix complaint filesystem[1]. I have
> asked Xiaoli to investigate it as a potential option.
> 
> Nevertheless, the docs will need some update after we have a final list
> signed off by Support, PM, Eng, QE, and Docs. 
> 
> [1] https://redhatstorage.redhat.com/products/glusterfs/

OK, tested today: GlusterFS storage can be used for cluster monitoring, and cluster monitoring works well.

Comment 48 Kenjiro Nakayama 2019-02-14 08:27:22 UTC
Well, I am OK with supporting GlusterFS. But if you say GlusterFS (which the RH storage SMEs call a POSIX-"compatible" filesystem) is supported, we will get many questions like "NFS provided by xxx is not supported? It is POSIX compatible. Can you support it?" and so on.

Comment 49 Junqi Zhao 2019-02-14 08:36:39 UTC
(In reply to Kenjiro Nakayama from comment #48)
> Well, I am OK to support GlusterFS. But if you say GluterFS (RH storage-sme
> says posix "Compatible" filesystem) is supported, we will get many questions
> like "NFS provided by xxx is not supported? It is posix compatible. Can you
> support it? etc...".

Support does not mean it is recommended. NFS, for example, could be used, but it has a lot of consistency problems.

Comment 50 Christian Heidenreich 2019-02-14 10:41:23 UTC
As Junqi said, we have seen many inconsistencies with different implementations, including data corruption without any way to recover (even with POSIX-compliant implementations). This is why the Prometheus maintainers do not recommend using NFS. We should not go down this route, as we have many other good alternatives. Again, this is a Red Hat statement. If a partner who provides an NFS implementation feels comfortable supporting it themselves, that's OK. It's up to the partner, but there isn't any support coming from Red Hat.

Comment 51 Kenjiro Nakayama 2019-02-14 11:29:10 UTC
OK, so POSIX compliance is not actually what determines supported versus unsupported. (The support policy changed many times in this BZ: block storage only, needs POSIX compliance, hostPath only, etc.)

Comment 52 Christian Heidenreich 2019-02-14 11:43:04 UTC
A POSIX-compliant filesystem is a requirement of Prometheus. Now, different storage plugins have different implementations, and it's hard to generalize them all in a single statement. Apologies if there have been different statements along the way, but we tried to take your feedback, and everyone else's, and push harder for more detail on what we can do beyond our initial statements. Reaching out to different groups in the support cycle uncovered more details, and our list grew bit by bit. We are still working with the BU to understand what else has been tested, or could be tested, by the end of this week. That will conclude our supported list, and we can update the docs accordingly. Please stay tuned for more updates.

Comment 53 Tsai Li Ming 2019-02-18 08:09:34 UTC
Do we have an update on this?

Comment 54 Vikram Goyal 2019-02-18 08:22:14 UTC
(In reply to Tsai Li Ming from comment #53)
> Do we have an update on this?

Hi - yes, a decision has been taken and Christian will be updating the BZ shortly.

Comment 56 Christian Heidenreich 2019-02-20 08:55:56 UTC
After a week of digging into this matter to find out what the general recommendations for storage options are, not only for Prometheus, I am finally able to give you an update.

TL;DR

Architecturally, at the highest level, the best situation is object storage for the registry and block storage for metrics (via Prometheus) and logging. We think you will find these are pretty industry-wide recommendations for those underlying technologies (even outside of OpenShift). Therefore, we recommend the use of block-level storage volumes with Prometheus for any production environment.

Longer answer:

At the highest level, we recommend block storage with Prometheus for any production environment. Where it breaks down is in the datacenter, where there are hundreds of storage solutions to select from at the protocol level (iSCSI, NFS, FC) and then tens of solutions at the software level (Portworx, PureStorage, 3PAR, StorageOS, ScaleIO) that all work with OpenShift. OpenShift is not going to qualify itself against the hundreds of combinations. Therefore, we added a clause to our documentation that, should you want to use something other than our recommendation, you should contact the specific vendor in question and ask them for more information regarding their support for OpenShift. You will find that in a few places in our product documentation.

And yes, there are file-based storage plugins that are probably even better than block. Some appliances built around those plugins have SSD drives on the backend, use L3 RAM cache as a frontend bucket for incoming writes, are connected to the servers at 40Gb, and so on. Our customers should be able to use them if the vendor agrees to support the OpenShift workload. But that is their choice, and we have given them that choice; that is a conversation they need to have with the vendor. If you ask me (Red Hat) which Red Hat storage product we recommend, it would be OCS with block emulation.

As for how a support case would proceed: in all cases, we troubleshoot the case in order to find out what the problem is. If we determine it to be caused by the storage backend, and that backend is against our recommendations, we direct the customer to the storage solution provider.

Furthermore, some of you have pointed out that the scalability docs actually recommend GlusterFS (OCS File). Unfortunately, that was a mistake; it should have been OCS Block (gluster-block) in the first place.
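
For context, an OCS Block (gluster-block) storage class might look roughly like the sketch below. The provisioner name follows the gluster-block provisioner; the REST URL, secret names, and namespace are placeholders for your own Heketi endpoint and credentials:

```yaml
# Sketch of a gluster-block StorageClass (values are placeholders).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-block
provisioner: gluster.org/glusterblock
parameters:
  resturl: "http://heketi-storage.example.com:8080"  # your Heketi endpoint
  restuser: admin
  restsecretname: heketi-secret
  restsecretnamespace: glusterfs
  hacount: "3"  # number of block-hosting paths for HA
```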

As for what happens next. We will update our documentation at multiple places to make the above statement more clear. The following is a list of todos on our team:
- Update the scalability guidelines docs[1]
- Remove: "Use GlusterFS as storage on top of NVMe drives." and add explanation on the use of OCS Block
- Add OCS Block to the supported PV plugin list[2]
- Update the “Storage Recommendation” guide[3]
- Add link from Monitoring docs[4] to the “Storage Recommendation” guide

If you see anything else, please let me know. I will coordinate it with the docs team.

That said, we also agreed to start planning to add more detail to each component's individual scalability guide, detail that draws a quantifiable line in the sand and says "You need at least this many IOPS (or whatever metric)" for, say, Prometheus. Our customers can then take that to their storage vendors and say "give us something THIS fast." This should give storage vendors the ability to decide whether they can support their implementation for OpenShift framework components.

[1] https://docs.openshift.com/container-platform/3.11/scaling_performance/scaling_cluster_monitoring.html
[2] https://docs.openshift.com/container-platform/3.11/architecture/additional_concepts/storage.html#types-of-persistent-volumes
[3] https://docs.openshift.com/container-platform/3.11/scaling_performance/optimizing_storage.html#back-end-recommendations
[4] https://docs.openshift.com/container-platform/3.11/install_config/prometheus_cluster_monitoring.html#persistent-storage
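
For completeness, in 3.11 persistent storage for the monitoring stack is requested through the installer inventory. A minimal sketch, using the variable names documented for the 3.11 cluster monitoring operator and an illustrative storage class name (verify the exact variables against the linked install docs):

```ini
# Sketch of Ansible inventory variables for monitoring storage in 3.11.
# "glusterfs-block" is an illustrative placeholder storage class name.
[OSEv3:vars]
openshift_cluster_monitoring_operator_install=true
openshift_cluster_monitoring_operator_prometheus_storage_enabled=true
openshift_cluster_monitoring_operator_prometheus_storage_capacity=50Gi
openshift_cluster_monitoring_operator_prometheus_storage_class_name=glusterfs-block
openshift_cluster_monitoring_operator_alertmanager_storage_enabled=true
openshift_cluster_monitoring_operator_alertmanager_storage_capacity=2Gi
openshift_cluster_monitoring_operator_alertmanager_storage_class_name=glusterfs-block
```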

Comment 57 Jason Boxman 2019-02-20 21:45:13 UTC
Thanks for opening this BZ.

@Christian, thanks for summarizing the issue and assembling a list of changes. I’ve created a draft PR[0] as an initial pass at addressing the issues raised in this BZ.

As I’m relatively new to OpenShift, I have several questions:

- Is OCS now the preferred name for GlusterFS?
- Is OCS Block a specific storage type available using OCS?
- For 'Remove: "Use GlusterFS as storage on top of NVMe drives." and add explanation on the use of OCS Block', what is the explanation on the use of OCS Block?
- For [3], does the table need to be updated to specify file storage as “Not recommended” for logging?

Thanks!

[0] https://github.com/openshift/openshift-docs/pull/13701

Comment 58 Christian Heidenreich 2019-02-21 15:50:00 UTC
Hi Jason.

Let me go through your questions one by one.

> Is OCS now the preferred name for GlusterFS?

I don't think it is the new preferred name for GlusterFS. I believe GlusterFS is a subset of OpenShift Container Storage; it's more of an abstraction, if you like. But I am not the right person to give you a definitive answer here. :D

> Is OCS Block a specific storage type available using OCS?

Yes. There are OCS Block, OCS File, and OCS S3 available, AFAIK.

> For 'Remove: "Use GlusterFS as storage on top of NVMe drives." and add explanation on the use of OCS Block', what is the explanation on the use of OCS Block?

I think it's enough to mention somewhere that for the scalability tests, the team used OCS Block.

> For [3], does the table need to be updated to specify file storage as “Not recommended” for logging?

You mean for Metrics, I guess. No, we haven't really used "not recommended" in that table. File is still "Configurable," even if we don't recommend it. See my statement in the last document for more information on what that means.

Comment 64 Jason Boxman 2019-03-07 20:29:29 UTC
I've updated the PR[0] with the proposed changes.

[0] https://github.com/openshift/openshift-docs/pull/13701

Comment 65 Paul Dwyer 2019-03-11 10:48:35 UTC
*** Bug 1658563 has been marked as a duplicate of this bug. ***

Comment 73 Jason Boxman 2019-07-22 20:20:28 UTC
I apologize for the delay; I was working on our content for the OCP 4.1 release.

I merged the PR[0] for these changes.

Thanks.

[0] https://github.com/openshift/openshift-docs/pull/15262

