Bug 1531096 - Prometheus fills up entire storage space
Status: CLOSED CURRENTRELEASE
Product: OpenShift Container Platform
Classification: Red Hat
Component: Hawkular
Version: 3.7.0
Priority: high
Severity: high
Target Release: 3.9.z
Assigned To: Paul Gier
QA Contact: Junqi Zhao
Reported: 2018-01-04 09:56 EST by Rajnikant
Modified: 2018-06-18 14:19 EDT

Fixed In Version: openshift v3.9.22
Doc Type: No Doc Update
Last Closed: 2018-06-18 14:19:30 EDT
Type: Bug




External Trackers
Tracker ID: Red Hat Product Errata RHSA-2018:2013
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: OpenShift Container Platform 3.9 security, bug fix, and enhancement update
Last Updated: 2018-06-27 18:01:43 EDT

Description Rajnikant 2018-01-04 09:56:56 EST
Description of problem:

Prometheus fills up its entire storage space with hundreds of *.tmp files, even though the actual storage used by the time-series data is only around 4 GB.

Version-Release number of selected component (if applicable):
3.7
registry.access.redhat.com/openshift3/prometheus:v3.7.14-5

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:
Prometheus fills up all of its storage space with hundreds of *.tmp files, even though the actual storage used by the time-series data is only around 4 GB.

Expected results:


Additional info:
Comment 3 Paul Gier 2018-01-10 14:06:41 EST
A possible workaround is to delete the series that produce the error and then delete the .tmp directories:
https://github.com/prometheus/prometheus/issues/3487#issuecomment-347491886
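A minimal sketch of that workaround, assuming the pod's admin API is reachable on localhost:9090 and the data directory is /data (the URL, data path, and the `problem_metric` matcher are placeholders to adjust for your deployment):

```shell
# Placeholders: adjust to your pod's address, data path, and the failing series.
PROM_URL="http://localhost:9090"
DATA_DIR="/data"

# 1. Delete the offending series via the TSDB admin API
#    (requires Prometheus to be started with --web.enable-admin-api).
curl -fsS -X POST \
  "$PROM_URL/api/v1/admin/tsdb/delete_series?match[]=problem_metric" \
  || echo "delete_series failed; is --web.enable-admin-api set?"

# 2. Remove the leftover *.tmp block directories from the data dir.
find "$DATA_DIR" -maxdepth 1 -type d -name '*.tmp' -exec rm -r {} + 2>/dev/null
```

As the linked upstream issue notes, this only clears the symptom; new .tmp directories reappear as soon as another series hits the same error.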
Comment 4 Dennis Stritzke 2018-01-12 08:19:48 EST
I can confirm that the workaround works. Unfortunately, the issue keeps happening over and over again with new series, so this is only a temporary workaround.
Comment 5 Paul Gier 2018-01-17 11:45:46 EST
Are you using a custom value for storage.tsdb.min-block-duration?  The OpenShift installer currently defaults to 2 minutes, but we found that the upstream default of 2h prevents out-of-memory issues in some cases.  Not sure whether this also affects disk usage, but it should at least reduce the number of tsdb block directories that are created.
Comment 6 Dennis Stritzke 2018-01-17 11:50:11 EST
We are not setting the storage.tsdb.min-block-duration.

Just to be complete, here is the list of things that we are setting:
- '--storage.tsdb.retention=168h'
- '--config.file=/etc/prometheus/prometheus.yml'
- '--web.listen-address=:9090'
- '--storage.tsdb.path=/data'
- '--web.enable-admin-api'
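For context, the suggestion in comment 5 would add one more flag to that list. A sketch of how the container args might then look in the Prometheus pod spec (structure abbreviated; the 2h value follows the upstream default mentioned in comment 5, and the container name here is an assumption):

```yaml
containers:
- name: prometheus
  image: registry.access.redhat.com/openshift3/prometheus:v3.7.14-5
  args:
  - '--storage.tsdb.retention=168h'
  - '--storage.tsdb.min-block-duration=2h'  # upstream default, per comment 5
  - '--config.file=/etc/prometheus/prometheus.yml'
  - '--web.listen-address=:9090'
  - '--storage.tsdb.path=/data'
  - '--web.enable-admin-api'
```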
Comment 7 Paul Gier 2018-01-25 11:55:40 EST
Prometheus 2.1.0 was released this week and contains several fixes to the tsdb.
Can you try using the upstream prom/prometheus:v2.1.0 container image to see if it resolves the storage issue?
Comment 8 Dennis Stritzke 2018-02-07 08:09:43 EST
Sorry for not keeping this issue up to date. I deployed the Prometheus 2.1 upstream image in parallel to our current setup. I will have collected enough insight by Feb 13 with real usage patterns, while also provoking the issue as before.
Comment 9 Dennis Stritzke 2018-02-16 03:58:45 EST
I was able to verify that the storage issue is resolved with the 2.1 upstream image.
Comment 10 Paul Gier 2018-02-21 20:04:21 EST
Great!  We're planning to push out the 2.1.0 upgrade for openshift 3.7 and higher.
Comment 11 Paul Gier 2018-02-22 17:22:12 EST
PRs for upgrading prometheus in examples and installer:
https://github.com/openshift/origin/pull/18727
https://github.com/openshift/openshift-ansible/pull/7258
Comment 13 Paul Gier 2018-04-18 14:06:03 EDT
The master (3.10) and 3.9 branches of openshift have been updated to use prometheus 2.2.1 which should resolve this issue.
Comment 14 Junqi Zhao 2018-04-19 05:42:34 EDT
Tested with prometheus/images/v3.9.22-1; the Prometheus version in the 3.9 image is now 2.2.1, and it passed our sanity testing.

other images
prometheus-alert-buffer/images/v3.9.22-1
prometheus-alertmanager/images/v3.9.22-1
oauth-proxy/images/v3.9.22-1


# openshift version
openshift v3.9.22
kubernetes v1.9.1+a0ce1bc657
etcd 3.2.16
