Bug 1436635
Summary: | mongodb claims 3.4G disk space in /var | |
---|---|---|---
Product: | Red Hat OpenStack | Reporter: | Yolanda Robla <yroblamo>
Component: | openstack-tripleo | Assignee: | James Slagle <jslagle>
Status: | CLOSED NOTABUG | QA Contact: | Arik Chernetsky <achernet>
Severity: | unspecified | Docs Contact: |
Priority: | unspecified | |
Version: | 12.0 (Pike) | CC: | aschultz, jdanjou, mburns, michele, rhel-osp-director-maint, royoung
Target Milestone: | --- | Keywords: | Triaged
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2017-06-22 09:25:15 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1355903 | |
Description
Yolanda Robla
2017-03-28 11:06:04 UTC
I am no mongo expert, so take this with a pinch of salt. The option we are talking about seems to be the one described here: https://docs.mongodb.com/manual/reference/configuration-options/#storage.mmapv1.smallFiles

> When true, MongoDB uses a smaller default file size. The storage.mmapv1.smallFiles option reduces the initial size for data files and limits the maximum size to 512 megabytes. storage.mmapv1.smallFiles also reduces the size of each journal file from 1 gigabyte to 128 megabytes. Use storage.mmapv1.smallFiles if you have a large number of databases that each holds a small quantity of data. The storage.mmapv1.smallFiles option can lead the mongod instance to create a large number of files, which can affect performance for larger databases. The storage.mmapv1.smallFiles setting is available only for mongod.

So it seems to make sense only if we use many small databases, as opposed to a few larger ones. We only have very few databases:

```
tripleo:PRIMARY> show databases
admin       (empty)
ceilometer  0.078GB
local       48.055GB
```

The trade-offs of this option do not seem to favour performance here.

So should we instead allocate enough disk space for the large journal when building the image?

That would be my take, yes. Although I would like us to take a somewhat higher-level approach to this and first estimate how much data we need in /var in general. We also store Galera data there when Galera and MongoDB exist on the same role, so a small /var would affect an environment quite a bit, no? Can we maybe take a step back and outline the reasons for your efforts here, just to clarify the context a bit more?

This is the blueprint I am working on, and it explains the need for the different volumes: https://blueprints.launchpad.net/tripleo/+spec/build-whole-disk-images
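As a rough back-of-envelope check of how smallFiles would change the journal footprint (a sketch only: the 1 GB and 128 MB per-file figures come from the documentation quoted above, and the assumption that MMAPv1 preallocates up to three journal files is a common default, not something stated in this bug):

```python
# Journal-size figures from the storage.mmapv1.smallFiles docs quoted above.
FILES = 3            # assumption: MMAPv1 preallocates up to 3 journal files
DEFAULT_MB = 1024    # default journal file size: 1 GB
SMALL_MB = 128       # per-file size with storage.mmapv1.smallFiles: true

print(f"default journal footprint:    {FILES * DEFAULT_MB} MB")  # 3072 MB
print(f"smallFiles journal footprint: {FILES * SMALL_MB} MB")    # 384 MB
```

Under those assumptions, roughly 3 GB of preallocated journal would account for most of the 3.4 GB reported in the summary.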