Bug 1366323 - Decrease the disk requirements of /var/lib/qpidd by decreasing efp-file-size
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Installer
Version: 6.2.0
Hardware: x86_64
OS: Linux
Target Milestone: Unspecified
Assignee: satellite6-bugs
QA Contact: Katello QA List
Depends On:
Reported: 2016-08-11 15:27 UTC by Pavel Moravec
Modified: 2019-08-12 16:20 UTC (History)
7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2018-09-04 18:06:14 UTC
Target Upstream Version:


System ID Private Priority Status Summary Last Updated
Foreman Issue Tracker 16955 0 Normal New Decrease the disk requirements of /var/lib/qpidd by decreasing efp-file-size 2020-02-20 21:18:15 UTC
Red Hat Bugzilla 1366517 0 medium CLOSED Mention storage requirements for /var/lib/qpidd 2021-02-22 00:41:40 UTC

Internal Links: 1366517

Description Pavel Moravec 2016-08-11 15:27:57 UTC
Description of problem:
Exec summary: /var/lib/qpidd requires 2 MB of disk space per Content Host. This needlessly consumes disk space as the number of Content Hosts scales up.

Detailed description:
Satellite 6 is deployed with the qpid broker's default value of efp-file-size=2048 (kB). This means a journal file for a durable qpid queue takes 2 MB on disk.

Each Content Host results in one durable queue, pulp.agent.<UUID>. So /var/lib/qpidd grows by 2 MB per Content Host, which is 20 GB of disk space for 10k Content Hosts.
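The arithmetic above can be sketched as a quick back-of-the-envelope estimate (both values are the defaults quoted in this report; nothing here queries a live broker):

```python
# Rough estimate of /var/lib/qpidd growth: one durable pulp.agent.<UUID>
# queue per Content Host, each journal pre-allocated at efp-file-size kB.
EFP_FILE_SIZE_KB = 2048      # Satellite 6 default, per this report
content_hosts = 10_000

usage_gib = EFP_FILE_SIZE_KB * content_hosts / (1024 ** 2)
print(f"~{usage_gib:.1f} GiB for {content_hosts} Content Hosts")  # ~19.5 GiB
```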

This value is significant enough to be considered when designing disk layouts / partitions, or it is something we can reduce substantially by decreasing the efp-file-size parameter.

Decreasing the parameter has only one side effect: the journal files would generally fill up / become obsolete / be recycled / be rotated faster. But these events occur very seldom, since Satellite/pulp sends qpidd only a relatively small number of small-to-average-sized messages (a message is usually around 1 kB, and one repo sync sends fewer than 100 such messages across multiple queues).

I see several options here:
1) Add a mention of /var/lib/qpidd disk usage with respect to the number of Content Hosts to the documentation
2) Decrease the parameter to, say, 256 kB (8x smaller) for new deployments only (to avoid the headache of converting big journals to smaller ones during upgrade)
3) Decrease the parameter for new deployments and upgrades alike; for this option, there would have to be a tool for converting journals from one size to another (not sure if one exists, but it should be simple to write)
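Option 2 would roughly amount to shipping a smaller default in the broker configuration, along these lines (the file path and exact option syntax are assumptions for illustration; the report only names the efp-file-size parameter and its kB units, so verify against the qpid linearstore documentation):

```
# /etc/qpid/qpidd.conf  (sketch, not a tested configuration)
# Default is 2048 (kB); 256 would shrink per-queue journal files 8x.
efp-file-size=256
```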

I don't have a strong opinion on which option is best, and I don't know what the ideal efp-file-size value would be (that should come from some testing / measuring, I guess).

Version-Release number of selected component (if applicable):
Sat 6.2 GA (applies to any Sat6 version, though)

How reproducible:

Steps to Reproduce:
1. Deploy Sat6
2. Register 10k Content Hosts (to the Satellite or to an external Capsule, doesn't matter)
3. du -k /var/lib/qpidd

Actual results:
3. shows over 20 GB of disk space consumed

Expected results:
3. should show a considerably smaller value, or the documentation should mention this disk space requirement

Additional info:

Comment 2 Pavel Moravec 2016-08-12 08:02:20 UTC
Doc BZ raised as https://bugzilla.redhat.com/show_bug.cgi?id=1366517 since, whichever way we choose, the disk requirement is worth mentioning in the install guide.

Comment 5 Stephen Benjamin 2016-10-14 16:19:15 UTC
Created redmine issue http://projects.theforeman.org/issues/16955 from this bug

Comment 7 Bryan Kearney 2018-09-04 18:06:14 UTC
Thank you for your interest in Satellite 6. We have evaluated this request, and we do not expect this to be implemented in the product in the foreseeable future. We are therefore closing this out as WONTFIX. If you have any concerns about this, please feel free to contact Rich Jerrido or Bryan Kearney. Thank you.
