Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
Red Hat Satellite engineering is moving the tracking of its product development work on Satellite to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "Satellite project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs will be migrated starting at the end of May. If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and tagged with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "Satellite project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/SAT-XXXX", where "X" is a digit). The same link will also be available in a blue banner at the top of the page informing you that the bug has been migrated.
Description of problem:
Since some recent Sat6 version (6.6?), mongo was updated such that it floods /var/log/messages with plenty of entries like:
Mar 18 15:35:25 pmoravec-sat65-on-rhev mongod.27017[2114]: [conn312] command pulp_database.units_erratum appName: "MongoDB Shell" command: getMore { getMore: 172912036936, collection: "units_erratum" } originatingCommand: { find: "units_erratum", filter: {} } planSummary: COLLSCAN cursorid:172912036936 keysExamined:0 docsExamined:7900 cursorExhausted:1 numYields:63 nreturned:7901 reslen:15332981 locks:{ Global: { acquireCount: { r: 128 } }, Database: { acquireCount: { r: 64 } }, Collection: { acquireCount: { r: 64 } } } protocol:op_command 101ms
The logs are spammed by these "slow" operation entries. We should raise the threshold to e.g. 500ms, matching the PostgreSQL default.
Operations in the 100ms-500ms range are usually not interesting for debugging purposes, so we can stop logging them.
To apply the change, it is enough to add the following to /etc/opt/rh/rh-mongodb34/mongod.conf:
operationProfiling.slowOpThresholdMs: 500
Upstream doc for this: https://docs.mongodb.com/manual/reference/configuration-options/#operationProfiling.slowOpThresholdMs
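The change above can be scripted; the following is a minimal sketch (the helper name and the naive append are my assumptions, and since mongod.conf is YAML, the dotted option name from the docs is written as a nested two-level key — the sketch assumes no operationProfiling section exists in the file yet):

```shell
# set_slow_op_threshold CONF_FILE [THRESHOLD_MS]
# Appends an operationProfiling.slowOpThresholdMs setting (nested YAML form)
# to the given mongod config file, unless one is already present.
set_slow_op_threshold() {
  conf="$1"; ms="${2:-500}"
  touch "$conf"
  if ! grep -q 'slowOpThresholdMs' "$conf"; then
    printf 'operationProfiling:\n  slowOpThresholdMs: %s\n' "$ms" >> "$conf"
  fi
}
```

On a Satellite box this would be called as `set_slow_op_threshold /etc/opt/rh/rh-mongodb34/mongod.conf 500`, followed by a mongod restart (e.g. `systemctl restart rh-mongodb34-mongod` for the SCL service) for the change to take effect.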
Version-Release number of selected component (if applicable):
Sat 6.6 or newer
How reproducible:
100%
Steps to Reproduce:
1. use Sat6 normally, e.g. sync a repo
2. search for mongod.27017 logs in /var/log/messages (or journal logs) and spot the lowest durations:
grep mongod.27017 /var/log/messages | awk '{ print $NF }' | grep ms$ | sort -rn
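The grep pipeline above can be wrapped into a small helper that counts how many slow-op entries meet a given duration (a sketch; the function name is mine, and it relies on the "NNNms" suffix mongod appends to each slow-op line):

```shell
# count_slow_ops LOG_FILE MIN_MS
# Counts mongod.27017 log entries whose trailing duration is >= MIN_MS.
count_slow_ops() {
  log="$1"; min_ms="$2"
  grep 'mongod\.27017' "$log" | awk -v min="$min_ms" '
    $NF ~ /ms$/ { d = $NF; sub(/ms$/, "", d); if (d + 0 >= min) n++ }
    END { print n + 0 }'
}
```

For example, `count_slow_ops /var/log/messages 500` would show how many entries would survive the proposed 500ms threshold.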
Actual results:
2. there are usually plenty of log entries, often very long ("get me details about units_rpm for these 1000 unit_ids: .."); the shortest durations are:
# grep mongod.27017 /var/log/messages | awk '{ print $NF }' | grep ms$ | sort -rn
..
102ms
102ms
102ms
102ms
101ms
101ms
100ms
100ms
100ms
#
Expected results:
Only operations taking >= 500ms are logged.
Additional info:
I know (and am glad) we will get rid of mongo in Sat7. If you decide we won't fix this BZ before that time, feel free to close it. But the logs are bothersome.
ewoud++ as this can be worked around via:
mongodb::server::config_data:
  operationProfiling.slowOpThresholdMs: 500
in /etc/foreman-installer/custom-hiera.yaml. Created https://access.redhat.com/solutions/4915491 from this. Still, it makes sense to have this as the default, imho.
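To check that the new threshold is actually in effect after applying the workaround, one can ask mongod directly (a sketch; the helper name and its optional parameter are mine, and it assumes the mongo shell is available on the Satellite server with mongod listening on port 27017 — `db.getProfilingStatus()` reports the effective slowms value):

```shell
# show_profiling_status [MONGO_CMD]
# Prints mongod's effective profiling status, including the slowms threshold.
# The optional MONGO_CMD parameter exists only so the sketch can be exercised
# without a running mongod; it defaults to the real mongo shell.
show_profiling_status() {
  "${1:-mongo}" --quiet --port 27017 --eval 'printjson(db.getProfilingStatus())'
}
```

On a box where the workaround took effect, the output should include `"slowms" : 500`.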
Thanks for raising this bugzilla and documenting the workaround in the KCS.
Based upon discussion in core triage, the team felt it unlikely that this will get addressed prior to the removal of mongo. Since there is a well-defined workaround, the recommendation was to close the bugzilla for now. That said, if a solution is provided upstream, we can certainly pull it in. Thanks!