Description of problem:
The candlepin event listener will call 'acknowledge' on messages that it processes, but does not call 'release' or 'reject' on messages that it is unable to process.
This can cause some messages to become stuck in the katello_event queue, since they are being held but will never be released.
The best behavior may be to log the message and then reject it, so that potentially bad messages are not reprocessed over and over.
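The log-and-reject behavior proposed above can be sketched as follows. This is an illustrative Python model, not Katello's actual listener (which is Ruby on top of Qpid); the Message class and its method names here are hypothetical stand-ins for the broker API.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("katello_event_listener")


class Message:
    """Toy stand-in for a broker message (hypothetical, for illustration)."""

    def __init__(self, body):
        self.body = body
        self.state = "acquired"  # acquired -> acknowledged | rejected

    def acknowledge(self):
        self.state = "acknowledged"

    def reject(self):
        # The broker moves the message aside instead of holding it forever.
        self.state = "rejected"


def handle(message, processor):
    """Process one event; ack on success, log and reject on failure.

    Rejecting (rather than silently holding) means a bad message cannot
    stay 'acquired' forever and clog the queue, and unlike 'release' it
    is not redelivered over and over.
    """
    try:
        processor(message.body)
    except Exception:
        log.exception("could not process candlepin event: %s", message.body)
        message.reject()
    else:
        message.acknowledge()


# Example: one well-formed and one malformed event body
good = Message('{"type": "entitlement.created"}')
bad = Message("not-json")
handle(good, json.loads)
handle(bad, json.loads)
```

With this pattern, every acquired message ends in a terminal state (acknowledged or rejected), so nothing can remain stuck in the katello_event queue.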
Version-Release number of selected component (if applicable): 6.2.8
How reproducible: not sure how to repro yet
Note: I filed this under the 'hosts' component, but I'm not sure that is the best place for it to live. It is related to the ListenOnCandlepinEvents task.
Could you describe a scenario in which this behavior happens, in terms of the state of the system, and give an example of the messages that lead to messages not being processed?
It is difficult to reproduce outside of a production environment, but I think if you get candlepin to generate a large number of events and then while the listener is working through them, restart foreman-tasks a few times. It may help to put a longer sleep statement in the listener loop, so there's a longer delay between picking the message up and ACKing it.
A delay between picking up and acking is not the same as a missing release due to an error during processing (the delayed message would eventually get acked). What we need is a backtrace/error message from a case where the message is not processed by Katello; otherwise we are not able to help here. Putting needinfo back on the reporter until we get this info from this customer or from somebody else running into the same issue. From what I've seen in the code, I have not found an obvious place where this could happen.
One other note:
Couldn't the limited throughput of one message per second be the cause of the increasing backlog of messages? See https://bugzilla.redhat.com/show_bug.cgi?id=1399877 .
That BZ was fixed in 6.2.7; if the customer behind this BZ is on an older Satellite release and sending lots of candlepin events, they can be affected by bz1399877.
(In reply to Chris Duryee from comment #5)
> It is difficult to reproduce outside of a production environment, but I
> think if you get candlepin to generate a large number of events and then
> while the listener is working through them, restart foreman-tasks a few
> times. It may help to put a longer sleep statement in the listener loop, so
> there's a longer delay between picking the message up and ACKing it.
I confirm this reproducer. A bit more straightforward way (an artificial reproducer, but worthwhile for developers to emulate the bug):
1) stop the foreman-tasks service (to let katello_event_queue populate a bit)
2) generate several hundred candlepin events (e.g. (un)register a Content Host with an activation key, or remove all subscriptions and attach a subscription pool back to another Host) - do that in a loop until katello_event_queue has a few hundred messages
3) start the foreman-tasks service - leave step 2) _running_ (at least I did so; it may or may not matter)
4) once the LOCE task consumes the backlog, check whether katello_event_queue has zero queue depth (see #c7)
5) if some messages are constantly acquired but not acknowledged, you have reproduced it. Otherwise, go to step 1).
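The stuck-message symptom that step 5) looks for can be modeled with a toy in-memory queue. This is a hedged sketch with made-up names, not Qpid's API: the point is only that messages a consumer acquires but never acknowledges (or releases/rejects) keep counting toward queue depth indefinitely.

```python
class ToyQueue:
    """Minimal model of a broker queue (illustrative names, not Qpid's API)."""

    def __init__(self, messages):
        self.pending = list(messages)  # not yet handed to a consumer
        self.acquired = []             # handed out, awaiting acknowledgment

    def depth(self):
        # Held (acquired) messages still count toward queue depth,
        # which is what the reproducer observes.
        return len(self.pending) + len(self.acquired)

    def acquire(self):
        msg = self.pending.pop(0)
        self.acquired.append(msg)
        return msg

    def acknowledge(self, msg):
        self.acquired.remove(msg)


def consume(queue, can_process):
    """Drain the queue like the buggy listener: ack on success only."""
    while queue.pending:
        msg = queue.acquire()
        if can_process(msg):
            queue.acknowledge(msg)
        # else: neither ack nor reject/release -> msg stays 'acquired' forever


q = ToyQueue(["event-%d" % i for i in range(300)])
consume(q, can_process=lambda m: m != "event-42")
print(q.depth())  # 1: the unprocessable message is stuck in 'acquired'
```

After the backlog is consumed, the queue depth never reaches zero: the single unprocessable message remains acquired but unacknowledged, matching what step 5) describes.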
Comment 14 - Satellite Program
2017-08-09 20:11:46 UTC
Upstream bug assigned to jsherril
Comment 17 - Satellite Program
2017-09-07 22:11:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2018:0273