Bug 1418026 - goferd errors with "[...] Condition('amqp:resource-limit-exceeded', 'local-idle-timeout expired')" when pushing Errata from Satellite
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite 6
Classification: Red Hat
Component: Qpid
Version: 6.2.7
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: 6.5.0
Assignee: Justin Sherrill
QA Contact: Jan Hutař
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-01-31 16:19 UTC by Ben
Modified: 2019-11-05 22:24 UTC
CC: 24 users

Fixed In Version: gofer-2.12.3, gofer-2.11.7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1646736
Environment:
Last Closed: 2019-05-14 12:36:19 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2019:1222 None None None 2019-05-14 12:36:28 UTC

Description Ben 2017-01-31 16:19:37 UTC
Description of problem:
When executing Errata installation from Satellite, while the RPM installations take place, the following error is seen in the Content Host's /var/log/messages:

Jan 31 15:40:51 client1 goferd: [ERROR][worker-0] gofer.messaging.adapter.proton.reliability:53 - Connection amqps://satellite1:5647 disconnected: Condition('amqp:resource-limit-exceeded', 'local-idle-timeout expired')

Version-Release number of selected component (if applicable):

Content Host:
gofer-2.7.6-1.el7sat.noarch
python-gofer-proton-2.7.6-1.el7sat.noarch
python-gofer-2.7.6-1.el7sat.noarch
katello-agent-2.5.0-5.el7sat.noarch

Satellite Host:
qpid-proton-c-0.9-16.el7.x86_64
libqpid-dispatch-0.4-21.el7sat.x86_64
qpid-cpp-server-linearstore-0.30-11.el7sat.x86_64
python-qpid-0.30-9.el7sat.noarch
tfm-rubygem-qpid_messaging-0.30.0-7.el7sat.x86_64
python-qpid-qmf-0.30-5.el7.x86_64
qpid-dispatch-router-0.4-21.el7sat.x86_64
qpid-cpp-client-devel-0.30-11.el7sat.x86_64
qpid-cpp-client-0.30-11.el7sat.x86_64
qpid-qmf-0.30-5.el7.x86_64
satellite1-qpid-router-client-1.0-1.noarch
satellite1-qpid-client-cert-1.0-1.noarch
satellite1-qpid-router-server-1.0-1.noarch
python-gofer-qpid-2.7.6-1.el7sat.noarch
qpid-cpp-server-0.30-11.el7sat.x86_64
qpid-tools-0.30-4.el7.noarch
satellite1-qpid-broker-1.0-1.noarch

How reproducible:

Every Errata installation action initiated in the Satellite GUI.


Steps to Reproduce:
1. Publish Content View with new RPMs
2. Execute a Collection Action -> Errata Installation against a Content Host
3. Watch /var/log/messages on a Content Host
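
For step 3, a minimal self-contained sketch of the grep pattern that flags the error. The sample line is the one from the description, written to a temp file here so the snippet runs anywhere; on a real Content Host you would grep /var/log/messages directly:

```shell
# Write the goferd error line from the description to a temp file,
# then count matches of the error pattern; on a real Content Host
# you would run the grep against /var/log/messages instead.
sample=$(mktemp)
cat > "$sample" <<'EOF'
Jan 31 15:40:51 client1 goferd: [ERROR][worker-0] gofer.messaging.adapter.proton.reliability:53 - Connection amqps://satellite1:5647 disconnected: Condition('amqp:resource-limit-exceeded', 'local-idle-timeout expired')
EOF
grep -c 'local-idle-timeout expired' "$sample"   # prints 1
rm -f "$sample"
```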

Actual results:

Errata are installed and Satellite reports the task as completed successfully, but an "ERROR" line appears in /var/log/messages on the Content Host.


Expected results:

No error in /var/log/messages on the Content Host.


Additional info:

Messages on the Satellite host at around the same time associated with the consumer "client1":

Jan 31 15:40:19 satellite1 pulp: pulp.plugins.pulp_rpm.plugins.profilers.yum:INFO: Rpms: <[{u'src': u'firefox-45.7.0-1.el7_3.src.rpm', u'name': u'firefox', u'sum': [u'sha256', u'cd21ad7f5a7a75449df379832e08f513424a1f87d6e9d7e977c2c58e7edb0e7d'], u'filename': u'firefox-45.7.0-1.el7_3.x86_64.rpm', u'epoch': u'0', u'version': u'45.7.0', u'release': u'1.el7_3', u'arch': u'x86_64'}, {u'src': u'firefox-45.7.0-1.el7_3.src.rpm', u'name': u'firefox', u'sum': [u'sha256', u'cd21ad7f5a7a75449df379832e08f513424a1f87d6e9d7e977c2c58e7edb0e7d'], u'filename': u'firefox-45.7.0-1.el7_3.x86_64.rpm', u'epoch': u'0', u'version': u'45.7.0', u'release': u'1.el7_3', u'arch': u'x86_64'}]> were found to be related to errata <Unit [key={'errata_id': u'RHSA-2017:0190'}] [type=erratum] [id=d29b9249-b2d2-458b-8e62-d98e7b8276f4]> and applicable to consumer <927de638-7bf9-4c26-9716-6ed42ca2e2cb>
Jan 31 15:41:02 satellite1 pulp: pulp.server.agent.direct.services:INFO: (43494-08352)   user data : {'task_id': 'af54453a-52fd-4264-ba05-d89d1f0a905e', 'consumer_id': '927de638-7bf9-4c26-9716-6ed42ca2e2cb'}

It _looks_ like the issue in the attached Foreman bug.  On my Satellite host I have "idle-timeout-seconds: 0" in /etc/qpid-dispatch/qdrouterd.conf...  I haven't tried commenting that out, yet.
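
For reference, a sketch of where that setting can live in /etc/qpid-dispatch/qdrouterd.conf. The surrounding listener attributes are assumptions for illustration, not copied from this system; only idle-timeout-seconds: 0 is taken from the comment above:

```
# Hypothetical qdrouterd.conf excerpt (dispatch 0.4 syntax).
# idle-timeout-seconds: 0 disables the router's idle timeout on this
# listener, while the AMQP peers may still negotiate their own.
listener {
    addr: 0.0.0.0
    port: 5647
    idle-timeout-seconds: 0
}
```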

Comment 1 pm-sat@redhat.com 2017-01-31 17:09:53 UTC
Upstream bug assigned to jsherril@redhat.com

Comment 5 pm-sat@redhat.com 2017-09-13 20:10:25 UTC
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/16897 has been resolved.

Comment 9 Pavel Moravec 2018-04-24 07:27:02 UTC
Reproducer:

Take a RHEL system with many applicable updates (extreme example: a RHEL 7.0 system, though RHEL 7.4 GA suffices, subscribed to the rhel-7-server-rpms repo) and run:

hammer host package upgrade-all --host <hostname>


Then logs have:

Apr 24 09:17:20 pmoravec-rhel74.gsslab.brq2.redhat.com goferd[30091]: [INFO][worker-0] gofer.rmi.dispatcher:603 - call: Content.update() sn=b4ebaef8-4836-4a09-9d4a-f3b5ce01d91c data={u'task_id': u'05a4efdf-8194-499f-ab88-6f84212b254b', u'consumer_id': u'507d2144-b02c-4189-97b8-cd7c45411528'}
..
Apr 24 09:22:45 pmoravec-rhel74.gsslab.brq2.redhat.com goferd[30091]: [INFO][worker-0] gofer.agent.rmi:193 - Request: b4ebaef8-4836-4a09-9d4a-f3b5ce01d91c, committed
Apr 24 09:22:45 pmoravec-rhel74.gsslab.brq2.redhat.com goferd[30091]: [INFO][worker-0] gofer.agent.rmi:147 - Request: b4ebaef8-4836-4a09-9d4a-f3b5ce01d91c processed in: 5.455 (minutes)
Apr 24 09:22:45 pmoravec-rhel74.gsslab.brq2.redhat.com goferd[30091]: [ERROR][worker-0] gofer.messaging.adapter.proton.reliability:53 - Connection amqps://pmoravec-sat62-rhel7.gsslab.brq2.redhat.com:5647 disconnected: Condition('amqp:resource-limit-exceeded', 'local-idle-timeout expired')
Apr 24 09:22:55 pmoravec-rhel74.gsslab.brq2.redhat.com goferd[30091]: [INFO][worker-0] gofer.messaging.adapter.proton.connection:131 - closed: proton+amqps://pmoravec-sat62-rhel7.gsslab.brq2.redhat.com:5647
Apr 24 09:22:55 pmoravec-rhel74.gsslab.brq2.redhat.com goferd[30091]: [INFO][worker-0] gofer.messaging.adapter.connect:28 - connecting: proton+amqps://pmoravec-sat62-rhel7.gsslab.brq2.redhat.com:5647
Apr 24 09:22:55 pmoravec-rhel74.gsslab.brq2.redhat.com goferd[30091]: [INFO][worker-0] gofer.messaging.adapter.proton.connection:87 - open: URL: amqps://pmoravec-sat62-rhel7.gsslab.brq2.redhat.com:5647|SSL: ca: /etc/rhsm/ca/katello-default-ca.pem|key: None|certificate: /etc/pki/consumer/bundle.pem|host-validation: None
Apr 24 09:22:56 pmoravec-rhel74.gsslab.brq2.redhat.com goferd[30091]: [INFO][worker-0] gofer.messaging.adapter.proton.connection:92 - opened: proton+amqps://pmoravec-sat62-rhel7.gsslab.brq2.redhat.com:5647
Apr 24 09:22:56 pmoravec-rhel74.gsslab.brq2.redhat.com goferd[30091]: [INFO][worker-0] gofer.messaging.adapter.connect:30 - connected: proton+amqps://pmoravec-sat62-rhel7.gsslab.brq2.redhat.com:5647
Apr 24 09:22:57 pmoravec-rhel74.gsslab.brq2.redhat.com goferd[30091]: [INFO][worker-0] gofer.messaging.adapter.proton.connection:131 - closed: proton+amqps://pmoravec-sat62-rhel7.gsslab.brq2.redhat.com:5647
..

Anyway, the error in the client's /var/log/messages can be ignored, as the errata/update request itself succeeded; there was just a connection bounce between goferd and qdrouterd.


Can you please confirm that the errata apply succeeded?

Comment 10 Ben 2018-05-08 08:52:45 UTC
I _think_ my errata applied successfully.  As you say, I think there's just some connection bounce, and then things eventually work.

Comment 16 Pavel Moravec 2018-07-25 06:46:17 UTC
(In reply to Pavel Moravec from comment #15)
> I second this. Some connection bounces might happen; these are not errors in
> themselves, rather a heads-up / warning. An error would be if a subsequent
> connection attempt failed.


A somewhat related bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1608217

Comment 32 errata-xmlrpc 2019-05-14 12:36:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:1222

