Bug 688662 - Error in client.log on pulp-client bind
Summary: Error in client.log on pulp-client bind
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Pulp
Classification: Retired
Component: user-experience
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Jeff Ortel
QA Contact: Preethi Thomas
URL:
Whiteboard:
Depends On:
Blocks: verified-to-close
 
Reported: 2011-03-17 16:32 UTC by Preethi Thomas
Modified: 2013-09-09 16:31 UTC
CC List: 3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2011-08-16 12:10:34 UTC
Embargoed:



Description Preethi Thomas 2011-03-17 16:32:42 UTC
Description of problem:

I see the following error in client.log on consumer bind:
ConnectionError: Enqueue capacity threshold exceeded on queue "49ba2548-098b-4784-834b-98323b2d113a:0.0". (JournalImpl.cpp:616)(501)



[root@preethi ~]# rpm -q pulp
pulp-0.0.151-1.fc14.noarch


Steps to Reproduce:
1. Set up a Pulp server
2. Have 2 CDSes configured
3. CDS 1 associated with repo1 and repo2
4. Client bound to repo1
5. CDS 2 associated with repo2
6. Try to bind the client to repo2


client.log

2011-03-17 12:24:24,287 [ERROR][Actions] __call__() @ action.py:117 - Enqueue capacity threshold exceeded on queue "49ba2548-098b-4784-834b-98323b2d113a:0.0". (JournalImpl.cpp:616)(501)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/gofer/agent/action.py", line 115, in __call__
    self.target()
  File "/usr/lib/gofer/plugins/pulp.py", line 75, in heartbeat
    p.send(topic, ttl=delay, agent=myid, next=delay)
  File "/usr/lib/python2.7/site-packages/gofer/messaging/producer.py", line 56, in send
    sender = self.session().sender(address)
  File "<string>", line 6, in sender
  File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 576, in sender
    sender._ewait(lambda: sender.linked)
  File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 783, in _ewait
    result = self.session._ewait(lambda: self.error or predicate(), timeout)
  File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 550, in _ewait
    result = self.connection._ewait(lambda: self.error or predicate(), timeout)
  File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 194, in _ewait
    self.check_error()
  File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 187, in check_error
    raise self.error
ConnectionError: Enqueue capacity threshold exceeded on queue "49ba2548-098b-4784-834b-98323b2d113a:0.0". (JournalImpl.cpp:616)(501)
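For context, gofer's heartbeat action builds a sender on a durable topic address each time it fires, and the failure above happens while the sender is being linked, before any message goes out. A minimal sketch of the same qpid.messaging send path (the broker endpoint and message content here are illustrative assumptions; gofer takes both from its own config):

# Sketch of the send path from the traceback above.
# Broker host and message fields are assumed, not taken from this box.
from qpid.messaging import Connection, Message

address = ('heartbeat;{create:always,'
           'node:{type:topic,durable:True},'
           'link:{durable:True,x-declare:{arguments:{no-local:True}}}}')

conn = Connection('localhost:5672')  # assumed broker endpoint
conn.open()
try:
    ssn = conn.session()
    snd = ssn.sender(address)  # the 501 surfaces here, while linking
    snd.send(Message(content={'agent': 'myid'}, ttl=10))
finally:
    conn.close()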

Comment 1 Preethi Thomas 2011-03-17 18:55:19 UTC
So I checked my pulp server, and the CPU is at nearly 100%:

 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
27440 root      20   0  751m 118m 6660 S 99.6  2.0 574:42.12 python

Comment 2 John Matthews 2011-03-17 19:00:42 UTC
On Preethi's box, this is what we saw when running strace on the process from comment #1:

# strace -p 27440
Process 27440 attached - interrupt to quit
futex(0x26e9cc0, FUTEX_WAIT_PRIVATE, 0, NULL

Comment 3 Jeff Ortel 2011-03-18 22:17:08 UTC
Something is hosed up inside QPID.


=====
Send on address:

heartbeat;{create:always,node:{type:topic,durable:True},link:{durable:True,x-declare:{arguments:{no-local:True}}}}

Trace:

  File "/usr/lib/python2.7/site-packages/gofer/messaging/producer.py", line 57, in send
    sender = self.session().sender(address)
  File "<string>", line 6, in sender
  File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 576, in sender
    sender._ewait(lambda: sender.linked)
  File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 783, in _ewait
    result = self.session._ewait(lambda: self.error or predicate(), timeout)
  File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 550, in _ewait
    result = self.connection._ewait(lambda: self.error or predicate(), timeout)
  File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 194, in _ewait
    self.check_error()
  File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 187, in check_error
    raise self.error
ConnectionError: Enqueue capacity threshold exceeded on queue "49ba2548-098b-4784-834b-98323b2d113a:0.0". (JournalImpl.cpp:616)(501)


qpid-cpp-client-ssl-0.7.946106-3.fc14.x86_64
qpid-cpp-client-0.7.946106-3.fc14.x86_64
qpid-cpp-server-store-0.7.946106-3.fc14.x86_64
qpid-cpp-server-ssl-0.7.946106-3.fc14.x86_64
python-qpid-0.7.946106-10.fc14.noarch
qpid-cpp-server-0.7.946106-3.fc14.x86_64

The queue appears to be empty:

qpid: show 381
Object of type: org.apache.qpid.broker:queue:_data(3a30c319-5ef2-f211-ba1f-4900d5f75435)
    Attribute              381
    =================================================================
    vhostRef               477
    name                   49ba2548-098b-4784-834b-98323b2d113a:0.0
    durable                True
    autoDelete             False
    exclusive              False
    arguments              {u'no-local': 1}
    altExchange            478
    msgTotalEnqueues       11
    msgTotalDequeues       11
    msgTxnEnqueues         0
    msgTxnDequeues         0
    msgPersistEnqueues     11
    msgPersistDequeues     11
    msgDepth               0
    byteDepth              0
    byteTotalEnqueues      1782
    byteTotalDequeues      1782
    byteTxnEnqueues        0
    byteTxnDequeues        0
    bytePersistEnqueues    1782
    bytePersistDequeues    1782
    consumerCount          0
    consumerCountHigh      0
    consumerCountLow       0
    bindingCount           2
    bindingCountHigh       2
    bindingCountLow        2
    unackedMessages        0
    unackedMessagesHigh    0
    unackedMessagesLow     0
    messageLatencySamples  0s
    messageLatencyMin      0s
    messageLatencyMax      0s
    messageLatencyAverage  0s
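Depth is zero and total enqueues equal total dequeues, so the 501 is not a backlog on this queue; the capacity being exceeded is in the store journal itself (per JournalImpl.cpp). One way to see how much the store is holding on disk (the data directory is the package default, an assumption for this box):

# Sketch: total bytes under the broker store directory.
# /var/lib/qpid is the qpid-cpp-server-store default data dir;
# adjust if the broker runs with a different --data-dir.
import os

total = 0
for dirpath, dirnames, filenames in os.walk('/var/lib/qpid'):
    for name in filenames:
        total += os.path.getsize(os.path.join(dirpath, name))
print 'store bytes: %d' % total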

Comment 4 Jeff Ortel 2011-03-18 22:28:57 UTC
(In reply to comment #2)
> On Preethi's box for comment #1 this is what we saw with strace.
> 
> # strace -p 27440
> Process 27440 attached - interrupt to quit
> futex(0x26e9cc0, FUTEX_WAIT_PRIVATE, 0, NULL

Not surprised to see this, but how can waiting on a mutex peg the CPU?
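One way to get the Python-level answer is a stack-dump signal handler in the agent; it has to be installed before the hang, so this is only a sketch for next time:

# Sketch: dump all thread stacks on SIGUSR1 (kill -USR1 <pid>),
# so a pegged Python process can be inspected without strace/gdb.
import signal, sys, traceback

def dump_stacks(signum, frame):
    for tid, stack in sys._current_frames().items():
        print >> sys.stderr, '--- thread %d ---' % tid
        traceback.print_stack(stack, file=sys.stderr)

signal.signal(signal.SIGUSR1, dump_stacks)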

Comment 5 Ted Ross 2011-03-23 15:02:24 UTC
Update:  The file system (/) where the messaging journal is stored is full on this system.  This is likely causing problems related to durable queues.
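If the journal lives on /, a full root filesystem will make durable enqueues fail exactly like the 501 above. A quick check (the path is the assumed store default):

# Sketch: free space on the filesystem holding the qpid store.
import os

st = os.statvfs('/var/lib/qpid')
print 'free: %d MB' % (st.f_bavail * st.f_frsize / (1024 * 1024))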

Comment 6 Preethi Thomas 2011-05-12 19:15:30 UTC
verified

pulp-0.0.174-1.fc14.noarch
[root@preethi ~]# rpm -qa |grep pulp
pulp-common-0.0.174-1.fc14.noarch
pulp-0.0.174-1.fc14.noarch
pulp-client-0.0.174-1.fc14.noarch

[root@pulp-cds ~]# rpm -q pulp-cds
pulp-cds-0.0.174-1.fc14.noarch

Comment 7 Preethi Thomas 2011-08-16 12:10:34 UTC
Closing with Community Release 15

pulp-0.0.223-4.


