Bug 596677 - New API results in much lower receive rates
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise MRG
Classification: Red Hat
Component: python-qpid
Version: Development
Hardware: All
OS: Linux
Priority: high
Severity: medium
Target Milestone: 1.3
Assignee: Rafael H. Schloming
QA Contact: Jan Sarenik
Reported: 2010-05-27 09:54 UTC by Gordon Sim
Modified: 2014-12-01 23:14 UTC

Doc Type: Bug Fix
Last Closed: 2010-10-20 11:30:47 UTC


Attachments
test using new api (3.48 KB, application/octet-stream)
2010-05-27 09:55 UTC, Gordon Sim
equivalent test using old api (2.23 KB, application/octet-stream)
2010-05-27 09:55 UTC, Gordon Sim

Description Gordon Sim 2010-05-27 09:54:18 UTC
Using a receiver with a capacity of 25 and asynchronous acknowledgement every 10 messages seemed to give the optimal rate for the new Python client for me: ~800 msgs/sec.

With a similar setup the old client API shows a much higher rate (~2000 msgs/sec).
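
The attachments are not reproduced inline; as a rough sketch of the setup
described above (the qpid.messaging API with prefetch capacity 25 and an
asynchronous acknowledge every 10 messages), where the broker address,
queue name, and message count are placeholders:

from qpid.messaging import Connection

conn = Connection("localhost:5672")       # placeholder broker address
conn.open()
try:
    ssn = conn.session()
    rcv = ssn.receiver("my-queue")        # placeholder queue name
    rcv.capacity = 25                     # prefetch window of 25 messages
    count = 0
    while count < 10000:                  # placeholder message count
        msg = rcv.fetch(timeout=60)
        count += 1
        if count % 10 == 0:
            ssn.acknowledge(sync=False)   # asynchronous batched acknowledge
    ssn.acknowledge()                     # final synchronous acknowledge
finally:
    conn.close()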

Comment 1 Gordon Sim 2010-05-27 09:55:14 UTC
Created attachment 417169 [details]
test using new api

Comment 2 Gordon Sim 2010-05-27 09:55:40 UTC
Created attachment 417170 [details]
equivalent test using old api

Comment 3 Gordon Sim 2010-05-27 09:56:24 UTC
Running the two attached tests shows different rates (I pump messages into the queue with qpid_send from the C++ tree).

Comment 4 Justin Ross 2010-05-27 13:59:08 UTC
I'm testing with the qpid_receive script (on a quad-core Opteron) and I'm seeing much lower rates.

[jross@nicaea turf]$ ./qpid_receive "dude" -c 10000
Thu May 27 09:53:13 2010: 108.321133024 (108.321133024)
Thu May 27 09:53:41 2010: 36.4354567257 (54.5291922337)
Thu May 27 09:54:11 2010: 33.2829451465 (44.9619937357)
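
Each line above is a timestamp, the receive rate in msgs/sec over the most
recent window, and the cumulative average in parentheses. A hypothetical
sketch of that kind of instrumentation (window size and names are assumed,
not taken from the actual script):

import time

WINDOW = 1000                  # assumed: messages per printed line
start = last = time.time()
received = 0

def record_message():
    global received, last
    received += 1
    if received % WINDOW == 0:
        now = time.time()
        print "%s: %s (%s)" % (time.ctime(now),
                               WINDOW / (now - last),     # window rate
                               received / (now - start))  # cumulative rate
        last = now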

qpid_receive pins a cpu, while the broker is barely loaded.

But I'm burying the lead, perhaps.  I'm testing using the F11 qpidd: qpidd-0.5.829175-3.fc11.x86_64.  Its only config is auth=no.  Despite this being an older qpidd, this degree of slowdown is surprising to me.

I'll run again with a fresh qpidd.

Gordon, what qpidd config are you using for your tests?

Comment 5 Justin Ross 2010-05-27 14:48:20 UTC
To complete the prior results, here is the same test, same qpidd, with qpid_receive_old:

[jross@nicaea turf]$ ./qpid_receive_old
Thu May 27 10:42:03 2010: 1097.42025507 (1097.42025507)
Thu May 27 10:42:04 2010: 793.555749967 (921.073721952)
Thu May 27 10:42:05 2010: 908.085423124 (916.703202459)
Thu May 27 10:42:06 2010: 1104.64385587 (957.426594625)
Thu May 27 10:42:07 2010: 1109.22920769 (984.369606484)
Thu May 27 10:42:08 2010: 1110.39296431 (1003.3486787)

The performance delta here is 10:1 between old and new.

I didn't note it before, but the (new api) qpid_receive case slows to a crawl after a short time.

Comment 6 Justin Ross 2010-05-27 15:17:10 UTC
Now my results with a freshly built qpidd (r948637).

[jross@nicaea turf]$ ./qpid_receive my-queue
Thu May 27 11:05:35 2010: 263.266733895 (263.266733895)
Thu May 27 11:05:54 2010: 52.0708629355 (86.9450782521)
Thu May 27 11:06:29 2010: 29.0594775986 (52.2509644955)
Thu May 27 11:07:27 2010: 17.172135061 (34.5873899344)

[jross@nicaea turf]$ ./qpid_receive_old
Thu May 27 11:08:41 2010: 1023.53421197 (1023.53421197)
Thu May 27 11:08:42 2010: 698.846254137 (830.586579553)
Thu May 27 11:08:44 2010: 698.300575465 (781.253073377)
Thu May 27 11:08:45 2010: 687.873862852 (755.60948486)
Thu May 27 11:08:46 2010: 763.113539571 (757.098465992)

For all of my runs, I'm using the C++ spout to generate the messages:

  spout.cpp -c 10000 -a "my-queue; {create: always}"

And all runs have been on F11.

The qpid_receive case is evidently bogging down.  When I interrupt it, it's always in this code:

Traceback (most recent call last):
  File "./qpid_receive", line 90, in <module>
    msg = rcv.fetch(timeout=timeout)
  File "<string>", line 6, in fetch
  File "/home/boston/jross/lib/python/qpid/messaging/endpoints.py", line 917, in fetch
    self._ewait(lambda: self.linked)
  File "/home/boston/jross/lib/python/qpid/messaging/endpoints.py", line 881, in _ewait
    result = self.session._ewait(lambda: self.error or predicate(), timeout)
  File "/home/boston/jross/lib/python/qpid/messaging/endpoints.py", line 527, in _ewait
    result = self.connection._ewait(lambda: self.error or predicate(), timeout)
  File "/home/boston/jross/lib/python/qpid/messaging/endpoints.py", line 185, in _ewait
    self.check_error()
  File "/home/boston/jross/lib/python/qpid/messaging/endpoints.py", line 178, in check_error
    raise self.error
qpid.messaging.exceptions.InternalError: Traceback (most recent call last):
  File "/home/boston/jross/lib/python/qpid/messaging/driver.py", line 453, in dispatch
    self.engine.dispatch()
  File "/home/boston/jross/lib/python/qpid/messaging/driver.py", line 740, in dispatch
    self.process(ssn)
  File "/home/boston/jross/lib/python/qpid/messaging/driver.py", line 968, in process
    messages = [m for m in ssn.acked if m not in sst.acked]
KeyboardInterrupt
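
The last frame is suggestive: at driver.py line 968, "m not in sst.acked"
is a linear scan of a list, so the comprehension costs
O(len(ssn.acked) * len(sst.acked)), and acknowledgement processing degrades
quadratically as acks accumulate, which matches the slow-to-a-crawl
behaviour. A minimal sketch of the usual remedy, set-based membership
(illustrative only, assuming the items are hashable; not necessarily the
fix that was committed):

# Hot spot: list membership test is O(n) per acked message.
messages = [m for m in ssn.acked if m not in sst.acked]

# Set-based rewrite: one pass to build the set, then O(1) lookups.
sst_acked = set(sst.acked)
messages = [m for m in ssn.acked if m not in sst_acked]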

Comment 7 Gordon Sim 2010-05-27 15:20:35 UTC
I'm running with no special config on the qpidd side. The numbers in the description were from trunk on F11. On RHEL 5 I do see much lower rates (~300 msgs/sec), which appears to be down to the Python version (2.4 vs. 2.6).

Comment 8 Gordon Sim 2010-05-27 15:21:19 UTC
And qpid_receive_old shows rates of just under 700 msgs/sec on RHEL 5 / Python 2.4.

Comment 9 Jan Sarenik 2010-06-24 11:54:06 UTC
(For more info on strange messages containing "Ignoring frame"
produced by spout, see bug 607550)

# /usr/sbin/qpidd --auth=no --mgmt-enable=no --no-module-dir -d
# cd /usr/share/qpidc/examples/messaging/
# make
...
# ./spout -c 10000 "my-queue; {create: always}"
...
# python qpid_receive_old
Thu Jun 24 13:09:22 2010: 1610.14818893 (1610.14818893)
Thu Jun 24 13:09:22 2010: 1626.67964607 (1618.37170192)
Thu Jun 24 13:09:23 2010: 1630.35901837 (1622.34783984)
Thu Jun 24 13:09:24 2010: 1631.11097284 (1624.52978374)
Thu Jun 24 13:09:24 2010: 1619.97906617 (1623.61759601)
Thu Jun 24 13:09:25 2010: 1490.14828992 (1599.73677834)
Thu Jun 24 13:09:25 2010: 1490.38180004 (1583.14232538)
Thu Jun 24 13:09:26 2010: 1619.58059996 (1587.60719014)
Thu Jun 24 13:09:27 2010: 1598.99416299 (1588.86439407)
Thu Jun 24 13:09:27 2010: 1625.84983648 (1592.48704338)
# ./spout -c 10000 "my-queue; {create: always}"
...
# python qpid_receive my-queue
Thu Jun 24 13:10:30 2010: 1231.62142337 (1231.62142337)
Thu Jun 24 13:10:31 2010: 1143.07765933 (1185.69880627)
Thu Jun 24 13:10:32 2010: 1027.85294289 (1127.95913179)
Thu Jun 24 13:10:33 2010: 1224.17574784 (1150.56692655)
Thu Jun 24 13:10:34 2010: 1279.1439323 (1174.17201398)
Thu Jun 24 13:10:34 2010: 1283.10556342 (1191.02468641)
Thu Jun 24 13:10:35 2010: 1140.30033345 (1183.50380255)
Thu Jun 24 13:10:36 2010: 1305.48459879 (1197.49006402)
Thu Jun 24 13:10:37 2010: 1102.50300511 (1186.13534487)
Thu Jun 24 13:10:38 2010: 1145.65258541 (1181.95877743)

Verified on: RHEL5 x86_64 and i386
  qpid-cpp-server-0.7.946106-4.el5
  python-qmf-0.7.946106-4.el5
  python-qpid-0.7.946106-2.el5
  qpid-cpp-client-devel-0.7.946106-4.el5
  qpid-tools-0.7.946106-4.el5

SUMMARY (cumulative rates, msgs/sec):

             x86_64     |     i386
------------------------|----------------
OLD API:  1592.48704338 | 1343.43796218
NEW API:  1181.95877743 | 1092.57196086

Unable to verify on RHEL 4, as qpid_receive is not compatible
with the Python found there.

Machine:
  IBM X40 laptop
  Intel(R) Core(TM)2 Duo CPU T7300 @ 2.00GHz, 4 MB cache
  2 GB RAM

