Bug 510747
| Summary: | Out of Bounds exception when sending large QMF response |
|---|---|
| Product: | Red Hat Enterprise MRG |
| Component: | qpid-qmf |
| Status: | CLOSED ERRATA |
| Severity: | urgent |
| Priority: | urgent |
| Version: | 1.1.1 |
| Target Milestone: | 1.3 |
| Hardware: | All |
| OS: | Linux |
| Reporter: | Matthew Farrellee <matt> |
| Assignee: | Ted Ross <tross> |
| QA Contact: | Jan Sarenik <jsarenik> |
| CC: | fnadge, freznice, iboverma, jsarenik, tross |
| Doc Type: | Bug Fix |
| Doc Text: | Previously, a QMF method would exit with a segmentation fault when the result was larger than 64 kB. With this update, this method works as expected, even for larger results. |
| Last Closed: | 2010-10-14 16:01:58 UTC |
Description
Matthew Farrellee
2009-07-10 15:03:50 UTC
*** Bug 508145 has been marked as a duplicate of this bug. ***

Fix committed upstream at revision 929716.

May I ask you for more info, please? An example would be much appreciated. Raising NEEDINFO.

I ran into the bug when submitting jobs to a schedd and then querying for all of them. However, the broker has an echo method. You may be able to reproduce it simply by sending more than 64 kB of data to that method. There are some uncertainties:
1. There is no condor_job_server in the 1.1.1 Grid release.
2. When I try to run the 1.3 RC Grid against the 1.1.1 broker, I cannot access grid objects via qpid-tool (either 1.3 RC or 1.1.1), and I think this may be caused by differing QMF versions.

How should I verify it?
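The echo-based reproduction suggested above can be sketched as follows. This is a rough sketch only, assuming the python-qmf console API and a broker running with authentication disabled; `oversized_payload` and `reproduce` are illustrative helper names, not part of the original report.

```python
def oversized_payload(size=70000):
    """Build a body larger than the 64 kB (65536-byte) limit that
    triggered the crash described in this bug."""
    return "x" * size

def reproduce(broker_url="localhost:5672"):
    # Assumption: python-qmf installed and a local broker with auth=no.
    from qmf.console import Session
    sess = Session()
    broker = sess.addBroker(broker_url)
    # The broker management object exposes an echo(sequence, body) method;
    # before the fix, echoing a body larger than 64 kB crashed the agent.
    objs = sess.getObjects(_class="broker",
                           _package="org.apache.qpid.broker")
    result = objs[0].echo(1, oversized_payload())
    sess.delBroker(broker)
    return result
```

With the fix applied, the call should return the echoed body intact instead of segfaulting the broker.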
Technical note added. If any revisions are required, please edit the "Technical Notes" field
accordingly. All revisions will be proofread by the Engineering Content Services team.
New Contents:
Previously, the Qpid Management Framework (QMF) method would exit with a segmentation fault when the result was larger than 10 MB.
Technical note updated. If any revisions are required, please edit the "Technical Notes" field
accordingly. All revisions will be proofread by the Engineering Content Services team.
Diffed Contents:
@@ -1 +1 @@
-Previously, the Qpid Management Framework (QMF) method would exit with a segmentation fault when the result was larger than 10 MB.
+Previously, the Qpid Management Framework (QMF) method would exit with a segmentation fault when the result was larger than 64 kB. With this update, this method works as expected, even for larger results.
Technical note updated. If any revisions are required, please edit the "Technical Notes" field
accordingly. All revisions will be proofread by the Engineering Content Services team.
Diffed Contents:
@@ -1 +1 @@
-Previously, the Qpid Management Framework (QMF) method would exit with a segmentation fault when the result was larger than 64 kB. With this update, this method works as expected, even for larger results.
+Previously, a QMF method would exit with a segmentation fault when the result was larger than 64 kB. With this update, this method works as expected, even for larger results.
Created attachment 452111 [details]
Verification scripts

During a phone meeting I was told this bug does not have to be reproduced to verify it is working.

[user@host bz601828]$ ./runtest.sh
x86_64
redhat-release-5Server-5.5.0.2
condor-qmf-7.4.4-0.16.el5
python-qpid-0.7.946106-14.el5
python-qmf-0.7.946106-13.el5
Clean: .
Submit: ..Submitting job(s)............
12 job(s) submitted to cluster 1.
Verify: ...SUCCESS

I will have to finish it for RHEL4 and RHEL5 i386 tomorrow. Clearing NEEDINFO.

x86_64
redhat-release-4AS-9
condor-qmf-7.4.4-0.16.el4
python-qpid-0.7.946106-14.el4
python-qmf-0.7.946106-13.el4
Clean: .
Submit: ..Submitting job(s)............
12 job(s) submitted to cluster 1.
Verify: SUCCESS

i686
redhat-release-4AS-9
condor-qmf-7.4.4-0.16.el4
python-qpid-0.7.946106-14.el4
python-qmf-0.7.946106-13.el4
qpid-cpp-server-0.7.946106-17.el4
Clean: .
Submit: ..Submitting job(s)............
12 job(s) submitted to cluster 1.
Verify: SUCCESS

It is vital to set auth=no on the broker for Condor QMF agents to appear on all the latest qpid-cpp-server-0.7.946106-17 builds. The test should run under an unprivileged user which has the sudo NOPASSWD right; see the sudo(8) manual page for more info.

Created attachment 452296 [details]
Updated verification scripts
$ ./runtest.sh 100
i686
redhat-release-5Server-5.5.0.2
qpid-cpp-server-0.7.946106-17.el5
condor-qmf-7.4.4-0.16.el5
python-qpid-0.7.946106-14.el5
python-qmf-0.7.946106-13.el5
Clean: .
Submit: ..Submitting job(s)....................................................................................................
100 job(s) submitted to cluster 1.
Verify: SUCCESS
Verified on all supported architectures and RHEL versions.

Created attachment 452353 [details]
Fixed verification scripts
Sorry, the test was deleting /var/lib/qpidd which contains SASL database.
Here are the updated scripts. No need for auth=no anymore.
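For reference, the earlier verification runs required disabling broker authentication. With the qpidd C++ broker that can be done at startup, as a sketch (test setups only; the config path is an assumption based on the default packaging):

```shell
# Start the broker with SASL authentication disabled so QMF agents
# (e.g. Condor's) can register without credentials.
qpidd --auth no --daemon

# Equivalent persistent setting (assumed default config file location):
echo "auth=no" >> /etc/qpidd.conf
```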
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you. http://rhn.redhat.com/errata/RHSA-2010-0773.html