Summary: Impossible to retrieve results for large jobs using command line tools
Product: [Community] Beaker
Reporter: Hubert Kario <hkario>
Status: CLOSED WONTFIX
QA Contact: tools-bugs <tools-bugs>
Version: 26
CC: azelinka, bpeck, cbouchar, mastyk, tklohna
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Last Closed: 2019-10-01 17:50:02 UTC
Type: Bug
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On:
Bug Blocks: 1731115, 1731116
Description Hubert Kario 2019-07-16 11:42:50 UTC
Description of problem:
If a job includes a lot of test phases (a few thousand), retrieving them using the command line tool is not possible: the command times out or returns an XML-RPC error.

Version-Release number of selected component (if applicable): 26.5

How reproducible: always

Steps to Reproduce:
1. Create a job that uploads a few thousand results to Beaker.
2. Try to retrieve it using bkr job-results.

Actual results: timeout or XML-RPC fault: <type 'exceptions.MemoryError'>

Expected results: results of the job, as for smaller jobs

Additional info:
Comment 2 Tomas Klohna 🔧 2019-07-16 15:55:18 UTC
Hello Hubert, I believe we talked about this at the meetup. You are already hitting the memory limit on the server, so there is very little we can do about it.

There are two workarounds:
1) Use bkr job-results <Job/Recipe/Task ID> --no-logs, which will output the results without log info.
2) Use bkr job-results <Recipe/Task ID>, which will output info directly for the recipe/task you're asking for.

You can pipe these two together (the job-results output contains the IDs as well) and that way you can ask, for example, only for the logs of aborted tasks.
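The "pipe these two together" workaround can be sketched in Python: parse the output of `bkr job-results --no-logs` for the whole job, collect the IDs of aborted tasks, then fetch only those tasks individually. The XML below is a simplified assumption of the real job-results shape (element and attribute names may differ), not actual Beaker output.

```python
import xml.etree.ElementTree as ET

# Simplified sample of what `bkr job-results --no-logs J:1234567`
# might return; element/attribute names are an assumption here.
results_xml = """
<job id="1234567">
  <recipeSet>
    <recipe id="7654321">
      <task id="111" name="/suite/setup" status="Completed"/>
      <task id="222" name="/suite/huge-test" status="Aborted"/>
    </recipe>
  </recipeSet>
</job>
"""

def aborted_task_ids(xml_text):
    """Return IDs of aborted tasks, each of which can then be fetched
    on its own with `bkr job-results T:<id>` (with logs) instead of
    pulling the entire job in one oversized XML-RPC response."""
    root = ET.fromstring(xml_text)
    return [t.get("id") for t in root.iter("task")
            if t.get("status") == "Aborted"]

print(aborted_task_ids(results_xml))  # -> ['222']
```

Each per-task request is small, so it stays under the server-side memory limit that the whole-job request hits.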
Comment 3 Hubert Kario 2019-07-16 17:35:10 UTC
Yes, I mentioned issues with those tasks at the meetup. No, I didn't hit this specific issue before the meetup. I'm hitting it because I'm using tools like beaker-jobwatch or tcms-results; I don't use `bkr job-results` directly.
Comment 4 Tomas Klohna 🔧 2019-07-16 18:22:37 UTC
Then I recommend opening an issue with those projects and pointing them to this ticket.
Comment 5 Tomas Klohna 🔧 2019-10-01 13:05:28 UTC
Hubert, would you mind if I close this? I see that the provided workaround was implemented.
Comment 6 Hubert Kario 2019-10-01 17:48:25 UTC
I'm sorry for the late reply. Yes, it's fixed; large jobs are handled correctly now.
Comment 7 Tomas Klohna 🔧 2019-10-01 17:50:02 UTC