Bug 681968

Summary: [RFE] use messaging and/or Rest(Json) for bkr.lab.controller.proxy methods
Product: [Retired] Beaker
Reporter: Bill Peck <bpeck>
Component: web UI
Assignee: Raymond Mancy <rmancy>
Status: CLOSED WONTFIX
QA Contact:
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 0.7
CC: bpeck, dcallagh, ebaak, mcsontos, rmancy, stl
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-09-26 00:37:53 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On:
Bug Blocks: 681964

Description Bill Peck 2011-03-03 18:21:00 UTC
Description of problem:

Currently all communication from the lab controllers to the scheduler happens via xmlrpc commands.

Current logs show the following number of requests per hour:

Number of requests per Hour
> [bpeck@lab2 beaker]$ cat proxy.log* | grep '2011-03-03 00:' | grep Time: | wc -l
> 6635
> [bpeck@lab2 beaker]$ cat proxy.log* | grep '2011-03-03 01:' | grep Time: | wc -l
> 17477
> [bpeck@lab2 beaker]$ cat proxy.log* | grep '2011-03-03 02:' | grep Time: | wc -l
> 20708
> [bpeck@lab2 beaker]$ cat proxy.log* | grep '2011-03-03 03:' | grep Time: | wc -l
> 23062
> [bpeck@lab2 beaker]$ cat proxy.log* | grep '2011-03-03 04:' | grep Time: | wc -l
> 16378
> [bpeck@lab2 beaker]$ cat proxy.log* | grep '2011-03-03 05:' | grep Time: | wc -l
> 16270
> [bpeck@lab2 beaker]$ cat proxy.log* | grep '2011-03-03 06:' | grep Time: | wc -l
> 7731
> [bpeck@lab2 beaker]$ cat proxy.log* | grep '2011-03-03 07:' | grep Time: | wc -l
> 13300
> [bpeck@lab2 beaker]$ cat proxy.log* | grep '2011-03-03 08:' | grep Time: | wc -l
> 10360
> [bpeck@lab2 beaker]$ cat proxy.log* | grep '2011-03-03 09:' | grep Time: | wc -l
> 16039
> 
> 
> I took the highest one and divided it out to give us the number of
> requests per minute and per second.
> 
> Number of requests per Minute
> [bpeck@lab2 beaker]$ expr 23062 \/ 60
> 384
> Number of requests per Second
> [bpeck@lab2 beaker]$ expr 23062 \/ 60 \/ 60
> 6

I've configured a cron job to continue monitoring how many requests we generate.

2011-03-03 10: 16007
2011-03-03 11: 14538
2011-03-03 12: 13761
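The per-hour counting the grep pipeline above does can also be sketched in Python. This is a hypothetical helper, not part of Beaker; the log format (a "Time:" line carrying a `YYYY-MM-DD HH:MM:SS` timestamp) is an assumption inferred from the greps, and the sample lines are made up.

```python
import re
from collections import Counter

def requests_per_hour(lines):
    """Tally proxy requests per hour, mirroring:
    grep '2011-03-03 HH:' proxy.log* | grep Time: | wc -l
    (assumed log format, for illustration only)."""
    hours = Counter()
    for line in lines:
        if "Time:" not in line:
            continue
        m = re.search(r"(\d{4}-\d{2}-\d{2} \d{2}):", line)
        if m:
            hours[m.group(1)] += 1
    return hours

# Made-up sample lines in the assumed format:
sample = [
    "2011-03-03 03:15:02 Time: 0.12s recipes.tasks.watchdogs",
    "2011-03-03 03:59:41 Time: 0.08s recipes.tasks.result",
    "2011-03-03 04:00:05 Time: 0.10s recipes.tasks.watchdogs",
]
print(requests_per_hour(sample))
# → Counter({'2011-03-03 03': 2, '2011-03-03 04': 1})
```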

The methods in question are as follows:

recipes.register_file
recipes.upload_file
recipes.tasks.result
taskactions.task_info
recipes.stop
recipesets.stop
jobs.stop
recipes.system_xml
recipes.tasks.extend
tasks.to_dict
recipes.by_log_server
recipes.files
recipes.change_files
recipes.tasks.watchdogs
recipes.tasks.register_file 
recipes.tasks.upload_file 
recipes.tasks.start
recipes.tasks.extend
recipes.tasks.result
recipes.tasks.watchdog
recipes.tasks.stop
recipes.tasks.register_result_file
recipes.tasks.result_upload_file 
push
legacypush


Can we support both a REST API and a message bus for these calls?  Moving away from XML-RPC should improve performance, but we would need before-and-after performance metrics to verify this.

This would also allow other tools to listen for job status and job results, e.g. results going directly into TCMS.

Comment 1 Raymond Mancy 2011-03-15 13:03:08 UTC
I'll have to go through each of these methods, but the clear winners will be those that can be triggered by an event we are currently polling for frequently. Those that don't meet that condition are probably not the highest priority.

Comment 2 Bill Peck 2011-03-15 19:39:03 UTC
I think watchdogs is the only one that we continuously poll.

I still think we could see a speed improvement, because of the overhead of setting up, encoding, and tearing down an XML-RPC connection for every message.  Especially once you add in Kerberos authentication.
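Part of the per-message overhead Comment 2 mentions is the XML-RPC encoding itself. A rough illustration, using only the stdlib: the same payload marshalled as an XML-RPC call versus a JSON message. The method name and field names are taken from the list above but the payload shape is invented for illustration; this says nothing about connection-setup or Kerberos costs, only wire-format size.

```python
import json
import xmlrpc.client

# Hypothetical payload for a recipes.tasks.result call (field names are
# illustrative, not the actual Beaker wire format).
params = {"task_id": 12345, "result": "Pass",
          "path": "/distribution/install", "score": 0}

# XML-RPC request body versus an equivalent JSON message.
xml_body = xmlrpc.client.dumps((params,), methodname="recipes.tasks.result")
json_body = json.dumps({"method": "recipes.tasks.result", "params": params})

print(len(xml_body), len(json_body))  # the JSON body is noticeably smaller
```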

Comment 3 Raymond Mancy 2011-03-22 07:16:04 UTC
(In reply to comment #0)

> This would also allow other tools to listen for Job status and job results, ie:
> results going directly into TCMS.

This can already be done with the change introduced with the task watcher via the bus.
All programs like TCMS need to do is fire up a receiver and know what they are listening for.
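The listen-for-results pattern Comment 3 describes can be sketched with a minimal in-memory stand-in for the bus. This is purely illustrative: the real Beaker bus transport and its topic names are not specified in this bug, and both the `Bus` class and the `beaker.task.result` topic below are made up.

```python
from collections import defaultdict

class Bus:
    """Toy in-memory publish/subscribe bus (stand-in for a real broker)."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subs[topic]:
            callback(message)

received = []
bus = Bus()
# A consumer such as a TCMS importer just registers for the topic it cares about:
bus.subscribe("beaker.task.result", received.append)
# Beaker would publish as results come in (hypothetical message shape):
bus.publish("beaker.task.result", {"task_id": 12345, "result": "Pass"})
print(received)
# → [{'task_id': 12345, 'result': 'Pass'}]
```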

Comment 4 Raymond Mancy 2012-09-26 00:37:53 UTC
This was implemented, and then unimplemented.
Another BZ can be opened at a later time if we want to reimplement the unimplemented implementation (perhaps with a different messaging bus).