New commit detected on ManageIQ/manageiq/gaprindashvili:
https://github.com/ManageIQ/manageiq/commit/c31d36cc350fb8e084ee0a5e5125113d6cbd1b36

commit c31d36cc350fb8e084ee0a5e5125113d6cbd1b36
Author:     Oleg Barenboim <chessbyte>
AuthorDate: Thu Jan 25 11:27:18 2018 -0500
Commit:     Satoe Imaishi <simaishi>
CommitDate: Fri Jan 26 10:11:20 2018 -0500

    Merge pull request #16826 from gtanzillo/radar

    Radar Project POC
    (cherry picked from commit e7a1398e57dbd07fcbb53891bce3d7b9599bfe09)

    https://bugzilla.redhat.com/show_bug.cgi?id=1539074

 tools/radar/capture_radar.rb           |  38 +++++
 tools/radar/report_radar.rb            |  61 +++++++
 tools/radar/rollup_radar_mixin.rb      |  83 +++++++++
 tools/radar/rollup_radar_mixin_spec.rb | 304 +++++++++++++++++++++++++++++++++
 4 files changed, 486 insertions(+)
 create mode 100755 tools/radar/capture_radar.rb
 create mode 100755 tools/radar/report_radar.rb
 create mode 100644 tools/radar/rollup_radar_mixin.rb
 create mode 100644 tools/radar/rollup_radar_mixin_spec.rb
Gregg, so where is this report found? What are the specifics around capturing the data? I assume it looks something like the following; could you please fill in the gaps.

1. Deploy CF (does not require to be on top of OCP).
2. Manage an OCP 3.7 environment (including metric collection).
3. Provision some number of EAP containers, tagging them with some unique label. **** Are there any requirements around this? Do the labels have to start with something or be of a certain type?
4. Generate some load on them.
5. Repeat 3 & 4, changing the number and load of the containers, for a second, third, etc. hour.
6. Run the report. **** How do we do this? Where is it? Does it take the label to be measured, etc.?
7. Then verify that the cumulative core counts by hour reported in the CF report match the spiked core counts that QE provisioned.
Dave, that's correct. Here are the details of how the collection and calculation of the usage was implemented.

At a high level, the captured usage is defined by calculating the maximum usage in a given clock hour, summing all usage by containers spawned from images with a com.redhat.component label, based on real-time metrics data collected by CloudForms. This is done by grouping the real-time metrics data of all containers running images with the same label, summing the usage at every 20-second interval, and selecting the maximum CPU cores usage value within each hour for each OpenShift label. Only the discovered maximum value per label is stored.

Included with the tool is a script (capture_radar.rb) that captures and calculates the max CPU cores usage of all containers running the same image label. That script should run every hour; instructions for setting that up are below. There is another script (report_radar.rb) for generating the report in CSV format; its output is a CSV file named report_radar.csv.

Here are the details that go with items 6 and 7 of comment #3:

6. Set up the tool to start capturing and calculating usage data for the report.
Create a cron job to run the capture script every hour. Edit /etc/crontab (e.g. with vi) and add this line to the bottom of the file:

15 * * * * root cd /var/www/miq/vmdb && source /etc/default/evm && bundle exec tools/radar/capture_radar.rb

This will start capturing data at the 15-minute mark of the next hour and every hour thereafter. You'll need to let it run for at least an hour or more before you'll be able to see anything on the report. The data is stored on the appliance file system in a SQLite3 DB named ./db/radar.sqlite3.

6a. Run the report.
For script usage, run:

./tools/radar/report_radar.rb --help

To run for 1 day, for example:

./tools/radar/report_radar.rb -d 1

The script will generate a CSV file in the current directory named report_radar.csv.
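For reference, here is a minimal Ruby sketch of the hourly rollup described above. This is an illustration only, not the actual code in tools/radar/rollup_radar_mixin.rb: the sample hash shape and field names (:label, :timestamp, :cpu_usage_rate, :derived_vm_numvcpus) are assumptions, and the cores-used formula is a stand-in for however CloudForms derives cores from its real-time metrics.

# Illustrative only: assumes each 20-second metric sample is a hash like
#   { :label => "eap", :timestamp => Time, :cpu_usage_rate => Float (percent),
#     :derived_vm_numvcpus => Integer }
# and that cores used = cpu_usage_rate / 100.0 * derived_vm_numvcpus.
def max_hourly_cores_by_label(samples)
  # Bucket every sample by [image label, the clock hour it falls in].
  by_label_hour = samples.group_by do |s|
    [s[:label], Time.at(s[:timestamp].to_i - (s[:timestamp].to_i % 3600)).utc]
  end

  by_label_hour.map do |(label, hour), hour_samples|
    # Sum cores across all containers at each 20-second timestamp...
    interval_totals = hour_samples.group_by { |s| s[:timestamp] }.values.map do |interval|
      interval.sum { |s| s[:cpu_usage_rate] / 100.0 * s[:derived_vm_numvcpus] }
    end
    # ...then keep only the peak sum for the hour; per the description above,
    # only this per-label, per-hour maximum is stored.
    { :label => label, :hour => hour, :max_cores => interval_totals.max }
  end
end

Each resulting row is one label/hour maximum, which matches what the report script then reads back out for the requested number of days.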
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2018:0380