Bug 1452364 - Max CPU Usage and Memory Usage values 'Not Available' for VMware VMs
Summary: Max CPU Usage and Memory Usage values 'Not Available' for VMware VMs
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: C&U Capacity and Utilization
Version: 5.8.0
Hardware: All
OS: All
Priority: medium
Severity: medium
Target Milestone: GA
Target Release: cfme-future
Assignee: James Wong
QA Contact: Tasos Papaioannou
URL:
Whiteboard: c&u:NOR
Depends On:
Blocks:
 
Reported: 2017-05-18 18:50 UTC by Tasos Papaioannou
Modified: 2018-12-11 16:54 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-12-11 16:54:35 UTC
Category: ---
Cloudforms Team: ---
Target Upstream Version:
Embargoed:



Description Tasos Papaioannou 2017-05-18 18:50:38 UTC
Description of problem:

For VMware VMs with historical C&U data, the Max values for CPU Usage and Memory Usage show as Not Available. All other values display as expected. For example, the Normal Operating Ranges frame shows something like this:

---
CPU
    Max       131.81 MHz
    High       69.71 MHz
    Average    49.65 MHz
    Low        29.59 MHz
CPU Usage
    Max       Not Available
    High        6.38%
    Average     2.51%
    Low         0.00%
Memory
    Max        92.31 MB
    High       80.04 MB
    Average    67.78 MB
    Low        55.51 MB
Memory Usage
    Max       Not Available
    High        2.85%
    Average     2.26%
    Low         1.66%
---


Version-Release number of selected component (if applicable):

5.8.0.15

How reproducible:

100%

Steps to Reproduce:
1.) On a CFME appliance w/ C&U processing enabled, add an existing VMware provider that has historical VM data.
2.) See that Max CPU Usage and Max Memory Usage show as Not Available for the VMs.

Actual results:

Max values for CPU Usage and Memory Usage show as Not Available.

Expected results:

Correct Max values for CPU Usage and Memory Usage display.

Additional info:

The Max values are calculated from the values of :abs_max_cpu_usage_rate_average_value and :abs_max_mem_usage_absolute_average_value in metric_rollups.min_max:

****
./app/models/vm_or_template/right_sizing.rb:

  def max_cpu_usage_rate_average_max_over_time_period
    perfs = VimPerformanceAnalysis.find_perf_for_time_period(self, "daily", :end_date => Time.now.utc, :days => Metric::LongTermAverages::AVG_DAYS)
    perfs.collect do |p|
      # Ignore any CPU bursts to 100% 15 minutes after VM booted
      next if (p.abs_max_cpu_usage_rate_average_value == 100.0) && boot_time && (p.abs_max_cpu_usage_rate_average_timestamp <= (boot_time + 15.minutes))
      p.abs_max_cpu_usage_rate_average_value
    end.compact.max
  end


  def max_mem_usage_absolute_average_max_over_time_period
    perfs = VimPerformanceAnalysis.find_perf_for_time_period(self, "daily", :end_date => Time.now.utc, :days => Metric::LongTermAverages::AVG_DAYS)
    perfs.collect(&:abs_max_mem_usage_absolute_average_value).compact.max
  end
****

but these values do not exist for VMware VMs. The average, high, and low values, on the other hand, are calculated from :max_cpu_usage_rate_average and :max_mem_usage_absolute_average, which do exist, so those values display correctly.
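As a quick way to see this from the appliance, here is a minimal Rails-console sketch (assumptions: the standard metric_rollups association on VmOrTemplate, and "MyVM" as a placeholder VM name) that prints the abs_max_* entries the right-sizing code reads from min_max for each daily rollup; missing values come back as nil:

****
# Hedged sketch: list each daily rollup for a VM and show the abs_max_*
# entries from min_max that right_sizing.rb depends on. "MyVM" is a
# placeholder name; run from the appliance Rails console.
vm = VmOrTemplate.find_by(:name => "MyVM")

vm.metric_rollups.where(:capture_interval_name => "daily").order(:timestamp).each do |p|
  mm = p.min_max || {}
  puts "#{p.timestamp}  " \
       "abs_max_cpu_usage_rate_average_value=#{mm[:abs_max_cpu_usage_rate_average_value].inspect}  " \
       "abs_max_mem_usage_absolute_average_value=#{mm[:abs_max_mem_usage_absolute_average_value].inspect}"
end
****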

Comment 3 Tasos Papaioannou 2017-06-22 15:08:51 UTC
The issue appears to be specific to historical performance data imported after adding the VMware provider. The historical hourly entries added to metric_rollups have empty min_max columns, and the daily entries are missing the abs_*_{value,timestamp} entries in min_max. Hourly rollups performed after the initial import do have the abs_*_{value,timestamp} entries in min_max, which then get rolled up into the subsequent daily rollups.
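A similar hedged console sketch (same assumptions as above; "MyVM" is a placeholder) to confirm that only the imported historical hourly rollups lack the min_max data:

****
# Hedged sketch: split the hourly rollups into those with an empty min_max
# (the imported historical rows) and those with populated entries (rows
# captured after the initial import).
vm = VmOrTemplate.find_by(:name => "MyVM")
hourly = vm.metric_rollups.where(:capture_interval_name => "hourly").to_a

empty, populated = hourly.partition { |r| r.min_max.blank? }
puts "hourly rollups with empty min_max:     #{empty.size}"
puts "hourly rollups with populated min_max: #{populated.size}"
puts "earliest populated timestamp: #{populated.map(&:timestamp).min}"
****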

Comment 4 James Wong 2017-12-12 17:04:28 UTC
Tasos,

Do you mean gap collection by "historical performance data imported"?

regards,
James

Comment 5 Tasos Papaioannou 2017-12-12 17:23:25 UTC
I guess so. I think the description in the BZ is pretty clear, whether you call it "gap collection" or not.

