Bug 781470 - Stored Procedure dbo.dwh_history_aggregate_level fails to execute successfully
Summary: Stored Procedure dbo.dwh_history_aggregate_level fails to execute successfully
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Deadline: 2012-03-01
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine-dwh
Version: 2.2.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.0.3
Assignee: Yaniv Lavi
QA Contact: David Botzer
URL:
Whiteboard: infra
Depends On:
Blocks: 759133 786475 786477
 
Reported: 2012-01-13 15:02 UTC by Juan J. Cavallaro
Modified: 2018-11-26 17:52 UTC
CC List: 14 users

Fixed In Version: SI7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 786475 786477
Environment:
Last Closed: 2012-12-04 19:58:28 UTC
oVirt Team: Infra
Target Upstream Version:


Attachments
Patched dwh_history_aggregate_level Stored Procedure (6.61 KB, text/plain), 2012-01-13 15:16 UTC, Juan J. Cavallaro

Description Juan J. Cavallaro 2012-01-13 15:02:49 UTC
> Description of problem:
Stored Procedure dbo.dwh_history_aggregate_level fails to execute successfully.

> Version-Release number of selected component (if applicable):
RHEV-M 4.5.4.51796 (2.2.4)

> How reproducible:
Always.

> Steps to Reproduce:
1. Insert several records into the dbo.vm_disk_history table so that SUM(read_rate) and/or SUM(write_rate) exceeds 2^32.
2. Execute the SP with @AggrStart and @AggrGap providing a range such that SUM(read_rate) and/or SUM(write_rate) still exceeds 2^32.
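A minimal sketch that reproduces the overflow in isolation (a hypothetical temp table standing in for dbo.vm_disk_history; only the read_rate column is modelled):

--
CREATE TABLE #overflow_repro (read_rate int);
INSERT INTO #overflow_repro (read_rate) VALUES (2147483647);
INSERT INTO #overflow_repro (read_rate) VALUES (2147483647);
INSERT INTO #overflow_repro (read_rate) VALUES (2147483647);
-- Fails with error 8115: the intermediate SUM over the int column overflows int
SELECT AVG(read_rate) FROM #overflow_repro;
DROP TABLE #overflow_repro;
--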
  
> Actual results:
Error 8115, Level 16, State 2, Procedure dwh_history_aggregate_level, Line 87, Message: Arithmetic overflow error converting expression to data type int.

> Expected results:
Successful execution of the SP.

> Additional info:

The issue is triggered by the following SQL statement in dwh_history_aggregate_level, lines 90-97:

--
insert into vm_disk_history 
select @Aggregeation_level, @AggrStart,
vm_guid, image_guid,MAX(actual_size) as actual_size_max,
MAX(size) as size,disk_interface,disk_type,imageStatus,
AVG(read_rate) AS read_rate_avg, AVG(write_rate) AS write_rate_avg
FROM		dbo.vm_disk_history
WHERE history_datetime>=@AggrStart and history_datetime<@AggrEnd
GROUP BY vm_guid, image_guid,imageStatus,disk_interface,disk_type
--

Columns read_rate and write_rate on table vm_disk_history are of type int. According to http://msdn.microsoft.com/en-us/library/ms177677%28v=SQL.90%29.aspx:

'The return type is determined by the type of the evaluated result of expression.'

So for integer data types, the return type will be 'int'.

As AVG () will attempt to return an 'int' when the input data type is int, and the sum of all read_rate values exceeds the 4-byte int range (see http://msdn.microsoft.com/en-us/library/ms187745%28v=SQL.100%29.aspx for data type limits), the call fails with the arithmetic overflow error shown above.

Furthermore, the following is mentioned for SQL Server 2008's AVG () (http://msdn.microsoft.com/en-us/library/ms177677%28v=SQL.100%29.aspx):

'AVG () computes the average of a set of values by dividing the sum of those values by the count of nonnull values. If the sum exceeds the maximum value for the data type of the return value an error will be returned.'
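
The return type can be checked directly; an illustrative query (hypothetical values, not taken from the schema) showing that AVG () over int yields int, while casting the input to bigint widens both the intermediate sum and the result:

--
SELECT SQL_VARIANT_PROPERTY(AVG(read_rate), 'BaseType') AS avg_return_type,                  -- 'int'
       SQL_VARIANT_PROPERTY(AVG(CAST(read_rate AS bigint)), 'BaseType') AS cast_return_type  -- 'bigint'
FROM (SELECT 1 AS read_rate UNION ALL SELECT 2 UNION ALL SELECT 3) AS t;
--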

Comment 1 Juan J. Cavallaro 2012-01-13 15:08:52 UTC
Workaround:

Replace line 94:

AVG(read_rate) AS read_rate_avg, AVG(write_rate) AS write_rate_avg

With

AVG(cast(read_rate as bigint)) AS read_rate_avg, AVG(cast(write_rate as bigint)) AS write_rate_avg

Attaching proposed patch.
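
For reference, the aggregation statement quoted above (SP lines 90-97) with the workaround applied would read roughly as follows. Note that only the intermediate SUM overflows; the resulting average can never exceed the largest input value, so it still fits the int columns of vm_disk_history.

--
insert into vm_disk_history 
select @Aggregeation_level, @AggrStart,
vm_guid, image_guid,MAX(actual_size) as actual_size_max,
MAX(size) as size,disk_interface,disk_type,imageStatus,
AVG(cast(read_rate as bigint)) AS read_rate_avg, AVG(cast(write_rate as bigint)) AS write_rate_avg
FROM		dbo.vm_disk_history
WHERE history_datetime>=@AggrStart and history_datetime<@AggrEnd
GROUP BY vm_guid, image_guid,imageStatus,disk_interface,disk_type
--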

Comment 2 Juan J. Cavallaro 2012-01-13 15:16:58 UTC
Created attachment 555083 [details]
Patched dwh_history_aggregate_level Stored Procedure

Comment 11 Yaniv Lavi 2012-01-25 15:11:47 UTC
A fix for this issue was pushed upstream and is available at:
http://gerrit.ovirt.org/1254

Yaniv

Comment 35 David Botzer 2012-11-13 09:27:34 UTC
Fixed, 3.1/si24.1.
I ran this on 2 different setups for a couple of days with large Write Rate (bytes) values in the vm_disk_history views, and the data is correct and aggregated at the samples/hourly/daily levels.

