Bug 835647

Summary: Allow REST interface to query more than 60 points
Product: [Other] RHQ Project
Reporter: Elias Ross <genman>
Component: Monitoring, REST
Assignee: Heiko W. Rupp <hrupp>
Status: CLOSED CURRENTRELEASE
QA Contact: Mike Foley <mfoley>
Severity: unspecified
Priority: unspecified
Version: 4.4
CC: hrupp
Target Milestone: ---
Target Release: RHQ 4.8
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Last Closed: 2013-09-11 05:52:56 EDT
Type: Bug
Bug Blocks: 963734    

Description Elias Ross 2012-06-26 13:24:21 EDT
Description of problem:

See: https://community.jboss.org/thread/201667?tstart=0


Assume I don't use the 'raw' REST interface because I need older data. I still don't follow why the sixty-datapoint limitation comes into effect. Which of these modules is limiting the results? This is the call order as I see it:


    public List<List<MeasurementDataNumericHighLowComposite>> getMeasurementDataAggregatesForContext(long beginTime,
        long endTime, EntityContext context, int definitionId, int numDataPoints) throws MeasurementNotFoundException {

Again, if I request more than 60 datapoints, I end up with NaN values and only 60 points. I still don't see how this can happen, since there is no logic I can see that would mangle the result.

I brought this up in an older forum thread and was told this would be fixed in conjunction with better graphing. But the raw interface doesn't work for some of the cases I need.

The limitation appears to come from the RHQ_NUMBERS table, which only has 60 values...


The query goes like:

SELECT timestamp, max(av), max(peak), max(low) FROM (
   (SELECT timestamp, avg(value) as av, max(value) as peak, min(value) as low FROM (
      (SELECT beginTS as timestamp, value
      FROM (select 1340075965000 + (5040000 * i) as beginTS, i from RHQ_numbers where i < 120) n,

I noticed RHQ_numbers only has 60 values. So I ran the following SQL:

insert into RHQ_NUMBERS values(60);
insert into RHQ_NUMBERS values(119);

and I could now get back 120 datapoints, which is good. I don't suppose this is the recommended way to fix this? Yet it seems to work fine. (Why is RHQ_NUMBERS needed at all? I suppose it is a SQL thing.)



The numbers table should probably have 500-1000 values. I suppose it couldn't hurt to have more.

Ian Springer writes:
"RHQ_NUMBERS is referred to as a numbers table (see http://sqlblog.com/blogs/adam_machanic/archive/2006/07/12/you-require-a-numbers-table.aspx ). I don't see how it would hurt for us to add more numbers to the table - create a BZ for it."
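To see why the numbers table caps the result, here is a sketch of the arithmetic the SQL above performs, in plain Java (hypothetical names, not actual RHQ code): the table merely supplies the integers 0..n-1 that get multiplied into bucket start timestamps, so a 60-row table can never yield more than 60 buckets, no matter what "where i < 120" asks for.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch (assumed names, not RHQ code) of what joining against a numbers
// table does: beginTS = startTime + bucketWidth * i for each row i.
public class NumbersTableDemo {

    /** Simulates the rows of RHQ_NUMBERS, e.g. 0..59 in stock RHQ. */
    static List<Long> numbersTable(int rows) {
        List<Long> numbers = new ArrayList<>();
        for (long i = 0; i < rows; i++) {
            numbers.add(i);
        }
        return numbers;
    }

    /** Bucket start timestamps, one per qualifying row of the table. */
    static List<Long> bucketStarts(long startTime, long bucketWidth,
                                   int numDataPoints, int tableRows) {
        List<Long> starts = new ArrayList<>();
        for (long i : numbersTable(tableRows)) {
            if (i < numDataPoints) {          // the "where i < ?" predicate
                starts.add(startTime + bucketWidth * i);
            }
        }
        return starts;
    }

    public static void main(String[] args) {
        // 120 points requested, but a 60-row table caps the result at 60.
        System.out.println(bucketStarts(1340075965000L, 5040000L, 120, 60).size());
        // After extending the table to 120 rows, all 120 buckets appear.
        System.out.println(bucketStarts(1340075965000L, 5040000L, 120, 120).size());
    }
}
```

This also explains the workaround above: each `insert into RHQ_NUMBERS` adds one more possible bucket row.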
Comment 1 Heiko W. Rupp 2013-06-05 13:26:26 EDT
Removing the NUMBERS table from the subject, as it is no longer available/needed for the Cassandra backend.
Comment 2 Heiko W. Rupp 2013-06-06 07:10:27 EDT
master bfa32dc

This is now possible, as there is no longer a dependency on the RHQ_NUMBERS table.

In the Cassandra case, we already compute the buckets in memory, so this change more or less just passes a different number of buckets into that generator.
The default is still 60 in cases where no other number is given.
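The in-memory approach described here could look roughly like the following sketch (assumed names and structure, not the actual RHQ generator): divide [beginTime, endTime) into numDataPoints equal buckets, defaulting to 60 when the caller does not supply a count.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch (hypothetical names) of in-memory bucket generation:
// no numbers table involved, so any bucket count works.
public class BucketGenerator {

    static final int DEFAULT_NUM_DATA_POINTS = 60;

    /** Returns one [start, end) interval per bucket. */
    static List<long[]> buckets(long beginTime, long endTime, Integer numDataPoints) {
        int n = (numDataPoints == null) ? DEFAULT_NUM_DATA_POINTS : numDataPoints;
        long width = (endTime - beginTime) / n;
        List<long[]> result = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            result.add(new long[] { beginTime + i * width, beginTime + (i + 1) * width });
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(buckets(0L, 6_000_000L, null).size()); // default: 60
        System.out.println(buckets(0L, 6_000_000L, 120).size());  // caller-chosen: 120
    }
}
```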
Comment 3 Heiko W. Rupp 2013-09-11 05:52:56 EDT
Bulk closing of old issues now that RHQ 4.9 is around the corner.

If you think the issue has not been solved, then please open a new bug and mention this one in the description.