Bug 835647 - Allow REST interface to query more than 60 points
Status: CLOSED CURRENTRELEASE
Product: RHQ Project
Classification: Other
Component: Monitoring, REST
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Severity: unspecified
Target Release: RHQ 4.8
Assigned To: Heiko W. Rupp
QA Contact: Mike Foley
Blocks: 963734
Reported: 2012-06-26 13:24 EDT by Elias Ross
Modified: 2013-09-11 05:52 EDT
Doc Type: Bug Fix
Type: Bug
Last Closed: 2013-09-11 05:52:56 EDT
Attachments: None
Description Elias Ross 2012-06-26 13:24:21 EDT
Description of problem:

See: https://community.jboss.org/thread/201667?tstart=0

>>>

Assume I don't use the 'raw' REST interface because I need older data. I still don't follow why the sixty-datapoint limitation comes into effect. Which of these modules is limiting the results? This is the call order as I see it:

...
src/main/java/org/rhq/enterprise/server/measurement/util/MeasurementDataManagerUtility.java

    public List<List<MeasurementDataNumericHighLowComposite>> getMeasurementDataAggregatesForContext(long beginTime,
        long endTime, EntityContext context, int definitionId, int numDataPoints) throws MeasurementNotFoundException {

Again, if I request more than 60 datapoints, I end up with NaN values and only 60 points. But I still don't see how this can happen, since there is no logic I can see that would mangle the results.

I brought this up in an older forum thread and was told this would be fixed in conjunction with better graphing. But the raw interface doesn't work for some of the cases I need.

The limitation appears to be coming from RHQ_NUMBERS table, which only has 60 values...

<<<
 

The query goes like:


SELECT timestamp, max(av), max(peak), max(low) FROM (
   (SELECT timestamp, avg(value) as av, max(value) as peak, min(value) as low FROM (
      (SELECT beginTS as timestamp, value
      FROM (select 1340075965000 + (5040000 * i) as beginTS, i from RHQ_numbers where i < 120) n,
...

I noticed RHQ_numbers only has 60 values. So I ran the following SQL:


insert into RHQ_NUMBERS values(60);
...
insert into RHQ_NUMBERS values(119);

and I could now get back 120 datapoints, which is good. I don't suppose this is the recommended approach to fixing this? Yet it seems to work fine. (Why is RHQ_numbers needed at all? I suppose it is a SQL thing.)

...

<<<

The numbers table should probably have 500-1000 values; it couldn't hurt, I suppose, to have more.

Ian Springer writes:
"RHQ_NUMBERS is referred to as a numbers table (see http://sqlblog.com/blogs/adam_machanic/archive/2006/07/12/you-require-a-numbers-table.aspx ). I don't see how it would hurt for us to add more numbers to the table - create a BZ for it."
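The manual workaround above can be scripted rather than typed row by row. A minimal sketch (a hypothetical helper, not part of RHQ) that generates the same INSERT statements for an arbitrary range:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper, not part of the RHQ source: generate the INSERT
// statements used in the workaround above, one per row in [from, toExclusive).
public final class SeedRhqNumbers {
    public static List<String> insertStatements(int from, int toExclusive) {
        List<String> stmts = new ArrayList<>();
        for (int i = from; i < toExclusive; i++) {
            stmts.add("insert into RHQ_NUMBERS values(" + i + ");");
        }
        return stmts;
    }
}
```

insertStatements(60, 120) reproduces the sixty statements shown above; insertStatements(60, 1000) would extend the table toward the 500-1000 rows suggested earlier.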
Comment 1 Heiko W. Rupp 2013-06-05 13:26:26 EDT
Removing the NUMBERS table from the subject, as it is no longer available/needed for the Cassandra backend.
Comment 2 Heiko W. Rupp 2013-06-06 07:10:27 EDT
master bfa32dc

This was now possible, as there is no more dependency on the RHQ_NUMBERS table.

In the Cassandra case, we already computed the buckets in memory, so this change more or less just passes a different number of buckets into this generator.
The default is still 60 in cases where no other count is given.
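The in-memory bucket generation described above can be sketched as follows (class and method names are assumptions, not the actual RHQ code): divide [beginTime, endTime) into numDataPoints equal buckets, falling back to 60 when no count is given, just as the old SQL computed beginTS + (interval * i) per row of RHQ_NUMBERS.

```java
// Sketch only: an in-memory bucket generator along the lines described above.
// Class and method names are assumptions, not the RHQ source.
public final class TimeBuckets {
    public static final int DEFAULT_POINTS = 60; // fallback when no count is given

    // Returns the start timestamp of each bucket covering [beginTime, endTime).
    public static long[] bucketStarts(long beginTime, long endTime, int numDataPoints) {
        if (numDataPoints <= 0) {
            numDataPoints = DEFAULT_POINTS;
        }
        long interval = (endTime - beginTime) / numDataPoints; // width of one bucket
        long[] starts = new long[numDataPoints];
        for (int i = 0; i < numDataPoints; i++) {
            starts[i] = beginTime + interval * i; // same shape as beginTS + (5040000 * i)
        }
        return starts;
    }
}
```

Requesting 120 points over the same range simply halves the bucket width; nothing in the generator itself caps the count at 60.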
Comment 3 Heiko W. Rupp 2013-09-11 05:52:56 EDT
Bulk closing of old issues now that RHQ 4.9 is around the corner.

If you think the issue has not been solved, then please open a new bug and mention this one in the description.
