Bug 1257798 - Clusters Capacity report not working correctly
Summary: Clusters Capacity report not working correctly
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine-reports
Version: 3.6.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Shirly Radco
QA Contact: Karolína Hajná
URL:
Whiteboard: infra
Depends On:
Blocks: 1112217
 
Reported: 2015-08-28 05:53 UTC by Karolína Hajná
Modified: 2016-02-10 19:16 UTC
CC: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1257797
Environment:
Last Closed: 2015-09-01 08:10:12 UTC
oVirt Team: Infra
Target Upstream Version:
Embargoed:


Description Karolína Hajná 2015-08-28 05:53:50 UTC
+++ This bug was initially created as a clone of Bug #1257797 +++

Description of problem:
The behaviour of this report is quite unpredictable. I added 1 - 3 hosts to my engine; each has 4 CPU cores and 4 678 MB of RAM. After adding the first host, the report showed that the host has 0 CPUs and 0 RAM. I tried to refresh and relog a few times, still with the same result. Then the values suddenly changed to 16 CPUs (while the host has 4) and 4 GB of RAM. I would understand that it takes some time to load the new data, but in that case the correct number of hosts shouldn't be shown yet either. Adding the other two hosts confirmed 16 CPUs per host. The RAM value is correct, but it might be better to show one decimal digit as well, since the value is truncated to whole GBs.
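
For illustration only, a minimal Python sketch (not taken from the report's actual code; all names and the socket/thread values below are invented) of the two display issues above: the suggested one-decimal RAM formatting, and one hypothetical way a 4-core host could end up reported as 16 CPUs.

def format_ram_gb(ram_mb: int) -> str:
    """Show RAM with one decimal digit instead of truncating to whole GBs."""
    return f"{ram_mb / 1024:.1f} GB"

# Truncation hides more than half a GB of the host's 4 678 MB:
assert int(4678 / 1024) == 4             # current behaviour: "4 GB"
assert format_ram_gb(4678) == "4.6 GB"   # suggested behaviour

# Hypothetical (made-up) way a report query could turn 4 cores into 16 CPUs,
# e.g. by also multiplying by assumed socket and thread counts:
cores, sockets, threads_per_core = 4, 2, 2
assert cores * sockets * threads_per_core == 16  # matches the symptom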

When I added two VMs, one running and one powered off, the disk section showed correct data while the other sections showed no VMs at all. Data in these sections only appeared after I added a host to a new cluster (and was still incorrect in some cases). Also, there were no VMs on the new host, yet when I used the "Display by cluster" option, it showed the VM counts from the first cluster.
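
For illustration only, a minimal Python sketch (invented data and names, not the report's actual query) of the kind of missing per-cluster filter that would make "Display by cluster" show the first cluster's VM counts for a cluster that has no VMs:

vms = [
    {"name": "vm1", "cluster": "cluster-a", "status": "up"},
    {"name": "vm2", "cluster": "cluster-a", "status": "down"},
]  # cluster-b exists but has no VMs

def vm_count_buggy(all_vms, cluster):
    # Bug: the requested cluster is ignored, so every cluster gets the
    # global (i.e. first cluster's) count.
    return len(all_vms)

def vm_count_fixed(all_vms, cluster):
    return sum(1 for vm in all_vms if vm["cluster"] == cluster)

assert vm_count_buggy(vms, "cluster-b") == 2  # symptom described above
assert vm_count_fixed(vms, "cluster-b") == 0  # expected result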


Version-Release number of selected component (if applicable):
3.6.0-0.12.master.el6

How reproducible:
100%

Actual results:
Chaotic data in report

Expected results:
Correct data in report

Comment 1 Yaniv Lavi 2015-09-01 08:10:12 UTC
No need to clone bugs to downstream; we will track this issue in BZ #1257797.

Comment 2 Lukas Svaty 2016-01-05 07:27:39 UTC
As this bug is closed and we have an upstream bug for reports, do you want to block BZ#1112217 (the RFE where this happened) with BZ#1132487 (the upstream clone of this)?

Isn't it better to leave this bug open and close the upstream bug, since the bug is on a customer-related feature?

Comment 3 Lukas Svaty 2016-01-05 07:29:47 UTC
Too quick on the copy-paste, sorry.

Correct bug IDs:
RFE (downstream): https://bugzilla.redhat.com/show_bug.cgi?id=1112217
Bug (upstream): https://bugzilla.redhat.com/show_bug.cgi?id=1257797

Comment 4 Yaniv Lavi 2016-01-05 11:39:47 UTC
(In reply to Lukas Svaty from comment #2)
> As this bug is closed and we have an upstream bug for reports, do you
> want to block BZ#1112217 (the RFE where this happened) with BZ#1132487
> (the upstream clone of this)?
> 
> Isn't it better to leave this bug open and close the upstream bug,
> since the bug is on a customer-related feature?

We can block the downstream RFE with the upstream bug. I'll do that.

Comment 5 Yaniv Lavi 2016-01-05 11:42:23 UTC
(In reply to Yaniv Dary from comment #4)
> (In reply to Lukas Svaty from comment #2)
> > As this bug is closed and we have an upstream bug for reports, do
> > you want to block BZ#1112217 (the RFE where this happened) with
> > BZ#1132487 (the upstream clone of this)?
> > 
> > Isn't it better to leave this bug open and close the upstream bug,
> > since the bug is on a customer-related feature?
> 
> We can block the downstream RFE with the upstream bug. I'll do that.

Sorry, but the state was correct:
The downstream RFE depends on the upstream RFE, which depends on the bugs. That's not an issue; it's valid.

