Bug 1283649

Summary: VMs main tab: Memory sparkline chart alternates between real memory consumption and 99%
Product: [oVirt] ovirt-engine
Reporter: Karel Benes <karel.benes>
Component: Frontend.WebAdmin
Assignee: bugs <bugs>
Status: CLOSED CURRENTRELEASE
QA Contact: Pavel Stehlik <pstehlik>
Severity: medium
Docs Contact:
Priority: medium
Version: 3.6.0
CC: bugs, karel.benes, mgoldboi, michal.skrivanek, tjelinek, ykaul
Target Milestone: ovirt-4.0.0-beta
Flags: michal.skrivanek: ovirt-4.0.0?
       rule-engine: planning_ack?
       rule-engine: devel_ack?
       rule-engine: testing_ack?
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-05-20 08:39:59 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Virt
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
Description                  Flags
VMs memory chart             none
Flapping VMs memory chart    none
Engine logs and VDSM logs    none

Description Karel Benes 2015-11-19 14:03:15 UTC
Description of problem:

On the dashboard, the Memory chart often alternates between the real memory consumption and 99%.

Version-Release number of selected component (if applicable):

oVirt Engine Version: 3.6.0.3-1.el7.centos

How reproducible:


Steps to Reproduce:
1. See the attached screenshot.

Actual results:

An unrealistic value of 99% is shown in the chart; the curve does not correspond to the real memory allocation.

Expected results:

The percentage shown corresponds to the actual memory consumption.

Additional info:

It looks like a visualisation bug.

Comment 1 Yaniv Kaul 2015-11-20 08:08:42 UTC
Can you please attach engine and VDSM logs?

Comment 2 Einav Cohen 2015-11-20 12:58:47 UTC
(In reply to Yaniv Kaul from comment #1)
> Can you please attach engine and VDSM logs?

And a screenshot, please (the previously attached screenshot seems to have been deleted). Thanks.

Comment 3 Karel Benes 2015-11-20 13:07:31 UTC
Created attachment 1097190 [details]
Flapping VMs memory chart

Here is an oVirt Engine screenshot.

Comment 4 Karel Benes 2015-11-20 13:11:22 UTC
Created attachment 1097191 [details]
Engine logs and VDSM logs

Attached are the engine logs and the VDSM logs.

Comment 5 Michal Skrivanek 2016-01-15 13:29:54 UTC
It may not necessarily be unreal. Do you use memory ballooning?

Comment 6 Red Hat Bugzilla Rules Engine 2016-01-15 13:29:56 UTC
Bug tickets must have version flags set prior to targeting them to a release. Please ask the maintainer to set the correct version flags and only then set the target milestone.

Comment 7 Karel Benes 2016-01-29 13:59:30 UTC
Ballooning? Yes, we do use it.

Comment 8 Michal Skrivanek 2016-01-29 14:46:51 UTC
We would need at least one example where it doesn't correspond to what you see in the guest.
High overcommit and a lot of dynamic changes going on might be the reason... and those would be correct values.

Try to reproduce without a balloon device in the VM (then the guest "sees" the same amount of available RAM all the time, and there is no backpressure when the host runs out of RAM).
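
For illustration, a minimal sketch of the effect described above (plain Python; this is not oVirt or VDSM code, and the function name and all numbers are illustrative assumptions):

# A minimal sketch (not oVirt/VDSM code) of the ballooning effect
# described above. The function name and numbers are assumptions.

def guest_usage_percent(total_mb, used_mb, balloon_mb=0):
    # Memory claimed by the balloon is unavailable to the guest, so
    # the same workload occupies a larger share of what remains.
    available_mb = total_mb - balloon_mb
    return round(100.0 * used_mb / available_mb)

# Without a balloon device the guest always sees the full 4096 MB:
print(guest_usage_percent(4096, 1015))                   # 25 (%)

# Under host memory pressure the balloon inflates to 3072 MB; the
# same ~1 GB workload now legitimately reads as 99%:
print(guest_usage_percent(4096, 1015, balloon_mb=3072))  # 99 (%)

With the balloon inflating and deflating as host pressure changes, a chart sampling this value would alternate between the low and the ~99% readings, which is why the values may be correct rather than a visualisation bug.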

Comment 9 Sandro Bonazzola 2016-05-02 10:03:32 UTC
Moving from 4.0 alpha to 4.0 beta, since 4.0 alpha has already been released and this bug is not ON_QA.

Comment 10 Moran Goldboim 2016-05-19 08:12:05 UTC
(In reply to Michal Skrivanek from comment #8)
> We would need at least one example where it doesn't correspond to what
> you see in the guest.
> High overcommit and a lot of dynamic changes going on might be the
> reason... and those would be correct values.
>
> Try to reproduce without a balloon device in the VM (then the guest
> "sees" the same amount of available RAM all the time, and there is no
> backpressure when the host runs out of RAM).

Karel, can you please answer comment 8? Is it still happening?
We would appreciate your feedback here before closing this bug.

Thanks.

Comment 11 Karel Benes 2016-05-19 08:28:31 UTC
The current version, oVirt Engine 3.6.5.3, looks OK.

Comment 12 Tomas Jelinek 2016-05-20 08:39:59 UTC
Thank you, Karel, for the feedback! Closing as fixed in the current release.