Bug 1102651

Summary: High shared memory being reported on hypervisor
Product: Red Hat Enterprise Virtualization Manager
Reporter: Chris Pelland <cpelland>
Component: vdsm
Assignee: Martin Sivák <msivak>
Status: CLOSED ERRATA
QA Contact: Nikolai Sednev <nsednev>
Severity: high
Priority: high
Version: 3.3.0
CC: asegundo, asegurap, bazulay, cpelland, danken, dfediuck, eedri, fdeutsch, iheim, knesenko, lbopf, lpeer, lyarwood, michal.skrivanek, msivak, scohen, sherold, wdaniel, yeylon
Target Milestone: ---
Keywords: Triaged, ZStream
Target Release: 3.3.4
Hardware: x86_64
OS: Linux
Whiteboard: sla
Fixed In Version: vdsm-4.13.2-0.18.el6ev
Doc Type: Bug Fix
Doc Text:
Previously, hypervisors reported what appeared to be very high shared memory usage even when memory page sharing wasn't enabled. This happened because MOM reports the ksm_pages_sharing value in raw pages, while both VDSM and the Manager expect megabytes. Now, the memShared value reported by MOM undergoes the necessary conversion so that VDSM and the Manager display it in MiB.
Story Points: ---
Clone Of: 1072030
Last Closed: 2014-07-09 11:51:54 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: SLA
Cloudforms Team: ---
Bug Depends On: 1072030
Bug Blocks: 1102650
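
For context, the fix described in the doc text is a simple unit conversion. Below is a minimal sketch in Python of what such a conversion looks like; the helper name and the example value are illustrative, not the actual VDSM patch:

    import os

    # Kernel page size in bytes; typically 4096 on x86_64.
    PAGE_SIZE = os.sysconf('SC_PAGE_SIZE')

    def ksm_pages_to_mib(pages_sharing):
        # MOM reports ksm_pages_sharing as a raw page count; VDSM and
        # the Manager expect memShared in MiB, so convert
        # pages -> bytes -> MiB.
        return (pages_sharing * PAGE_SIZE) // (1024 * 1024)

    # Without the conversion, a raw page count such as 262144 would be
    # passed through and shown as 262144 MiB (256 GiB) of "shared"
    # memory instead of the correct 1024 MiB.
    print(ksm_pages_to_mib(262144))  # -> 1024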

Comment 2 Nikolai Sednev 2014-07-01 12:14:33 UTC
I need to know which rhev-hypervisor package should be used, i.e. which build contains vdsm-4.13.2-0.18.el6ev.

Comment 3 Kiril Nesenko 2014-07-02 07:27:51 UTC
(In reply to Nikolai Sednev from comment #2)
> I need to know which rhev-hypervisor package should be used, i.e. which
> build contains vdsm-4.13.2-0.18.el6ev.

See build id - is37. Take it from there.

- Kiril

Comment 4 Eyal Edri 2014-07-02 07:52:33 UTC
Fabian, last we talked you said we're not building RHEV-H for 3.3.z anymore.
Can you advise how this bug should be verified? With 3.4 RHEV-H?

Comment 5 Fabian Deutsch 2014-07-02 07:54:48 UTC
If the vdsm patch is also available in the 3.4 version then I expect that you can test it with the latest 3.4 RHEV-H build.

Comment 6 Nikolai Sednev 2014-07-02 09:45:12 UTC
Hi Martin,
Please provide me with the exact RHEV-H package number to be used for verification of this bug, as vdsm-4.13.2-0.18.el6ev is not contained within rhev-hypervisor6-6.5-20140603.1.el6ev.

Comment 7 Fabian Deutsch 2014-07-02 10:22:22 UTC
Another way is to verify this on a "fat" RHEL host with the required vdsm version.

The verification should not be tied to RHEV-H as it's a vdsm bug.

Comment 9 Martin Sivák 2014-07-02 13:35:14 UTC
This was merged to 3.5, 3.4, and the downstream-only 3.3 branch (vdsm-4.13.2-0.18). You should be able to take the proper RPM and install it somewhere.

I have no idea which products and ISOs got the builds.

Comment 10 Nikolai Sednev 2014-07-02 14:25:27 UTC
Failed to reproduce; works for me.

The setup consists of two hosts, one RHEL 6.5 and one RHEV-H 6.5, plus 4 VMs cloned from a RHEL 7.0 guest, all running. I added the RHEV-H host to the cluster and didn't reproduce the bug, with or without migrating all 4 VMs.

Tested on RHEVM rhevm-3.3.4-0.53.el6ev.noarch; RHEL 6.5 and RHEV-H host versions below.

RHEVH:
vdsm-4.14.7-3.el6ev.x86_64
Red Hat Enterprise Virtualization Hypervisor release 6.5 (20140603.2.el6ev)
sanlock-2.8-1.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.415.el6_5.10.x86_64
libvirt-0.10.2-29.el6_5.8.x86_64

RHEL6.5:
sanlock-2.8-1.el6.x86_64
libvirt-0.10.2-29.el6_5.9.x86_64
qemu-kvm-rhev-0.12.1.2-2.415.el6_5.11.x86_64
vdsm-4.14.7-4.el6ev.x86_64

[root@alma03 ~]# vdsClient -s 0 getVdsStats | grep mem
        memAvailable = 29958
        memCommitted = 1089
        memFree = 30307
        memShared = 0
        memUsed = '6'

[root@lilach-vdsb ~]# vdsClient -s 0 getVdsStats | grep mem
        memAvailable = 2728
        memCommitted = 4356
        memFree = 5506
        memShared = 0
        memUsed = '29'
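
As a cross-check during this kind of verification, the value VDSM reports can be compared against the kernel's own KSM counter. A minimal sketch, assuming the standard Linux sysfs file /sys/kernel/mm/ksm/pages_sharing (illustrative, not part of the formal test plan):

    import os

    # Raw count of pages currently shared via KSM.
    with open('/sys/kernel/mm/ksm/pages_sharing') as f:
        pages_sharing = int(f.read())

    page_size = os.sysconf('SC_PAGE_SIZE')
    expected_mib = (pages_sharing * page_size) // (1024 * 1024)

    # With page sharing disabled, both this value and the memShared
    # field from `vdsClient -s 0 getVdsStats` should be 0.
    print('expected memShared (MiB):', expected_mib)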

Comment 12 errata-xmlrpc 2014-07-09 11:51:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0864.html