Bug 1481246

Summary: VM does not consume any hugepages on a host
Product: [oVirt] ovirt-engine Reporter: Artyom <alukiano>
Component: BLL.Virt    Assignee: Francesco Romani <fromani>
Status: CLOSED CURRENTRELEASE QA Contact: Artyom <alukiano>
Severity: high Docs Contact:
Priority: unspecified    
Version: 4.2.0    CC: ahadas, alukiano, bugs, mavital, tjelinek
Target Milestone: ovirt-4.2.0    Keywords: Triaged
Target Release: ---    Flags: rule-engine: ovirt-4.2+
alukiano: testing_plan_complete-
rule-engine: planning_ack+
tjelinek: devel_ack+
mavital: testing_ack+
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2017-12-20 11:13:10 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: Virt RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1457239, 1461476    
Attachments:
Description    Flags
engine log     none
vdsm log       none

Description Artyom 2017-08-14 12:32:35 UTC
Created attachment 1313093 [details]
engine log

Description of problem:
VM does not consume any hugepages on a host

Version-Release number of selected component (if applicable):
ovirt-engine-4.2.0-0.0.master.20170811144920.gita423008.el7.centos.noarch

How reproducible:
Always

Steps to Reproduce:
1. Configure hugepages on the host (see the sketch after these steps):
HugePages_Total:    2048
HugePages_Free:     2048
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
2. Create a VM with 2 GB of memory and add the custom property hugepages with the value 2048 (page size in KiB) to it
3. Start VM
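
A rough sketch of step 1 on the host (illustrative; making the setting persistent via sysctl.d is left out):

# sysctl vm.nr_hugepages=2048
# grep Huge /proc/meminfo

For step 2, the hugepages custom property is set on the VM itself (e.g. Admin Portal: Edit VM > Custom Properties > hugepages = 2048, where the value is the page size in KiB).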

Actual results:
The VM does not have any hugepages setting in its domain XML (the following returns nothing):
# virsh -r dumpxml 2 | grep -i huge
and so it does not consume any hugepages on the host.

Expected results:
The VM has the hugepages setting in its domain XML and consumes hugepages on the host.
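
For reference, a hugepages-backed guest is expected to carry a <memoryBacking> element in its domain XML, roughly like the following for the 2 MiB pages above (illustrative output; the <page> element may also be omitted for the default page size):

# virsh -r dumpxml 2 | grep -i -A4 memorybacking
  <memoryBacking>
    <hugepages>
      <page size='2048' unit='KiB'/>
    </hugepages>
  </memoryBacking>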

Additional info:

Comment 1 Michal Skrivanek 2017-08-14 12:50:43 UTC
Please attach the vdsm log as well, and specify the time frame and the name of the VM.

Comment 2 Artyom 2017-08-14 13:04:08 UTC
Created attachment 1313103 [details]
vdsm log

You can start looking from 
2017-08-14 15:30:22,607+03 INFO  [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (org.ovirt.thread.EE-ManagedThreadFactory-engine-Thread-18) [bef6940e-242f-4588-ac22-5125535808ca] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='ed843075-f8d0-4f1b-86f4-326f8de72275', vmId='0dc2a097-0347-4cbc-af10-a6b72ef69018', vm='VM [golden_env_mixed_virtio_0]'}), log id: 5714e3a2

Comment 3 Tomas Jelinek 2017-09-04 07:34:11 UTC
seems like another regression of the libvirt XML flow

Comment 4 Francesco Romani 2017-09-11 08:46:39 UTC
(In reply to Tomas Jelinek from comment #3)
> seems like another regression of the libvirt XML flow

Yes.
The Vdsm side should be fixed now with https://gerrit.ovirt.org/#/c/81610/ .
The bug is that Vdsm doesn't do the hugepages preparation in the new engine XML flow; the cleanup stage should be OK.
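
For context, the host-side "hugepages preparation" roughly amounts to growing the per-size pool before the guest starts; on a host using 2 MiB pages that boils down to something like the following (illustrative only, the real code path is in the gerrit change above):

# echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# grep HugePages_ /proc/meminfo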

To fix this BZ, we need to make sure that the Engine side works as well; it should (see the sketch after this list):
1. send the custom properties encoded in the domain metadata (same values as before, let's just put them in the metadata)
2. do the same additions to the domain XML that Vdsm used to.
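
Once both are in place, the engine-generated domain XML would carry something along these lines (the element names under the ovirt-vm metadata namespace are illustrative, not the exact schema):

  <metadata>
    <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
      <ovirt-vm:custom>
        <ovirt-vm:hugepages>2048</ovirt-vm:hugepages>
      </ovirt-vm:custom>
    </ovirt-vm:vm>
  </metadata>
  ...
  <memoryBacking>
    <hugepages/>
  </memoryBacking>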

CC Arik to make sure he's aware

Comment 5 Francesco Romani 2017-09-22 16:27:24 UTC
The Vdsm patch is merged, but the Engine still needs to send the custom properties in the domain XML metadata; once that is done we can move this to MODIFIED.

Comment 6 Arik 2017-09-22 17:58:20 UTC
The Engine side is merged as well.

Comment 7 Francesco Romani 2017-09-25 11:53:08 UTC
Regression caused by the new domain XML flow; it doesn't need a doc text.

Comment 8 Artyom 2017-09-26 08:37:33 UTC
Verified on ovirt-engine-4.2.0-0.0.master.20170921184504.gitfcfc9a7.el7.centos.noarch

Comment 9 Sandro Bonazzola 2017-12-20 11:13:10 UTC
This bugzilla is included in the oVirt 4.2.0 release, published on Dec 20th 2017.

Since the problem described in this bug report should be resolved in that release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.