Bug 974917
| Field | Value |
| --- | --- |
| Summary | Can't launch VM with 2048 GB memory but 2047 GB is OK. |
| Product | Red Hat Enterprise Virtualization Manager |
| Reporter | Hajime Taira <htaira> |
| Component | ovirt-engine |
| Assignee | Martin Sivák <msivak> |
| Status | CLOSED CURRENTRELEASE |
| QA Contact | Tareq Alayan <talayan> |
| Severity | unspecified |
| Priority | unspecified |
| Version | 3.2.0 |
| CC | acathrow, cpelland, danken, dfediuck, htaira, iheim, jkt, lpeer, michal.skrivanek, msivak, pstehlik, Rhev-m-bugs, yeylon |
| Target Milestone | --- |
| Keywords | ZStream |
| Target Release | 3.3.0 |
| Hardware | x86_64 |
| OS | Linux |
| Whiteboard | sla |
| Doc Type | Bug Fix |
| Story Points | --- |
| Clones | 985973 (view as bug list) |
| Last Closed | 2014-01-21 22:14:20 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| Category | --- |
| oVirt Team | SLA |
| Cloudforms Team | --- |
| Bug Blocks | 985973 |
Description
Hajime Taira
2013-06-17 06:42:25 UTC
If we can't fix this issue before RHEV 3.3, please cap the memory size limit in the RHEV-M web admin and user portals as a workaround for RHEV 3.2.z.

Would you attach the vmCreate call (and return) from vdsm.log? I would like to understand where we report/expect memory size in KiB and overflow the 31-bit limit of XML-RPC integers.

(We have a worse issue with setting/reporting the balloon size, where we use an integer with bytes as units: the balloon can grow no bigger than 2 GiB.)

(In reply to Dan Kenigsberg from comment #2)
> Would you attach the vmCreate call (and return) from vdsm.log? I would like
> to understand where we report/expect memory size in KiB and overflow the
> 31-bit limit of XML-RPC integers.
>
> (We have a worse issue with setting/reporting the balloon size, where we use
> an integer with bytes as units: the balloon can grow no bigger than 2 GiB.)

Thanks Dan. Since the balloon wasn't in use so far, we can handle this without backwards compatibility issues.

Created attachment 763621 [details]
packed log file about engine.log and vdsm.log
Hi Dan,

Please see the attached log files.

First, I set the memory size of RHEL64-2TB (84178847-54ac-4ea0-adc3-65ec16fe70a0) to 2097151 MB. With that setting, RHEL64-2TB booted successfully.

Then I changed the memory size of RHEL64-2TB (84178847-54ac-4ea0-adc3-65ec16fe70a0) to 2097152 MB. This time RHEL64-2TB failed to boot, but the qemu-kvm process was still running on RHEV-H. Finally, I killed the qemu-kvm process manually.
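For reference, a minimal sketch of why 2097152 MB is exactly the first size that breaks: XML-RPC's <int> is a signed 32-bit value, and vdsm expresses memory in KiB on some paths (and the balloon in bytes). This is illustration, not vdsm code; it uses Python 3's xmlrpc.client, which enforces the same limit as the Python 2 xmlrpclib that vdsm used at the time.

```python
# Sketch of the 31-bit XML-RPC limit described in comment 2.
import xmlrpc.client

INT32_MAX = 2**31 - 1            # 2147483647, the largest XML-RPC <int>

ok_kib = 2097151 * 1024          # 2147482624 KiB (2097151 MB) -> fits
bad_kib = 2097152 * 1024         # 2147483648 KiB (2097152 MB) -> one past the limit
assert ok_kib <= INT32_MAX < bad_kib

# The same boundary explains the balloon remark: balloon sizes are in bytes,
# so anything over 2 GiB (2 * 1024**3 = 2147483648 bytes) cannot be marshalled.
assert 2 * 1024**3 > INT32_MAX

xmlrpc.client.dumps((ok_kib,))   # marshals fine
try:
    xmlrpc.client.dumps((bad_kib,))
except OverflowError as err:
    print(err)                   # "int exceeds XML-RPC limits"
```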
Oh, it seems that the balloon issue *is* our issue: vmCreate succeeds, but subsequent calls to getVmStats return a balloon size that is too big to digest:

Thread-195703::DEBUG::2013-06-21 01:23:01,049::BindingXMLRPC::920::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Powering up', 'username': 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid': '23104', 'displayIp': '0', 'displayPort': u'5901', 'session': 'Unknown', 'displaySecurePort': '-1', 'timeOffset': '-2', 'hash': '8092988585526967060', 'balloonInfo': {'balloon_max': 2147483648, 'balloon_cur': 2147483648}, 'pauseCode': 'NOERR', 'clientIp': '', 'kvmEnable': 'true', 'network': {u'vnet1': {'macAddr': '00:1a:4a:a8:7a:09', 'rxDropped': '0', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet1'}}, 'vmId': '84178847-54ac-4ea0-adc3-65ec16fe70a0', 'displayType': 'vnc', 'cpuUser': '42.34', 'disks': {u'vda': {'truesize': '107374182400', 'apparentsize': '107374182400', 'imageID': 'dc55cddb-84cb-4c0f-9095-ca244df6ddde'}, u'hdc': {'truesize': '0', 'apparentsize': '0'}}, 'monitorResponse': '0', 'statsAge': '1.06', 'elapsedTime': '55', 'vmType': 'kvm', 'cpuSys': '57.51', 'appsList': [], 'guestIPs': ''}]}

A quick workaround would be to remove the balloon, which is accessible via the REST API:

<memory_policy>
    <guaranteed>1610612736</guaranteed>
    <ballooning>false</ballooning>
</memory_policy>

Simply set ballooning to false as in this sample.
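A minimal sketch of applying that workaround over plain HTTP, assuming the usual RHEV 3.x REST endpoint for updating a VM (PUT /api/vms/{id}); the host name, credentials, and TLS handling below are illustrative placeholders, not values from this bug, and the oVirt Python SDK would work just as well:

```python
# Sketch: disable ballooning for one VM via the RHEV-M REST API.
import requests

API = "https://rhevm.example.com/api"                 # placeholder engine URL
VM_ID = "84178847-54ac-4ea0-adc3-65ec16fe70a0"        # the VM from this bug

body = """<vm>
  <memory_policy>
    <guaranteed>1610612736</guaranteed>
    <ballooning>false</ballooning>
  </memory_policy>
</vm>"""

resp = requests.put(
    "%s/vms/%s" % (API, VM_ID),
    data=body,
    headers={"Content-Type": "application/xml"},
    auth=("admin@internal", "password"),              # placeholder credentials
    verify=False,                                     # lab setup, self-signed cert
)
resp.raise_for_status()                               # fail loudly on a bad update
```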
Created attachment 764401 [details]
Update ovirt mempolicy script
I updated the mempolicy of the VM (RHEL64-2TB), and now I can launch it. Please see the following log.
2013-06-24 19:58:15,138 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (ajp-/127.0.0.1:8702-11) START, IsVmDuringInitiatingVDSCommand( vmId = 84178847-54ac-4ea0-adc3-65ec16fe70a0), log id: 34619876
2013-06-24 19:58:15,138 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (ajp-/127.0.0.1:8702-11) FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 34619876
2013-06-24 19:58:15,180 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-4-thread-50) [3811d911] Running command: RunVmCommand internal: false. Entities affected : ID: 84178847-54ac-4ea0-adc3-65ec16fe70a0 Type: VM
2013-06-24 19:58:15,247 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand] (pool-4-thread-50) [3811d911] START, IsoPrefixVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, compatabilityVersion = null), log id: 3fb22615
2013-06-24 19:58:15,247 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand] (pool-4-thread-50) [3811d911] FINISH, IsoPrefixVDSCommand, return: /rhev/data-center/mnt/rhevm32.hp.cpc:_srv_nfs_iso/4921cd96-781a-43fa-a279-4c945904312e/images/11111111-1111-1111-1111-111111111111, log id: 3fb22615
2013-06-24 19:58:15,269 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-4-thread-50) [3811d911] START, CreateVmVDSCommand(HostName = cpc-dl980g7.hp.cpc, HostId = baec873f-dc2f-44a2-ac3c-021100675f8b, vmId=84178847-54ac-4ea0-adc3-65ec16fe70a0, vm=VM [RHEL64-2TB]), log id: 48e24d5e
2013-06-24 19:58:15,288 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-4-thread-50) [3811d911] START, CreateVDSCommand(HostName = cpc-dl980g7.hp.cpc, HostId = baec873f-dc2f-44a2-ac3c-021100675f8b, vmId=84178847-54ac-4ea0-adc3-65ec16fe70a0, vm=VM [RHEL64-2TB]), log id: 7900ac9a
2013-06-24 19:58:15,357 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-4-thread-50) [3811d911] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand spiceSslCipherSuite=DEFAULT,memSize=2097152,kvmEnable=true,smp=40,vmType=kvm,emulatedMachine=rhel6.4.0,keyboardLayout=en-us,pitReinjection=false,nice=0,display=vnc,smartcardEnable=false,tabletEnable=true,smpCoresPerSocket=4,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,timeOffset=-2,transparentHugePages=true,vmId=84178847-54ac-4ea0-adc3-65ec16fe70a0,devices=[Ljava.util.Map;@4ae74e50,acpiEnable=true,vmName=RHEL64-2TB,cpuType=Westmere,custom={device_1ecc0c93-4bba-4676-8e1a-785e87bfbd4b=VmDevice {vmId=84178847-54ac-4ea0-adc3-65ec16fe70a0, deviceId=1ecc0c93-4bba-4676-8e1a-785e87bfbd4b, device=ide, type=controller, bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci, slot=0x01, function=0x1}, managed=false, plugged=true, readOnly=false, deviceAlias=ide0}, device_1ecc0c93-4bba-4676-8e1a-785e87bfbd4bdevice_cc47705b-e514-4232-b120-457062072439=VmDevice {vmId=84178847-54ac-4ea0-adc3-65ec16fe70a0, deviceId=cc47705b-e514-4232-b120-457062072439, device=virtio-serial, type=controller, bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci, slot=0x04, function=0x0}, managed=false, plugged=true, readOnly=false, deviceAlias=virtio-serial0}, device_1ecc0c93-4bba-4676-8e1a-785e87bfbd4bdevice_cc47705b-e514-4232-b120-457062072439device_f4912582-8e6a-4381-84d7-9bbb0bf9842cdevice_14b9ef9d-c95b-463f-822b-717c44e6ff5b=VmDevice {vmId=84178847-54ac-4ea0-adc3-65ec16fe70a0, deviceId=14b9ef9d-c95b-463f-822b-717c44e6ff5b, device=unix, type=channel, bootOrder=0, specParams={}, address={port=2, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel1}, device_1ecc0c93-4bba-4676-8e1a-785e87bfbd4bdevice_cc47705b-e514-4232-b120-457062072439device_f4912582-8e6a-4381-84d7-9bbb0bf9842c=VmDevice {vmId=84178847-54ac-4ea0-adc3-65ec16fe70a0, deviceId=f4912582-8e6a-4381-84d7-9bbb0bf9842c, device=unix, type=channel, bootOrder=0, specParams={}, address={port=1, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel0}}
2013-06-24 19:58:15,357 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-4-thread-50) [3811d911] FINISH, CreateVDSCommand, log id: 7900ac9a
2013-06-24 19:58:15,362 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-4-thread-50) [3811d911] IncreasePendingVms::CreateVmIncreasing vds cpc-dl980g7.hp.cpc pending vcpu count, now 40. Vm: RHEL64-2TB
2013-06-24 19:58:15,365 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-4-thread-50) [3811d911] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 48e24d5e
2013-06-24 19:58:15,365 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-4-thread-50) [3811d911] Lock freed to object EngineLock [exclusiveLocks= key: 84178847-54ac-4ea0-adc3-65ec16fe70a0 value: VM
, sharedLocks= ]
2013-06-24 19:58:15,366 WARN [org.ovirt.engine.core.compat.backendcompat.PropertyInfo] (pool-4-thread-50) Unable to get value of property: glusterVolume for class org.ovirt.engine.core.bll.RunVmCommand
2013-06-24 19:58:15,366 WARN [org.ovirt.engine.core.compat.backendcompat.PropertyInfo] (pool-4-thread-50) Unable to get value of property: vds for class org.ovirt.engine.core.bll.RunVmCommand
2013-06-24 19:58:35,398 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-28) [73388428] VM RHEL64-2TB 84178847-54ac-4ea0-adc3-65ec16fe70a0 moved from WaitForLaunch --> PoweringUp
2013-06-24 19:58:35,398 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (QuartzScheduler_Worker-28) [73388428] START, FullListVdsCommand( HostId = baec873f-dc2f-44a2-ac3c-021100675f8b, vds=null, vmIds=[84178847-54ac-4ea0-adc3-65ec16fe70a0]), log id: ca57c25
2013-06-24 19:58:35,428 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (QuartzScheduler_Worker-28) [73388428] FINISH, FullListVdsCommand, return: [Lorg.ovirt.engine.core.vdsbroker.xmlrpc.XmlRpcStruct;@4479679f, log id: ca57c25
2013-06-24 19:58:35,430 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-28) [73388428] Received a memballoon Device without an address when processing VM 84178847-54ac-4ea0-adc3-65ec16fe70a0 devices, skipping device: {specParams={model=none}, device=memballoon, type=balloon}
2013-06-24 19:58:35,461 INFO [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (QuartzScheduler_Worker-28) [73388428] START, UpdateVdsDynamicDataVDSCommand(HostName = cpc-dl980g7.hp.cpc, HostId = baec873f-dc2f-44a2-ac3c-021100675f8b, vdsDynamic=org.ovirt.engine.core.common.businessentities.VdsDynamic@99368aca), log id: 3a1e30d
2013-06-24 19:58:35,462 INFO [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (QuartzScheduler_Worker-28) [73388428] FINISH, UpdateVdsDynamicDataVDSCommand, log id: 3a1e30d
2013-06-24 19:59:15,760 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-56) VM RHEL64-2TB 84178847-54ac-4ea0-adc3-65ec16fe70a0 moved from PoweringUp --> Up
*** Bug 975945 has been marked as a duplicate of this bug. ***

Created attachment 792704 [details]
vdsm.log - succeeded to run VM with 2TB memory
I tested it again with vdsm 4.10.2-25.0.el6ev. The VM ran successfully with 2 TB of memory on an HP ProLiant DL980 G7 (4 TB physical memory). Please see the attached log file.

I also tested the currently released version, vdsm-4.10.2-24.1.el6ev, in the same PoC environment. It works fine.

Moving to verified per comment 14.

Closing - RHEV 3.3 Released