Bug 973924 - Failed power-on VM with multiple disks - Changed state to Down: internal error No more available PCI addresses
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.2.0
Hardware: x86_64 Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.3.0
Assigned To: Michal Skrivanek
QA Contact: vvyazmin@redhat.com
Whiteboard: virt
Keywords: Triaged
Depends On:
Blocks:
Reported: 2013-06-13 02:49 EDT by vvyazmin@redhat.com
Modified: 2015-09-22 09 EDT
CC: 7 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-06-14 04:58:45 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
## Logs rhevm, vdsm, libvirt (1.76 MB, application/x-gzip)
2013-06-13 02:49 EDT, vvyazmin@redhat.com

Description vvyazmin@redhat.com 2013-06-13 02:49:19 EDT
Created attachment 760494
## Logs rhevm, vdsm, libvirt

Description of problem:
Failed power-on VM with multiple disks - Changed state to Down: internal error No more available PCI addresses

Version-Release number of selected component (if applicable):
RHEVM 3.2 - SF17.5 environment: 

RHEVM: rhevm-3.2.0-11.30.el6ev.noarch 
VDSM: vdsm-4.10.2-22.0.el6ev.x86_64 
LIBVIRT: libvirt-0.10.2-18.el6_4.5.x86_64 
QEMU & KVM: qemu-kvm-rhev-0.12.1.2-2.355.el6_4.5.x86_64 
SANLOCK: sanlock-2.6-2.el6.x86_64
PythonSDK: rhevm-sdk-3.2.0.11-1.el6ev.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create a VM with 26 disks using the 'VirtIO' interface (26 is the maximum number of PCI devices that RHEVM supports).
2. Try to power on this VM.
  
Actual results:
Power-on fails with "internal error No more available PCI addresses".

Expected results:
The UI should display a pop-up explaining that this action cannot be performed, and should block the action.
A VM can be created with a maximum of 26 PCI devices, but it can be powered on with only 25 PCI devices.

Impact on user:
The VM fails to power on.

Workaround:
Power on the VM with 25 PCI devices.

Additional info:

/var/log/ovirt-engine/engine.log
2013-06-12 19:16:44,686 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (QuartzScheduler_Worker-100) [328b0465] START, DestroyVDSCommand(HostName = tigris03.scl.lab.tlv.redhat.com, HostId = bf4130f9-32b6-4ab7-af63-83e99ea46e87, vmId=da5b8350-3e0a-4b1f-aa1e-ac31fcdfaafb, force=false, secondsToWait=0, gracefully=false), log id: 113ef30a
2013-06-12 19:17:01,578 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (QuartzScheduler_Worker-100) [328b0465] FINISH, DestroyVDSCommand, log id: 113ef30a
2013-06-12 19:17:01,578 WARN  [org.ovirt.engine.core.compat.backendcompat.PropertyInfo] (QuartzScheduler_Worker-100) Unable to get value of property: glusterVolume for class org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogableBase
2013-06-12 19:17:01,579 WARN  [org.ovirt.engine.core.compat.backendcompat.PropertyInfo] (QuartzScheduler_Worker-100) Unable to get value of property: vds for class org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogableBase
2013-06-12 19:17:01,595 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-100) [328b0465] Running on vds during rerun failed vm: null
2013-06-12 19:17:01,596 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-100) [328b0465] vm LSM_vm_000 running in db and not running in vds - add to rerun treatment. vds tigris03.scl.lab.tlv.redhat.com
2013-06-12 19:17:01,607 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-100) [328b0465] Rerun vm da5b8350-3e0a-4b1f-aa1e-ac31fcdfaafb. Called from vds tigris03.scl.lab.tlv.redhat.com
2013-06-12 19:17:01,612 INFO  [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (pool-4-thread-48) [328b0465] START, UpdateVdsDynamicDataVDSCommand(HostName = tigris03.scl.lab.tlv.redhat.com, HostId = bf4130f9-32b6-4ab7-af63-83e99ea46e87, vdsDynamic=org.ovirt.engine.core.common.businessentities.VdsDynamic@16f88c89), log id: 722015e4
2013-06-12 19:17:01,619 INFO  [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (pool-4-thread-48) [328b0465] FINISH, UpdateVdsDynamicDataVDSCommand, log id: 722015e4
2013-06-12 19:17:01,619 WARN  [org.ovirt.engine.core.compat.backendcompat.PropertyInfo] (pool-4-thread-48) Unable to get value of property: glusterVolume for class org.ovirt.engine.core.bll.RunVmCommand
2013-06-12 19:17:01,619 WARN  [org.ovirt.engine.core.compat.backendcompat.PropertyInfo] (pool-4-thread-48) Unable to get value of property: vds for class org.ovirt.engine.core.bll.RunVmCommand
2013-06-12 19:17:01,643 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (pool-4-thread-48) [328b0465] Lock Acquired to object EngineLock [exclusiveLocks= key: da5b8350-3e0a-4b1f-aa1e-ac31fcdfaafb value: VM


/var/log/vdsm/vdsm.log

Thread-32072::DEBUG::2013-06-12 17:17:47,894::vm::1092::vm.Vm::(setDownStatus) vmId=`da5b8350-3e0a-4b1f-aa1e-ac31fcdfaafb`::Changed state to Down: internal error No more available PCI addresses
Comment 1 Michal Skrivanek 2013-06-14 04:58:45 EDT
xzgrep -A197 '<devices>' home/vvyazmin/logs/Error/vdsm.log.1.xz|grep device=|wc -l
      27

You are powering on with 27 devices: 1 CD-ROM attached plus 26 disks.
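The device count above can also be obtained by parsing the domain XML directly instead of grepping the log. A minimal Python sketch of that counting step (the example XML and the helper name are illustrative, not part of vdsm):

```python
import xml.etree.ElementTree as ET

def count_devices(domain_xml: str) -> int:
    """Count children of <devices> that carry a device= attribute
    (disks and CD-ROMs), analogous to `grep device= | wc -l`."""
    root = ET.fromstring(domain_xml)
    devices = root.find("devices")
    if devices is None:
        return 0
    return sum(1 for child in devices if child.get("device") is not None)

# Example mirroring this bug: 26 VirtIO disks plus one CD-ROM.
disks = "".join('<disk type="block" device="disk"/>' for _ in range(26))
xml = f"<domain><devices>{disks}<disk type='file' device='cdrom'/></devices></domain>"
print(count_devices(xml))  # 27 - one more than the 26 PCI devices RHEVM supports
```

This makes the arithmetic in the resolution explicit: the CD-ROM occupies one of the available PCI slots, so a VM defined with 26 VirtIO disks presents 27 devices at power-on.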
