Bug 876127 - [Scalability] Ovirt-engine-backend: There is no limit on the number of Direct LUNs that can be attached to a VM; when we exceed 28 disks, the VM fails to run due to a lack of PCI addresses.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.1.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.2.0
Assignee: Roy Golan
QA Contact: Leonid Natapov
URL:
Whiteboard: virt
Depends On:
Blocks: 915537
 
Reported: 2012-11-13 12:12 UTC by Omri Hochman
Modified: 2013-06-11 09:39 UTC
CC: 8 users

Fixed In Version: sf2
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-06-11 09:13:11 UTC
oVirt Team: ---
Target Upstream Version:
Embargoed:


Attachments
engine.log (311.39 KB, application/octet-stream)
2012-11-13 12:12 UTC, Omri Hochman


Links
System ID: oVirt gerrit 9409
Status: MERGED
Summary: core: validate max lun disk attachment
Last Updated: 2020-11-19 11:33:37 UTC

Description Omri Hochman 2012-11-13 12:12:54 UTC
Created attachment 644073 [details]
engine.log

Ovirt-engine-backend: There is no limit on the number of Direct LUNs that can be attached to a VM. When we exceed 28 disks, the VM fails to run due to a lack of PCI addresses.

Scenario:
**********
1) Attempt to add more than 28 Direct LUNs to one VM.
2) Attempt to run the VM (which now has more than 28 Direct LUNs).

Results:
*********
A) There is no limit on the number of Direct LUNs that can be attached to a VM (the 29th LUN is attached successfully).
B) When attempting to run the VM after adding more than 28 Direct LUNs, the engine issues RunVmCommand, but the VM fails to run with "Recieved a memballoon Device without an address when processing" (quoted from the log below), and the engine then attempts to rerun it on different hosts.

Engine.log
************
2012-11-13 09:12:23,805 INFO  [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-4-thread-37) [4d757e46] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id
: 259f7c8a
2012-11-13 09:12:23,805 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (pool-4-thread-37) [4d757e46] Lock freed to object EngineLock [exclusiveLocks= key: de580dd1-a977-4a9e-9016-d06689e5305b value: VM
, sharedLocks= ]
2012-11-13 09:12:27,328 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (QuartzScheduler_Worker-61) START, FullListVdsCommand( HostId = 0cea1dea-2c00-11e2-a76b-441ea17336ee, vds=null, vmIds=[de580dd1-a977-4a9e-9016-d06689e5305b]), log id: 23b6807d
2012-11-13 09:12:27,348 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (QuartzScheduler_Worker-61) FINISH, FullListVdsCommand, return: [Lorg.ovirt.engine.core.vdsbroker.xmlrpc.XmlRpcStruct;@22ada0dc, log id: 23b6807d
2012-11-13 09:12:27,357 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-61) Recieved a qxl Device without an address when processing VM de580dd1-a977-4a9e-9016-d06689e5305b devices, skipping device: {specParams={vram=65536}, device=qxl, type=video, deviceId=536cb80c-b68b-4aea-abd4-07aa18d566bb}
2012-11-13 09:12:27,357 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-61) Recieved a cdrom Device without an address when processing VM de580dd1-a977-4a9e-9016-d06689e5305b devices, skipping device: {shared=false, iface=ide, index=2, specParams={path=}, device=cdrom, path=, type=disk, readonly=true, deviceId=a1c4e09b-4ce6-40ec-be96-4defe89d4974}
2012-11-13 09:12:27,357 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-61) Recieved a disk Device without an address when processing VM de580dd1-a977-4a9e-9016-d06689e5305b devices, skipping device: {shared=false, index=0, GUID=360060160f4a0300072ae24a38d2de211, propagateErrors=off, format=raw, type=disk, iface=virtio, bootOrder=1, specParams={}, optional=false, device=disk, path=/dev/mapper/360060160f4a0300072ae24a38d2de211, readonly=false, deviceId=01e2addc-8387-4cb8-a6f4-087cd5021729}
2012-11-13 09:12:27,357 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-61) Recieved a disk Device without an address when processing VM de580dd1-a977-4a9e-9016-d06689e5305b devices, skipping device: {shared=false, index=1, GUID=360060160f4a0300070ae24a38d2de211, propagateErrors=off, format=raw, 
..
..
..
..
2012-11-13 09:12:27,359 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-61) Recieved a memballoon Device without an address when processing VM de580dd1-a977-4a9e-9016-d06689e5305b devices, skipping device: {specParams={model=virtio}, device=memballoon, type=balloon, deviceId=1b70f7b1-dd31-4703-862a-e5c9e288a0a7}

Comment 1 Roy Golan 2012-11-22 06:33:41 UTC
Is there a limit on the number of addresses the engine can be aware of?

Comment 2 Roy Golan 2012-11-22 07:34:00 UTC
VmCommand has this hard-coded, but it isn't being used for LUN disks.

from VmCommand.java

    // 26 PCI slots: 31 total minus 5 saved for qemu (Host Bridge, ISA Bridge,
    // IDE, Agent, ACPI)
    private final static int MAX_PCI_SLOTS = 26;
    // 3 IDE slots: 4 total minus 1 for CD
    private final static int MAX_IDE_SLOTS = 3;
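
For context only, here is a minimal standalone sketch (my own illustration, not the actual gerrit 9409 patch; the class and method names are made up) of the kind of check the LUN-disk attach path was missing, reusing the same MAX_PCI_SLOTS arithmetic:

// Illustrative sketch only -- not the merged fix.
// Reject another virtio (PCI) disk at attach time instead of letting
// RunVm fail later with devices that have no PCI address.
public class PciSlotCheck {
    // 26 PCI slots: 31 total minus 5 reserved for qemu
    // (Host Bridge, ISA Bridge, IDE, Agent, ACPI)
    private static final int MAX_PCI_SLOTS = 26;

    // usedPciSlots = PCI slots already consumed by the VM's devices
    // (existing virtio disks, NICs, balloon, video, ...)
    public static boolean canAttachVirtioDisk(int usedPciSlots) {
        return usedPciSlots + 1 <= MAX_PCI_SLOTS;
    }

    public static void main(String[] args) {
        System.out.println(canAttachVirtioDisk(20)); // true  -> attach allowed
        System.out.println(canAttachVirtioDisk(26)); // false -> "Maximum PCI devices exceeded"
    }
}

The merged change presumably performs an equivalent count inside the attach command's validation, which is why the failure later surfaces as the message quoted in comment 9 rather than as a failed VM start.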

Comment 3 Roy Golan 2012-11-22 10:22:44 UTC
http://gerrit.ovirt.org/#/c/9409/

Comment 4 Michal Skrivanek 2012-11-22 11:18:08 UTC
where is the limit coming from anyway?

Comment 5 Roy Golan 2012-11-22 13:13:16 UTC
I think the PCI standard says 32. I tried to grep qemu's code but couldn't find a number.

Comment 6 Yaniv Kaul 2012-11-22 13:18:42 UTC
(In reply to comment #5)
> I think that the pci standard says 32. I tried to grep qemu's code but
> couldn't find a number.

hw/pci.h:#define PCI_SLOT_MAX            32

Comment 7 Roy Golan 2012-11-25 06:27:25 UTC
(In reply to comment #6)
> (In reply to comment #5)
> > I think that the pci standard says 32. I tried to grep qemu's code but
> > couldn't find a number.
> 
> hw/pci.h:#define PCI_SLOT_MAX            32

couldn't find it in:
http://git.qemu.org/qemu.git?p=qemu.git&a=search&h=HEAD&st=grep&s=32

did you fetch the repo for that?

Comment 9 Leonid Natapov 2013-02-24 08:57:58 UTC
sf7. Fixed. A check against the PCI address limit is now performed when adding a LUN
disk; exceeding it fails with "Cannot add Virtual Machine Disk. Maximum PCI devices exceeded."

Comment 10 Itamar Heim 2013-06-11 09:13:11 UTC
3.2 has been released

Comment 11 Itamar Heim 2013-06-11 09:39:50 UTC
3.2 has been released

