Created attachment 1740003 [details]
engine.log

Description of problem:
On certain hosts (lscpu and numactl output below), creating a VM with auto_pinning_policy=adjust fails with:
ERROR [org.ovirt.engine.core.bll.AddVmCommand] (default task-20) [] Exception: java.lang.ArrayIndexOutOfBoundsException: Index 7 out of bounds for length 6

Version-Release number of selected component (if applicable):
ovirt-engine-4.4.4.4-0.9.el8ev.noarch

How reproducible:
100%

Steps to Reproduce:
1. POST https://{{host}}/ovirt-engine/api/vms?auto_pinning_policy=adjust
<vm>
    <name>auto_cpu_vm_adjust_policy</name>
    <template>
        <name>latest-rhel-guest-image-8.3-infra</name>
    </template>
    <cluster>
        <name>golden_env_mixed_1</name>
    </cluster>
    <placement_policy>
        <hosts>
            <host>
                <name>host_mixed_1</name>
            </host>
        </hosts>
    </placement_policy>
</vm>

against the following host:

[root@lynx22 ~]# lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              12
On-line CPU(s) list: 0-11
Thread(s) per core:  1
Core(s) per socket:  6
Socket(s):           2
NUMA node(s):        2
Vendor ID:           GenuineIntel
CPU family:          6
Model:               63
Model name:          Intel(R) Xeon(R) CPU E5-2609 v3 @ 1.90GHz
Stepping:            2
CPU MHz:             1366.578
CPU max MHz:         1900.0000
CPU min MHz:         1200.0000
BogoMIPS:            3799.89
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            256K
L3 cache:            15360K
NUMA node0 CPU(s):   0,2,4,6,8,10
NUMA node1 CPU(s):   1,3,5,7,9,11

[root@lynx22 ~]# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10
node 0 size: 15645 MB
node 0 free: 12097 MB
node 1 cpus: 1 3 5 7 9 11
node 1 size: 15962 MB
node 1 free: 5834 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10

Actual results:
The request fails with:
ERROR [org.ovirt.engine.core.bll.AddVmCommand] (default task-20) [] Exception: java.lang.ArrayIndexOutOfBoundsException: Index 7 out of bounds for length 6

Expected results:
The VM is created.

Additional info:
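Note that the failing index (7) and array length (6) line up suspiciously with this host's layout: 6 cores per socket, but interleaved NUMA CPU numbering where node 1 holds CPU ids 1,3,5,7,9,11. A minimal hypothetical sketch (not the actual ovirt-engine code) of how exactly this exception message can arise, assuming an array sized per socket is indexed with a raw CPU id:

```java
// Hypothetical reconstruction: a per-socket array (length 6) indexed by the
// raw CPU id overruns as soon as it hits CPU 7 on NUMA node 1.
public class OutOfBoundsSketch {
    static final int CORES_PER_SOCKET = 6;
    // NUMA node 1 CPU ids on the host in this report:
    static final int[] NODE1_CPUS = {1, 3, 5, 7, 9, 11};

    // Marks CPUs in a per-socket array, wrongly using the raw CPU id as index.
    static String markWithRawIds() {
        boolean[] used = new boolean[CORES_PER_SOCKET];
        try {
            for (int cpu : NODE1_CPUS) {
                used[cpu] = true; // cpu == 7 overruns the length-6 array
            }
            return "ok";
        } catch (ArrayIndexOutOfBoundsException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        // Prints: Index 7 out of bounds for length 6
        System.out.println(markWithRawIds());
    }
}
```

CPUs 1, 3 and 5 happen to fit in the length-6 array, so the first id to fail is 7, matching the engine log verbatim.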
It might happen that the hardware is not entirely suitable for the VM's vCPUs. We will apply vCPU pinning anyway, trying to fit the current situation. After looking again: the host has only 1 thread per core, and with the current topology the pinning should be: 0#2_1#4_2#6_3#8_4#10_5#3_6#5_7#7_8#9_9#11
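The expected pinning string above can be reproduced from the numactl node CPU lists. A minimal sketch, assuming one CPU per NUMA node (the first one listed) is left unpinned and the remaining CPUs are assigned to vCPUs node by node; buildPinning and its vcpu#pcpu string format here mirror the value quoted above, not a real engine API:

```java
import java.util.ArrayList;
import java.util.List;

public class PinningSketch {
    // Builds a "vcpu#pcpu_vcpu#pcpu_..." string from per-node pCPU lists,
    // skipping the first CPU of each node (assumed left free for the host).
    static String buildPinning(int[][] nodeCpus) {
        List<Integer> pcpus = new ArrayList<>();
        for (int[] node : nodeCpus) {
            for (int i = 1; i < node.length; i++) { // skip node's first CPU
                pcpus.add(node[i]);
            }
        }
        StringBuilder sb = new StringBuilder();
        for (int vcpu = 0; vcpu < pcpus.size(); vcpu++) {
            if (vcpu > 0) sb.append('_');
            sb.append(vcpu).append('#').append(pcpus.get(vcpu));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        int[][] host = {
            {0, 2, 4, 6, 8, 10}, // NUMA node 0 on the reported host
            {1, 3, 5, 7, 9, 11}  // NUMA node 1
        };
        // Prints: 0#2_1#4_2#6_3#8_4#10_5#3_6#5_7#7_8#9_9#11
        System.out.println(buildPinning(host));
    }
}
```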
Verified on ovirt-engine-4.4.5-0.11.el8ev.noarch.
This bugzilla is included in oVirt 4.4.5 release, published on March 18th 2021. Since the problem described in this bug report should be resolved in oVirt 4.4.5 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.