Description of problem:
Creating VM with custom PCI address for VirtIO disks fails.

Version-Release number of selected component (if applicable):
CNV-1.3

How reproducible:
Create a VMI via the vm-spec:

  spec:
    domain:
      devices:
        disks:
        - disk:
            bus: virtio
            pciAddress: 0000:04:10.0
          name: registrydisk
          volumeName: registryvolume

Steps to Reproduce:
1. Specify custom pciAddress for virtio disk as mentioned above.

Actual results:
VM creation fails with custom pciAddress for virtio disks, with the below message as seen from "oc describe vmi <vm-name>":
---
Status:
  Conditions:
    Message: server error. command Launcher.Sync failed: virError(Code=27, Domain=20, Message='XML error: Invalid PCI address 0000:04:10.0. slot must be <= 0')
---
Events:
  Type     Reason      Age                From                                                    Message
  ----     ------      ----               ----                                                    -------
  Warning  SyncFailed  7s (x12 over 18s)  virt-handler, cnv-executor-kbidarka-node2.example.com   server error. command Launcher.Sync failed: virError(Code=27, Domain=20, Message='XML error: Invalid PCI address 0000:04:10.0. slot must be <= 0')
---
Cannot provide/attach domain xml file from virt-launcher as it does not exist.

Expected results:
VM creation should work successfully with custom pciAddress for virtio disks.

Additional info:
1) https://libvirt.org/formatdomain.html#elementsAddress
2) https://github.com/kubevirt/kubevirt/pull/1484
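For reference, a minimal complete VMI manifest along these lines reproduces the failure. This is only a sketch assuming the v1alpha2 API in use at the time; the VMI name, memory request, container image, and volume source are illustrative placeholders, not taken from the original report:

  apiVersion: kubevirt.io/v1alpha2
  kind: VirtualMachineInstance
  metadata:
    name: vmi-pci-test          # placeholder name
  spec:
    domain:
      devices:
        disks:
        - disk:
            bus: virtio
            pciAddress: 0000:04:10.0   # custom PCI address under test
          name: registrydisk
          volumeName: registryvolume
      resources:
        requests:
          memory: 64M            # illustrative value
    volumes:
    - name: registryvolume
      registryDisk:
        image: kubevirt/cirros-registry-disk-demo   # example demo image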
I do not consider this a bug. Libvirt clearly explains what constraints apply to the addresses, and KubeVirt is just passing this message back to the user. This is also what happens today. To be clear: we try to avoid replicating libvirt's validation functionality inside of KubeVirt; instead we should improve how this libvirt validation feedback is delivered to the user.
Agreed, the symptoms may make this look like not a bug but rather an issue with libvirt constraints. I raised this bug for the symptoms initially, while working with PCI addresses for virtio disks, because no matter what values I used it always failed. Later, after investigating the symptoms reported in this bug, the actual issue was identified as this: https://github.com/kubevirt/kubevirt/issues/1668#issuecomment-435895100 Now, this bug is required to track that particular issue. :)
Right. This bug is about removing the blacklisting of certain PCI addresses that is currently in place.
OK, I was testing with the "q35" chipset and the issue occurs with it; I will proceed with testing this with "pc".
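For context, the chipset is selected via the machine type in the VMI spec under domain.machine. A minimal sketch showing where it sits relative to the disk definition (field placement as in the KubeVirt API, values illustrative):

  spec:
    domain:
      machine:
        type: pc               # or q35; selects the emulated chipset
      devices:
        disks:
        - disk:
            bus: virtio
            pciAddress: 0000:04:10.0
          name: registrydisk
          volumeName: registryvolume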
Vladik, was this addressed by one of the patches in that area?
(In reply to Fabian Deutsch from comment #9)
> Vladik, was this addressed by one of the patches in that area?

Fabian, I've backported the fix in https://github.com/kubevirt/kubevirt/pull/1674
Oh, sorry, it was fixed in https://github.com/kubevirt/kubevirt/pull/1669
Okay, this is in v0.10 and was not backported, thus it lands in 1.4: https://github.com/kubevirt/kubevirt/commit/391753ddffb7a770bb7372196c0860e08ec5c775
Created attachment 1520560 [details]
Tested with custom PCI address with q35 "machine type".
This works fine now with machine type "q35".

VERIFIED with CNV-1.4.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2019:0417