
Bug 2047271

Summary: [RHEL9] Libvirt can't start a guest if virtio-mem/virtio-pmem is on PCI bus != 0
Product: Red Hat Enterprise Linux 9
Reporter: Michal Privoznik <mprivozn>
Component: libvirt
Assignee: Michal Privoznik <mprivozn>
libvirt sub component: General
QA Contact: Jing Qi <jinqi>
Status: CLOSED ERRATA
Docs Contact:
Severity: unspecified
Priority: unspecified
CC: dhildenb, jdenemar, jinqi, jsuchane, lcheng, lcong, lmen, mprivozn, pkrempa, virt-maint, xuzhang, yanghliu
Version: 9.0
Keywords: AutomationBackLog, Triaged, Upstream
Target Milestone: rc
Flags: pm-rhel: mirror+
Target Release: ---
Hardware: x86_64
OS: Unspecified
Whiteboard:
Fixed In Version: libvirt-8.0.0-4.el9
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 2014487
Clones: 2048435, 2050702 (view as bug list)
Environment:
Last Closed: 2022-05-17 12:46:34 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version: 8.1.0
Embargoed:
Bug Depends On:
Bug Blocks: 2014487, 2047797

Description Michal Privoznik 2022-01-27 13:46:52 UTC
+++ This bug was initially created as a clone of Bug #2014487 +++

If a virtio-mem or virtio-pmem memory device is on a PCI bus different from the default pci.0, starting such a guest results in a QEMU error:

error: internal error: qemu unexpectedly closed the monitor: 2022-01-27T13:44:29.462369Z qemu-system-x86_64: -device {"driver":"virtio-pmem-pci","memdev":"memvirtiopmem0","id":"virtiopmem0","bus":"pci.1","addr":"0xa"}: Bus 'pci.1' not found

Steps to reproduce:
1) add a virtio-mem/virtio-pmem device to the domain config XML so that it has <address ... bus='0x1'/>
2) start the guest
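For reference, a minimal device snippet of the kind step 1 describes, modeled on the XML shown in later comments (sizes and the exact bus number are illustrative; a controller providing bus 0x01, e.g. a pci-bridge, must also exist in the domain XML):

```xml
<!-- Illustrative only: virtio-mem device placed on a non-default PCI bus -->
<memory model='virtio-mem'>
  <target>
    <size unit='KiB'>131072</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>131072</requested>
  </target>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</memory>
```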

--- Additional comment from Michal Privoznik on 2022-01-27 11:06:07 CET ---

(In reply to Jing Qi from comment #6)
> Verified with libvirt-8.0.0-1.el9.x86_64 & qemu-kvm-6.2.0-4.el9.x86_64 &
> kernel version 5.14.0-47.el9.x86_64 -

> So, can you please help to confirm if the attach virtio-mem device works as
> expected?

Yeah, the failure is not expected. But it looks like a command-line argument ordering problem. I mean, when I configure virtio-mem to be on bus='0x01', the following command line is generated:

qemu-system-x86_64
-name guest=gentoo,debug-threads=on
-S
-object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-gentoo/master-key.aes"}'
-machine pc-i440fx-7.0,usb=off,dump-guest-core=off \
...
-object '{"qom-type":"memory-backend-file","id":"memua-virtiomem","mem-path":"/hugepages2M/libvirt/qemu/1-gentoo","reserve":false,"size":4294967296}'
-device '{"driver":"virtio-mem-pci","node":0,"block-size":2097152,"memdev":"memua-virtiomem","prealloc":true,"id":"ua-virtiomem","bus":"pci.0","addr":"0x6"}'
-object '{"qom-type":"memory-backend-ram","id":"memua-virtiomem2","reserve":false,"size":4294967296}'
-device '{"driver":"virtio-mem-pci","node":0,"block-size":2097152,"memdev":"memua-virtiomem2","id":"ua-virtiomem2","bus":"pci.1","addr":"0x9"}'
...
-device '{"driver":"pci-bridge","chassis_nr":1,"id":"pci.1","bus":"pci.0","addr":"0x9"}'
-device '{"driver":"piix3-usb-uhci","id":"usb","bus":"pci.0","addr":"0x1.0x2"}'
-device '{"driver":"lsi","id":"scsi0","bus":"pci.0","addr":"0x5"}'
-device '{"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.0","addr":"0x7"}'

Therefore, when QEMU starts up and sees the first virtio-mem-pci device ("id":"ua-virtiomem"), it will just create it and continue to the next one (ua-virtiomem2), where it sees the "pci.1" bus, which does not exist yet at that point. The bus is created (well, would be) a few arguments later. Let me see if a simple reorder fixes the problem (and think through all the implications).
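Since QEMU processes -device arguments left to right, the reorder described above amounts to emitting memory devices only after the controllers that provide their buses. A minimal sketch of that idea in Python (not libvirt's actual code; the dict layout and function name are invented for illustration):

```python
def order_device_args(devices):
    """Sketch of the ordering fix: emit virtio-mem/virtio-pmem devices
    only after all other devices (including pci-bridge controllers),
    so any bus they reference already exists when QEMU creates them."""
    memory = [d for d in devices
              if d["driver"].startswith(("virtio-mem", "virtio-pmem"))]
    rest = [d for d in devices if d not in memory]
    # Controllers and other devices first, memory devices last.
    return rest + memory
```

With the buggy ordering from the command line above, the sketch would move ua-virtiomem2 after the pci-bridge that defines pci.1.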

Comment 1 Michal Privoznik 2022-01-27 13:48:26 UTC
Patch posted on the list:

https://listman.redhat.com/archives/libvir-list/2022-January/msg01234.html

Comment 2 Jing Qi 2022-01-29 02:02:34 UTC
Michal, can you please help confirm whether the migration issue is also fixed by the above patch? Thanks

<memory model='virtio-mem'>
      <source>
        <pagesize unit='KiB'>2048</pagesize>
      </source>
      <target>
        <size unit='KiB'>131072</size>
        <node>0</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>131072</requested>
      </target>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </memory>

virsh migrate rhel9  qemu+ssh://dell-per740xd-27.lab.eng.pek2.redhat.com/system --live --

error: internal error: qemu unexpectedly closed the monitor: 2022-01-29T01:57:12.336180Z qemu-kvm: -device virtio-mem-pci,node=0,block-size=2097152,requested-size=134217728,memdev=memvirtiomem0,id=virtiomem0,bus=pcie.0,addr=0x1: 'virtio-mem-pci' is not a valid device model name

Comment 3 Jing Qi 2022-01-29 04:30:49 UTC
More info about the above comment: the VM is migrated from RHEL 9 to RHEL 8.6, but RHEL 8.6 still doesn't support virtio-mem. The error message could be enhanced.
For migrating a VM with virtio-mem from RHEL 9 to RHEL 9, I filed a new bug 2048022.

Comment 4 Michal Privoznik 2022-01-31 08:38:39 UTC
(In reply to Jing Qi from comment #2)
>
> error: internal error: qemu unexpectedly closed the monitor:
> 2022-01-29T01:57:12.336180Z qemu-kvm: -device
> virtio-mem-pci,node=0,block-size=2097152,requested-size=134217728,
> memdev=memvirtiomem0,id=virtiomem0,bus=pcie.0,addr=0x1: 'virtio-mem-pci' is
> not a valid device model name

Huh, so this indeed is a problem, but again not specific to virtio-mem. It only demonstrates itself via virtio-mem because that's one of the few differences between the RHEL-9 and RHEL-8.6 QEMUs. But in general, XMLs used in migration or save/restore of a domain are not validated. Let me open it as a new bug.

Comment 5 Michal Privoznik 2022-02-02 13:27:15 UTC
Merged upstream as:

af23241cfe qemu_command: Generate memory only after controllers

v8.0.0-260-gaf23241cfe

Comment 6 Michal Privoznik 2022-02-04 13:19:32 UTC
To POST:

https://gitlab.com/redhat/rhel/src/libvirt/-/merge_requests/9

Comment 7 Jing Qi 2022-02-11 06:26:52 UTC
Tested with version- libvirt-daemon-8.0.0-4.el9.x86_64 & qemu-kvm-6.2.0-7.el9.x86_64

1. Start a VM with a virtio-mem device with the below PCI address

 <memory model='virtio-mem'>
      <target>
        <size unit='KiB'>131072</size>
        <node>0</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>131072</requested>
        <current unit='KiB'>131072</current>
      </target>
      <alias name='virtiomem0'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </memory>

# virsh start rhel_i
Domain 'rhel_i' started

2. Attach the device and run dumpxml to check that the memory device is attached.

# virsh attach-device rhel_i virtiomem.xml
Device attached successfully

virtiomem.xml:

 <memory model='virtio-mem'>
      <target>
        <size unit='KiB'>131072</size>
        <node>0</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>131072</requested>
        <current unit='KiB'>131072</current>
      </target>
      <alias name='virtiomem1'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </memory>

Comment 10 Jing Qi 2022-02-14 04:49:55 UTC
Marking it verified according to Comment 7.

Comment 12 errata-xmlrpc 2022-05-17 12:46:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (new packages: libvirt), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2390