Trying to boot Fedora-Workstation-Live-x86_64-39-20230905.n.0.iso with virt-manager on an F39 host fails: only "Display output is not active" is shown on a black screen.

Reproducible: Always
Created attachment 1987697 [details] journal
Created attachment 1987698 [details] screencast
Proposed as a Blocker for 39-beta by Fedora user lnie using the blocker tracking app because:

This violates: "The release must be able to host virtual guest instances of the same release."
This works for me; I've been testing this way for weeks. How is your VM configured? If you create a new VM, do you still have this problem? Thanks!
Created attachment 1987801 [details] screencast
> How is your VM configured?

Nothing special, I just set it to UEFI boot.

> If you create a new VM, do you still have this problem? Thanks!

Yes.
Ah, I think I found the cause: I set the firmware to UEFI x86_64: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd. If I create a new VM with a different firmware, it works. VMs with that firmware work well on F37 and F38 hosts, so I guess that firmware needs to be updated for F39, and this bug doesn't need to be a blocker?
Ah, that's interesting - I actually saw the same thing a few days ago, but I'd never tried using that firmware setting before, so I assumed it had already been broken. It's... a bit worrying if that firmware doesn't work, but we'd best ask an expert... let's re-assign to edk2 for now...
> but I'd never tried using that firmware setting before, so I assumed it had already been broken

I've been using it all the time whenever I test UEFI, going back to the BIOS days, and of course now.

> let's re-assign to edk2 for now...

Thanks for doing that.
> > but I'd never tried using that firmware setting before, so I assumed it had already been broken
> I've been using it all the time whenever I test UEFI, going back to the BIOS days

That being said, it might have been broken before; I'm not sure, as I only tested UEFI occasionally back then :)
My F39 host doesn't have a problem booting Fedora-Workstation-Live-x86_64-39-20230910.n.0.iso. New VM, choose the ISO, set RAM to 8096, disable storage, Customize Install, select /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd, and it boots to the desktop.

lnie, is it 100% reproducible for you? Does F38 Workstation media fail too?
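For reference, a roughly equivalent virt-install invocation would look something like this (the VM name, ISO path, and osinfo ID are placeholders, and the exact --boot suboption spellings may differ between virt-install versions):

  virt-install \
    --name f39-test \
    --osinfo fedora39 \
    --memory 8096 \
    --disk none \
    --cdrom /path/to/Fedora-Workstation-Live-x86_64-39-20230910.n.0.iso \
    --boot loader=/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd,loader.readonly=yes,loader.type=pflash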
> lnie, is it 100% reproducible for you? Does F38 Workstation media fail too?

Yes, it's 100% reproducible, and F38 failed too. Actually, I found the bug on a Beaker machine running an F39 Workstation system and then reproduced it on my local machine. I know the hardware isn't related; I just want to highlight that I see this bug on more than one F39 host. But when I tried exactly what you described just now, I found that what I'm using on the F39 system is /usr/share/edk2/ovmf/OVMF_CODE_4M.secboot.qcow2 (there is only /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd on an F38 host by default). Sorry, I should have checked more carefully.
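(For what it's worth, the firmware builds shipped on a given host can be listed with:

  ls /usr/share/edk2/ovmf/OVMF_CODE*

which makes the F38/F39 difference easy to spot.)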
Created attachment 1988089 [details] screencast
So this bug only happens, and is 100% reproducible, with /usr/share/edk2/ovmf/OVMF_CODE_4M.secboot.qcow2.
I have tested both OVMF_CODE_4M.secboot and OVMF_CODE.secboot.fd as VM firmware on a fully updated Fedora 39. I cannot reproduce the bug; the VMs boot just fine, not only in live mode but also after installation. This holds true for both the F39 Workstation image and the Rawhide Workstation image. I have not tested any other EFI modes yet, but if there is a need, I can do it.
The mixed testing results suggest that there are some configuration problems... So, some background:

F38 has:
  (1) OVMF_CODE.secboot.fd + OVMF_VARS.secboot.fd

F39 has:
  (1) OVMF_CODE.secboot.fd + OVMF_VARS.secboot.fd
  (2) OVMF_CODE_4M.secboot.qcow2 + OVMF_VARS_4M.secboot.qcow2

Expected behavior:
  * Existing VMs (i.e. after an F38 -> F39 upgrade) should continue to use (1).
  * Newly created VMs (virt-install --boot uefi) on F39 should use (2).

You can't freely mix and match code and vars, and when using the 4M qcow2 builds you also have to take care to specify the format. The libvirt XML for (2) looks like this:

  <os firmware='efi'>
    <type arch='x86_64' machine='pc-q35-5.0'>hvm</type>
    <firmware>
      <feature enabled='yes' name='enrolled-keys'/>
      <feature enabled='yes' name='secure-boot'/>
    </firmware>
    <loader readonly='yes' secure='yes' type='pflash' format='qcow2'>/usr/share/edk2/ovmf/OVMF_CODE_4M.secboot.qcow2</loader>
    <nvram template='/usr/share/edk2/ovmf/OVMF_VARS_4M.secboot.qcow2' format='qcow2'>/var/lib/libvirt/qemu/nvram/fedora-org-base_VARS.qcow2</nvram>
  </os>
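A quick way to check what an existing VM ended up with ("myvm" being a placeholder domain name):

  virsh dumpxml myvm | grep -E '<loader|<nvram'

The loader and nvram lines should show a matching code/vars pair and, for the 4M qcow2 builds, an explicit format='qcow2'.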
Testing here: cannot reproduce the bug. Even using OVMF_CODE_4M.secboot.qcow2 on F39 Workstation, the VM worked fine.
This could be libvirt version related. libvirt-9.7.0-1.fc39 is in updates-testing and is known to have some firmware autoselection fixes. I don't know what version I was using, and I dismantled my setup.

lnie, are you on libvirt 9.6.0? Does updating make a difference?
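To check the installed version, and to pull in the fixed build while it is still in updates-testing, something along these lines should work:

  rpm -q libvirt
  sudo dnf upgrade --enablerepo=updates-testing 'libvirt*'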
Created attachment 1988142 [details] VM settings at virt-manager
Created attachment 1988153 [details] VM booted normally in virt-manager; the terminal shows the host system version
Here the system is at libvirt-9.7.0-1.fc39.x86_64.
I do not have the libvirt package on my system like @geraldo.simiao.kutz. Instead, I have python3-libvirt-9.7.0-1.fc39.x86_64.
> lnie, are you on libvirt 9.6.0? Does updating make a difference?

Yes, and the problem is gone after I updated to libvirt-9.7.0-1.fc39.x86_64. Thanks!
(In reply to Lukas Ruzicka from comment #22)
> I do not have the libvirt package on my system like @geraldo.simiao.kutz.
> Instead, I have python3-libvirt-9.7.0-1.fc39.x86_64

Yeah, for libvirtd to run I had to install the libvirt package, because the ISO came without it.
Created attachment 1988180 [details] packages installed with the current ISO, Fedora-Workstation-Live-x86_64-39-20230910.n.0
Created attachment 1988181 [details] I installed libvirt so the libvirtd.service could run

GNOME Boxes works fine, but virt-manager needs the libvirtd.service running.
And then I updated to the new version from updates-testing, libvirt-9.7.0-1.fc39.x86_64.
The fixed libvirt is only in updates-testing, so the bug should not be closed.
FEDORA-2023-57fd2e3393 has been submitted as an update to Fedora 39. https://bodhi.fedoraproject.org/updates/FEDORA-2023-57fd2e3393
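(For testers: before the update reaches stable, it can be pulled in by advisory ID, e.g.:

  sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2023-57fd2e3393

assuming dnf's --advisory option is available on your system.)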
Discussed during the 2023-09-11 blocker review meeting: [0] The decision to classify this bug as a "RejectedBlocker (Beta)" was made as we don't believe GNOME Boxes offers the affected configuration, and anyone running into it with any other tool should have updates available at the time, so it's fine for this to go as a regular update. [0] https://meetbot.fedoraproject.org/fedora-blocker-review/2023-09-11/f39-blocker-review.2023-09-11-16.00.txt
> I do not have the libvirt package on my system like @geraldo.simiao.kutz.
> Instead, I have python3-libvirt-9.7.0-1.fc39.x86_64

python3-libvirt depends on the API provided by libvirt; I'm afraid you can't install python3-libvirt without libvirt installed.

> The fixed libvirt is only in updates-testing, so the bug should not be closed.

Ah, right. I was just trying to move this bug off the blocker list last night to save you guys' time, as it's fixed and clearly not a blocker. So... dear Fedora experts :), should I close the blocker issue ticket next time instead?
As the person who proposed the bug as a blocker, you can "unpropose" it by clearing the blocker tracker from the Blocks: field. So, make it not block the Beta blocker tracker bug any more. We already rejected it at the meeting today, but in future you can do that :)
> As the person who proposed the bug as a blocker, you can "unpropose" it by
> clearing the blocker tracker from the Blocks: field. So, make it not block
> the Beta blocker tracker bug any more. We already rejected it at the
> meeting today, but in future you can do that :)

Thanks Adam, I will try it that way next time :)
(In reply to Cole Robinson from comment #18)
> This could be libvirt version related. libvirt-9.7.0-1.fc39 is in
> updates-testing and is known to have some firmware autoselection fixes.

FWIW, I initially thought that the fixes made in 9.7.0 would not affect this scenario; after looking more closely, however, I have come to the conclusion that they absolutely do.

More specifically, when selecting the firmware in question virt-manager will generate XML that looks like

  <os>
    <type arch='x86_64' machine='q35'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/edk2/ovmf/OVMF_CODE_4M.secboot.qcow2</loader>
  </os>

which libvirt 9.6.0 interprets as

  <os>
    <type arch='x86_64' machine='q35'>hvm</type>
    <loader readonly='yes' type='pflash' format='raw'>/usr/share/edk2/ovmf/OVMF_CODE_4M.secboot.qcow2</loader>
  </os>

and, since the only firmware descriptor for that path is one that says its format is qcow2, it can't find any matches:

  qemuFirmwareMatchDomain:1171 : No matching path in '/usr/share/qemu/firmware/41-edk2-ovmf-2m-raw-x64-sb.json'
  qemuFirmwareMatchDomain:1294 : Discarding loader with mismatching flash format 'qcow2' != 'raw'
  qemuFirmwareMatchDomain:1171 : No matching path in '/usr/share/qemu/firmware/50-edk2-ovmf-x64-microvm.json'

Notice how it skipped over /usr/share/qemu/firmware/50-edk2-ovmf-4m-qcow2-x64-nosb.json, which is the firmware descriptor we want.

This limitation has been addressed by

  commit 10a8997cbb402f7edb9f970af70feee2fc256a1c
  Author: Andrea Bolognani <abologna>
  Date:   Tue May 16 19:50:50 2023 +0200

    conf: Don't default to raw format for loader/NVRAM

    Due to the way the information is stored by the XML parser, we've
    had this quirk where specifying any information about the loader
    or NVRAM would implicitly set its format to raw. That is,

      <nvram>/path/to/guest_VARS.fd</nvram>

    would effectively be interpreted as

      <nvram format='raw'>/path/to/guest_VARS.fd</nvram>

    forcing the use of raw format firmware even when qcow2 format
    would normally be preferred based on the ordering of firmware
    descriptors. This behavior can be worked around in a number of
    ways, but it's fairly unintuitive.

    In order to remove this quirk, move the selection of the default
    firmware format from the parser down to the individual drivers.
    Most drivers only support raw firmware images, so they can
    unconditionally set the format early and be done with it; the
    QEMU driver, however, supports multiple formats, and so in that
    case we want this default to be applied as late as possible,
    when we have already ruled out the possibility of using qcow2
    formatted firmware images.

    Signed-off-by: Andrea Bolognani <abologna>
    Reviewed-by: Michal Privoznik <mprivozn>

  https://gitlab.com/libvirt/libvirt/-/commit/10a8997cbb402f7edb9f970af70feee2fc256a1c

At the time I thought I was simply improving the user experience for a fairly narrow corner case, but it's now clear to me that this change is what allows the XML generated by virt-manager in this scenario to be interpreted correctly.

In retrospect, I sure am glad that I didn't resist the urge to scratch this specific itch! O:-)
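(For completeness, one of the "number of ways" the 9.6.0 behavior can presumably be worked around - untested here - is making the format explicit via virsh edit, so that the qcow2 descriptor matches:

  <loader readonly='yes' type='pflash' format='qcow2'>/usr/share/edk2/ovmf/OVMF_CODE_4M.secboot.qcow2</loader>
)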
(In reply to Geraldo Simião from comment #26)
> I installed libvirt for the libvirtd.service to run
>
> Because gnome boxes works fine but virt-manager need the libvirtd.servie
> runnning

Note that it's technically not necessary to have the libvirt-daemon package installed: Fedora uses modular daemons by default, and virt-manager is able to connect directly to virtqemud without going through the legacy monolithic daemon (libvirtd).

However, the message that is presented on startup is misleading:

  The libvirtd service doesn't appear to be installed. Install and
  run the libvirtd service to manage virtualization on this host.

  A virtualization connection can be manually added via File -> Add
  Connection.

This is not going to be a big problem in practice because libvirt 9.7.0 contains

  commit aa5895cbc72bd9b4bb1ce99e231b2ac4b25db9c4
  Author: Andrea Bolognani <abologna>
  Date:   Wed Aug 30 17:45:47 2023 +0200

    rpm: Recommend libvirt-daemon for with_modular_daemons distros

    A default deployment on modern distros uses modular daemons but
    switching back to the monolithic daemon, while not recommended,
    is still considered a perfectly valid option.

    For a monolithic daemon deployment, the upgrade to libvirt 9.2.0
    or newer works as expected; a subsequent call to dnf autoremove,
    however, results in the libvirt-daemon package being removed and
    the deployment no longer working.

    In order to avoid that situation, mark the libvirt-daemon as
    recommended. This will unfortunately result in it being included
    in most installations despite not being necessary, but
    considering that the alternative is breaking existing setups on
    upgrade it feels like a reasonable tradeoff.

    Moreover, since the dependency on libvirt-daemon is just a weak
    one, it's still possible for people looking to minimize the
    footprint of their installation to manually remove the package
    after installation, mitigating the drawbacks of this approach.

    https://bugzilla.redhat.com/show_bug.cgi?id=2232805

    Signed-off-by: Andrea Bolognani <abologna>
    Reviewed-by: Erik Skultety <eskultet>
    Reviewed-by: Daniel P. Berrangé <berrange>

  https://gitlab.com/libvirt/libvirt/-/commit/aa5895cbc72bd9b4bb1ce99e231b2ac4b25db9c4

which means that libvirt-daemon is going to be present on the system unless the user manually uninstalled it.

That said, it would be nice if virt-manager didn't show a misleading message in the first place. Cole, do you think it would be a lot of work to make it detect virtqemud in addition to libvirtd?
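For the record, the modular-daemon path can be sanity-checked without libvirtd along these lines (socket name as shipped on Fedora):

  systemctl status virtqemud.socket
  virt-manager --connect qemu:///system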
FEDORA-2023-57fd2e3393 has been pushed to the Fedora 39 stable repository. If problem still persists, please make note of it in this bug report.
(In reply to Andrea Bolognani from comment #35)
> Note that it's technically not necessary to have the libvirt-daemon
> package installed: Fedora uses modular daemons by default, and
> virt-manager is able to connect directly to virtqemud without going
> through the legacy monolithic daemon (libvirtd).
>
> However, the message that is presented on startup is misleading:
>
>   The libvirtd service doesn't appear to be installed. Install and
>   run the libvirtd service to manage virtualization on this host.
>
>   A virtualization connection can be manually added via File -> Add
>   Connection.

Thanks for catching that.

A lot of the messaging virt-manager has around this is out of date. I dropped some of the blocking checks and tried to generalize the error messages:

https://github.com/virt-manager/virt-manager/commit/775edfd5dc668c26ffbdf07f6404ca80d91c3a3a
(In reply to Cole Robinson from comment #37)
> Thanks for catching that.
>
> A lot of the messaging virt-manager has around this is out of date. I
> dropped some of the blocking checks and tried to generalize the error
> messages:
>
> https://github.com/virt-manager/virt-manager/commit/775edfd5dc668c26ffbdf07f6404ca80d91c3a3a

Looks good, thanks a lot Cole!