We have at least four cases of users on ask.fedora who cannot create virtual machines when SWTPM is used and SELinux is enforcing: if SWTPM is not used, or if SELinux is disabled, everything works fine. They can start virtual machines with SWTPM if those already exist, so the issue is limited to creating them. This issue is similar to BZ #2272971 (we originally thought it was the same), but the solution of #2272971 does not work for these users, and in all cases it is an SWTPM issue. Just like #2272971, the issue first occurred after upgrading to F40. Although the behavior seems to be the same in all cases, the denials are similar but not identical. I assume the difference in the denials lies in the configuration of the host, or maybe in the virtualization tool used (in at least one case it is definitely `virt-manager`). Based on the `ausearch` outputs of the users, the manifestations can be separated into four similar but not identical bunches of SELinux denial entries:

1) only a denial with `comm=qemu-img` (related ask.fedora topic: [1])
2) denials with `comm=qemu-img`, `comm=rpc-virtqemud`, and `comm=swtpm` (related ask.fedora topic: [2])
3) denials with `comm="rpc-virtqemud"` and `comm="swtpm"` (related ask.fedora topic: [3])
4) only a denial with `comm=swtpm` (related ask.fedora topic: [4] -> I asked the user not to open yet another topic since it widely resembles the same issue; so [3] and [4] are posts on the same page, on which we have two manifestations documented)

Based on the reports we have so far, the behavior is the same in all cases.
Here are the `ausearch -i -m avc,user_avc,selinux_err,user_selinux_err -ts today` outputs of the four manifestations:

[1]
```
type=AVC msg=audit(04/24/2024 14:19:15.239:260) : avc: denied { create } for pid=4518 comm=qemu-img anonclass=[io_uring] scontext=system_u:system_r:virtstoraged_t:s0 tcontext=system_u:object_r:io_uring_t:s0 tclass=anon_inode permissive=1
```

[2]
```
----
type=AVC msg=audit(04/30/2024 20:34:01.442:238) : avc: denied { create } for pid=3573 comm=qemu-img anonclass=[io_uring] scontext=system_u:system_r:virtstoraged_t:s0 tcontext=system_u:object_r:io_uring_t:s0 tclass=anon_inode permissive=1
----
type=AVC msg=audit(04/30/2024 20:34:01.442:239) : avc: denied { map } for pid=3573 comm=qemu-img path=anon_inode:[io_uring] dev="anon_inodefs" ino=29206 scontext=system_u:system_r:virtstoraged_t:s0 tcontext=system_u:object_r:io_uring_t:s0 tclass=anon_inode permissive=1
----
type=AVC msg=audit(04/30/2024 20:34:01.442:240) : avc: denied { read write } for pid=3573 comm=qemu-img path=anon_inode:[io_uring] dev="anon_inodefs" ino=29206 scontext=system_u:system_r:virtstoraged_t:s0 tcontext=system_u:object_r:io_uring_t:s0 tclass=anon_inode permissive=1
----
type=AVC msg=audit(04/30/2024 20:34:04.104:244) : avc: denied { add_name } for pid=1719 comm=rpc-virtqemud name=win11-swtpm.log scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=dir permissive=1
----
type=AVC msg=audit(04/30/2024 20:34:04.105:245) : avc: denied { create } for pid=1719 comm=rpc-virtqemud name=win11-swtpm.log scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file permissive=1
----
type=AVC msg=audit(04/30/2024 20:34:04.105:246) : avc: denied { write } for pid=1719 comm=rpc-virtqemud path=/var/log/swtpm/libvirt/qemu/win11-swtpm.log dev="dm-0" ino=365568 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file permissive=1
----
type=AVC msg=audit(04/30/2024 20:34:04.105:247) : avc: denied { setattr } for pid=1719 comm=rpc-virtqemud name=win11-swtpm.log dev="dm-0" ino=365568 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file permissive=1
----
type=AVC msg=audit(04/30/2024 20:34:04.130:249) : avc: denied { getattr } for pid=1719 comm=rpc-virtqemud name=/ dev="dm-0" ino=256 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:fs_t:s0 tclass=filesystem permissive=1
----
type=AVC msg=audit(04/30/2024 20:34:04.139:250) : avc: denied { open } for pid=3586 comm=swtpm path=/var/log/swtpm/libvirt/qemu/win11-swtpm.log dev="dm-0" ino=365568 scontext=system_u:system_r:swtpm_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file permissive=0
```

[3]
```
time->Thu Apr 25 18:01:14 2024
type=AVC msg=audit(1714093274.173:279): avc: denied { relabelfrom } for pid=6652 comm="rpc-virtqemud" name="1-fedora39-40-TPM-Upg" dev="tmpfs" ino=2915 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:virt_var_run_t:s0 tclass=dir permissive=1
----
time->Thu Apr 25 18:01:14 2024
type=AVC msg=audit(1714093274.226:281): avc: denied { add_name } for pid=6422 comm="rpc-virtqemud" name="fedora39-40-TPM-Upg-swtpm.log" scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=dir permissive=1
----
time->Thu Apr 25 18:01:14 2024
type=AVC msg=audit(1714093274.226:282): avc: denied { create } for pid=6422 comm="rpc-virtqemud" name="fedora39-40-TPM-Upg-swtpm.log" scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file permissive=1
----
time->Thu Apr 25 18:01:14 2024
type=AVC msg=audit(1714093274.226:283): avc: denied { write } for pid=6422 comm="rpc-virtqemud" path="/var/log/swtpm/libvirt/qemu/fedora39-40-TPM-Upg-swtpm.log" dev="dm-0" ino=878537 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file permissive=1
----
time->Thu Apr 25 18:01:14 2024
type=AVC msg=audit(1714093274.226:284): avc: denied { setattr } for pid=6422 comm="rpc-virtqemud" name="fedora39-40-TPM-Upg-swtpm.log" dev="dm-0" ino=878537 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file permissive=1
----
time->Thu Apr 25 18:01:14 2024
type=AVC msg=audit(1714093274.272:286): avc: denied { getattr } for pid=6422 comm="rpc-virtqemud" name="/" dev="dm-0" ino=256 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:fs_t:s0 tclass=filesystem permissive=1
----
time->Thu Apr 25 18:01:14 2024
type=AVC msg=audit(1714093274.283:287): avc: denied { open } for pid=6659 comm="swtpm" path="/var/log/swtpm/libvirt/qemu/fedora39-40-TPM-Upg-swtpm.log" dev="dm-0" ino=878537 scontext=system_u:system_r:swtpm_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file permissive=0
----
```

[4]
```
----
type=AVC msg=audit(01/05/24 09:26:02.155:584) : avc: denied { write } for pid=113794 comm=swtpm path=/run/libvirt/qemu/swtpm/1-MyVM-swtpm.pid dev="tmpfs" ino=4456 scontext=system_u:system_r:swtpm_t:s0 tcontext=system_u:object_r:qemu_var_run_t:s0 tclass=file permissive=0
----
type=AVC msg=audit(01/05/24 09:26:02.157:585) : avc: denied { write } for pid=113794 comm=swtpm name=swtpm dev="tmpfs" ino=4242 scontext=system_u:system_r:swtpm_t:s0 tcontext=system_u:object_r:qemu_var_run_t:s0 tclass=dir permissive=0
```

The user with manifestation [2] provided a `journalctl` capture in which they booted, provoked the issue, and then stored the output: see link [5]. Further `journalctl` and `ausearch -i -m avc,user_avc,selinux_err,user_selinux_err -ts today` outputs with the issue logged are available in the ask.fedora topics.
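For quick triage of reports like the above, it can help to reduce each dump to its distinct (scontext, tcontext, tclass) combinations so the four manifestations can be compared at a glance. A minimal, self-contained sketch (the two sample entries below are abbreviated copies of denials from this report; the file path `/tmp/avc-sample.txt` is just for illustration):

```shell
# Summarize which (scontext, tcontext, tclass) combinations occur in a saved
# ausearch dump. On a real system you would feed it the actual ausearch output
# instead of this embedded sample.
cat > /tmp/avc-sample.txt <<'EOF'
type=AVC msg=audit(...) : avc: denied { open } for pid=3586 comm=swtpm path=/var/log/swtpm/libvirt/qemu/win11-swtpm.log scontext=system_u:system_r:swtpm_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file permissive=0
type=AVC msg=audit(...) : avc: denied { write } for pid=1719 comm=rpc-virtqemud path=/var/log/swtpm/libvirt/qemu/win11-swtpm.log scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file permissive=1
EOF
# Extract and count the unique context/class triples:
grep -o 'scontext=[^ ]* tcontext=[^ ]* tclass=[^ ]*' /tmp/avc-sample.txt | sort | uniq -c
```

Note that in all four manifestations the target type of the swtpm-related denials is the generic `var_log_t`, which suggests the per-VM log files under /var/log/swtpm/libvirt/qemu end up mislabeled rather than swtpm itself misbehaving.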
[1] https://discussion.fedoraproject.org/t/unable-to-create-new-virt-manager-vm-with-software-tpm-on-fedora-40/114254/4 (in this topic, we originally assumed it was the same as #2272971)
[2] https://discussion.fedoraproject.org/t/creating-new-vm-with-tpm-using-virt-manager-results-in-selinux-related-error/114917
[3] https://discussion.fedoraproject.org/t/tpm-does-not-work-virt-manager-fedora-40/114455
[4] https://discussion.fedoraproject.org/t/tpm-does-not-work-virt-manager-fedora-40/114455/10 (this links to a later post on the page of [3]; this page contains both manifestations)
[5] https://easyupload.io/m/e1dei3 (full journalctl output of a boot & ausearch of [2])

Reproducible: Always

Steps to Reproduce:
See details above; in short:
1. Use F40.
2. Try to create a VM with SWTPM.

Actual Results:
Error due to SELinux denials.

Expected Results:
The new VM with SWTPM is created successfully.
Supplement: the user of [2] (from whom the logs in [5] are) just reported that they re-installed F40 and now it works for them. It has to be noted that this user already had the issue on a fresh F40 installation, so the issue has definitely occurred both after upgrades and on fresh installs. However, the user noted that their new installation differs from the first one: see post https://discussion.fedoraproject.org/t/creating-new-vm-with-tpm-using-virt-manager-results-in-selinux-related-error/114917/8 A difference I consider a possible reason the issue got solved is that they updated all packages as of today, thus installing new packages that were not present during the last test. I have asked the other users to also do another update with `--refresh` and to report here whether an update now solves their issue.
Supplement to the supplement: the same user now reports that it no longer works (again): according to their current report, it worked only immediately after running `sudo dnf group install --with-optional virtualization`; after a reboot, it no longer worked. The issue is again permanent / always reproducible. They elaborated here (and also provided a current ausearch and a current journalctl extract): https://discussion.fedoraproject.org/t/creating-new-vm-with-tpm-using-virt-manager-results-in-selinux-related-error/114917/10
I cannot reproduce the issue that users are seeing. I have a machine that has been upgraded to Fedora 40. I can start existing VMs and define new ones. All operations are done using 'virsh'. Example for creating a new VM, where /var/lib/libvirt/images/TestVM.qcow2 was created beforehand:

```
<domain type='kvm'>
  <name>TestVM</name>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>8</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-3.0'>hvm</type>
    <bootmenu enable='no'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <vmport state='off'/>
  </features>
  <cpu mode='host-model' check='full'/>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/TestVM.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/>
    </controller>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <interface type='network'>
      <mac address='52:54:00:ae:f0:fa'/>
      <source network='default'/>
      <model type='rtl8139'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <tpm model='tpm-tis'>
      <backend type='emulator' version='2.0'/>
    </tpm>
    <graphics type='vnc' port='-1' autoport='yes'>
      <listen type='address'/>
    </graphics>
    <audio id='1' type='none'/>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'/>
</domain>
```

```
# virsh define TestVM.xml
# virsh start TestVM
Domain 'TestVM' started
# ps auxZ | grep TestVM
system_u:system_r:svirt_t:s0:c630,c867 tss 334705 0.6 0.0 11424 6656 ? S 07:53 0:00 /usr/bin/swtpm socket --ctrl type=unixio,path=/run/libvirt/qemu/swtpm/1-TestVM-swtpm.sock,mode=0600 --tpmstate dir=/var/lib/libvirt/swtpm/2318b397-0049-4fc7-ac8e-c8f438bf51b5/tpm2,mode=0600 --log file=/var/log/swtpm/libvirt/qemu/TestVM-swtpm.log --terminate --tpm2
system_u:system_r:svirt_t:s0:c630,c867 qemu 334715 102 0.7 3436572 386628 ? Sl 07:53 0:24 /bin/qemu-system-x86_64 -name guest=TestVM,debug-threads=on -S -object {"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-TestVM/master-key.aes"} [...]
# getenforce
Enforcing
# cat /etc/fedora-release
Fedora release 40 (Forty)
# rpm -q -a | grep swtpm
swtpm-libs-0.8.1-5.fc40.x86_64
swtpm-0.8.1-5.fc40.x86_64
swtpm-selinux-0.8.1-5.fc40.noarch
swtpm-tools-0.8.1-5.fc40.x86_64
swtpm-devel-0.8.1-5.fc40.x86_64
# rpm -q -a | grep libvirt
libvirt-libs-10.1.0-1.fc40.x86_64
libvirt-daemon-lock-10.1.0-1.fc40.x86_64
libvirt-daemon-log-10.1.0-1.fc40.x86_64
libvirt-daemon-plugin-lockd-10.1.0-1.fc40.x86_64
libvirt-daemon-proxy-10.1.0-1.fc40.x86_64
python3-libvirt-10.1.0-1.fc40.x86_64
libvirt-client-qemu-10.1.0-1.fc40.x86_64
libvirt-glib-5.0.0-3.fc40.x86_64
libvirt-client-10.1.0-1.fc40.x86_64
libvirt-daemon-common-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-core-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-network-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-nwfilter-10.1.0-1.fc40.x86_64
libvirt-daemon-config-network-10.1.0-1.fc40.x86_64
libvirt-daemon-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-interface-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-nodedev-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-secret-10.1.0-1.fc40.x86_64
libvirt-daemon-config-nwfilter-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-lxc-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-disk-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-gluster-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-iscsi-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-iscsi-direct-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-logical-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-mpath-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-rbd-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-scsi-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-libxl-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-vbox-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-zfs-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-qemu-10.1.0-1.fc40.x86_64
libvirt-daemon-kvm-10.1.0-1.fc40.x86_64
libvirt-10.1.0-1.fc40.x86_64
```

Starting an existing VM with an attached vTPM also works:

```
# virsh start Fedora28_ClevisTang
Domain 'Fedora28_ClevisTang' started
# ps auxZ | grep Fedora28 | grep -v grep
system_u:system_r:svirt_t:s0:c9,c109 tss 334840 0.4 0.0 11064 6656 ? S 07:59 0:00 /usr/bin/swtpm socket --ctrl type=unixio,path=/run/libvirt/qemu/swtpm/2-Fedora28_ClevisTang-swtpm.sock,mode=0600 --tpmstate dir=/var/lib/libvirt/swtpm/0b39eaf3-8967-4750-a6a3-962d7e280013/tpm2,mode=0600 --log file=/var/log/swtpm/libvirt/qemu/Fedora28_ClevisTang-swtpm.log --terminate --tpm2
system_u:system_r:svirt_t:s0:c9,c109 qemu 334849 92.1 0.5 3058072 295668 ? Sl 07:59 0:15 /bin/qemu-system-x86_64 -name guest=Fedora28_ClevisTang,debug-threads=on -S -object {"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-2-Fedora28_ClevisTang/master-key.aes"} [...]
```

Can you please let me know how the users create VMs, with the exact reproducible steps including domain XML and command lines? In the meantime I will see what I can do on a freshly installed Fedora 40, where I will also use virsh to define and start VMs.
(In reply to Stefan Berger from comment #3)
> Can you please let me know how the users create VMs with the exact
> reproducible steps including domain XML and command lines. In the meantime I
> will see what I can do on a freshly installed Fedora 40 where I will also
> use virsh to define and start VMs.

Hello, I'm the user from the Fedora Discussion forum who reported that creating a VM with TPM only works right after installing the virtualization tools, before the first reboot. So, using your VM config and commands I get:

```
$ virsh define TestVM.xml
Domain 'TestVM' defined from TestVM.xml
$ virsh start TestVM
error: Failed to start domain 'TestVM'
error: Network not found: no network with matching name 'default'
```

Specifying a connection:

```
$ virsh --connect qemu:///system define TestVM.xml
Domain 'TestVM' defined from TestVM.xml
$ virsh --connect qemu:///system start TestVM
error: Failed to start domain 'TestVM'
error: internal error: Could not run '/usr/bin/swtpm_setup'. exitstatus: 1;
Check error log '/var/log/swtpm/libvirt/qemu/TestVM-swtpm.log' for details.
$ sudo cat /var/log/swtpm/libvirt/qemu/TestVM-swtpm.log
swtpm at /usr/bin/swtpm does not support TPM 2
```
I have not been able to reproduce the issue on a newly installed Fedora 40 system, either, and a reboot of the system did NOT change the outcome of the following test: I was also able to define and start a very simple VM.

```
<domain type='kvm'>
  <name>PLAIN-TPM-VM</name>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>512288</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='hd'/>
    <bootmenu enable='yes'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact' check='none'>
    <model fallback='forbid'>qemu64</model>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/plainvm.raw'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <tpm model='tpm-tis'>
      <backend type='emulator' version='2.0'/>
    </tpm>
    <graphics type='vnc' port='-1' autoport='yes'>
      <listen type='address'/>
    </graphics>
    <audio id='1' type='none'/>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </memballoon>
  </devices>
</domain>
```

```
# virsh define plainvm.xml
Domain 'PLAIN-TPM-VM' defined from plainvm.xml
# virsh start PLAIN-TPM-VM
Domain 'PLAIN-TPM-VM' started
# ps auxZ | grep PLAIN | grep -v grep
system_u:system_r:svirt_t:s0:c755,c802 tss 17399 1.6 0.3 11064 6568 ? S 07:53 0:00 /usr/bin/swtpm socket --ctrl type=unixio,path=/run/libvirt/qemu/swtpm/6-PLAIN-TPM-VM-swtpm.sock,mode=0600 --tpmstate dir=/var/lib/libvirt/swtpm/19945789-8507-40e1-b3d1-cec1dc500d44/tpm2,mode=0600 --log file=/var/log/swtpm/libvirt/qemu/PLAIN-TPM-VM-swtpm.log --terminate --tpm2
system_u:system_r:svirt_t:s0:c755,c802 qemu 17401 80.9 2.2 2558216 45208 ? Sl 07:53 0:03 /usr/bin/qemu-system-x86_64 -name guest=PLAIN-TPM-VM,debug-threads=on -S -object {"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-6-PLAIN-TPM-VM/master-key.aes"} [...]
# getenforce
Enforcing
# cat /etc/fedora-release
Fedora release 40 (Forty)
# rpm -q -a | grep swtpm
swtpm-libs-0.8.1-5.fc40.x86_64
swtpm-0.8.1-5.fc40.x86_64
swtpm-selinux-0.8.1-5.fc40.noarch
swtpm-tools-0.8.1-5.fc40.x86_64
# rpm -q -a | grep libvirt
libvirt-libs-10.1.0-1.fc40.x86_64
libvirt-client-10.1.0-1.fc40.x86_64
libvirt-daemon-common-10.1.0-1.fc40.x86_64
libvirt-daemon-log-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-qemu-10.1.0-1.fc40.x86_64
libvirt-daemon-lock-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-nwfilter-10.1.0-1.fc40.x86_64
libvirt-daemon-config-nwfilter-10.1.0-1.fc40.x86_64
libvirt-daemon-plugin-lockd-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-libxl-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-core-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-disk-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-iscsi-direct-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-logical-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-mpath-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-rbd-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-scsi-10.1.0-1.fc40.x86_64
python3-libvirt-10.1.0-1.fc40.x86_64
libvirt-client-qemu-10.1.0-1.fc40.x86_64
libvirt-daemon-proxy-10.1.0-1.fc40.x86_64
libvirt-daemon-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-vbox-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-secret-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-interface-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-gluster-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-nodedev-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-iscsi-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-network-10.1.0-1.fc40.x86_64
libvirt-daemon-config-network-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-lxc-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-zfs-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-10.1.0-1.fc40.x86_64
libvirt-10.1.0-1.fc40.x86_64
```

My suggestion at this point is to follow the suggestion here: https://bugzilla.redhat.com/show_bug.cgi?id=2272971#c27 That is, update the system to the latest versions (especially the SELinux policy) and see whether this resolves the issue. For the above experiment I had the following policy installed:

```
# rpm -q -a | grep selinux-poli
selinux-policy-40.13-1.fc40.noarch
selinux-policy-targeted-40.13-1.fc40.noarch
```
We have already suggested that the affected users test all approaches of #2272971. Some additionally tried autorelabeling, which was an earlier suggestion in #2272971. But for the users who have issues only when swtpm is used, none of these approaches solved the issue. That's why I opened this ticket. I have asked the users in all three topics to provide the information you asked for about the exact steps and tools here, so that you can evaluate it [1] [2] [3]. I suggest waiting a day or two to allow some of them to provide the details. Further, although I would be surprised if virsh could not provoke the issue while other tools do, it may nevertheless make sense to test the tools of the users we already know about, even if some of these tools are assumed to do the same as virsh in the backend: in all three topics, we have users complaining about the issue when using virt-manager. So I expect that these users have gone the whole way from setting up the environment to creating and using virtual machines with virt-manager.

[1] https://discussion.fedoraproject.org/t/tpm-does-not-work-virt-manager-fedora-40/114455/18
[2] https://discussion.fedoraproject.org/t/unable-to-create-new-virt-manager-vm-with-software-tpm-on-fedora-40/114254/23
[3] https://discussion.fedoraproject.org/t/creating-new-vm-with-tpm-using-virt-manager-results-in-selinux-related-error/114917/13
I am one of those who have been having issues starting VMs that use TPM. I checked the installed SELinux policy and it was version 40.17-1. I installed version 40.13-1 and the issue is resolved; all VMs now start correctly, even those using TPM.
Interesting. bug2k24 is case number 4, which was added today: the one where only the isolated `comm=swtpm` denial but none of the other denials was logged. What tool did/do you use, if I may ask?

Further, another user just reported having solved the issue by downgrading the swtpm packages from swtpm-0.8.1-5.fc40 to swtpm-0.8.1-4.fc40 (swtpm, swtpm-libs, swtpm-tools, swtpm-selinux) -> see their elaboration in [1]

[1] https://discussion.fedoraproject.org/t/creating-new-vm-with-tpm-using-virt-manager-results-in-selinux-related-error/114917/14
(In reply to bug2k24 from comment #7)
> I am one that has been having issues starting VMs that use tpm.
> I have checked the SELinux policy that is installed and it was version
> 40.17-1. I have installed version 40.13-1 and the issue is resolved, all VMs
> now start correctly even if using TPM.

So you downgraded the SELinux policy from 40.17-1 to 40.13-1 to make it work?

I now UPGRADED my system to 40.17-1 and I see the same issue now:

```
# virsh start PLAIN-TPM-VM
error: Failed to start domain 'PLAIN-TPM-VM'
error: internal error: Could not run '/usr/bin/swtpm_setup'. exitstatus: 1;
Check error log '/var/log/swtpm/libvirt/qemu/PLAIN-TPM-VM-swtpm.log' for details.
# tail -n2 /var/log/swtpm/libvirt/qemu/PLAIN-TPM-VM-swtpm.log
swtpm at /usr/bin/swtpm does not support TPM 2
swtpm at /usr/bin/swtpm does not support TPM 2
```

At this point I too can only downgrade the SELinux policy to fix the issue:

```
# dnf downgrade selinux-policy
Last metadata expiration check: 3:01:36 ago on Wed 01 May 2024 05:35:59 AM CDT.
[... 40.13-1 is being installed ]
```

And now it starts again:

```
# virsh start PLAIN-TPM-VM
Domain 'PLAIN-TPM-VM' started
```

So, something is broken in selinux-policy-40.17-1.fc40.noarch.
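A note on the misleading "swtpm at /usr/bin/swtpm does not support TPM 2" message: swtpm_setup determines TPM 2 support by probing the swtpm binary's capabilities (on a live system, `swtpm socket --print-capabilities`). The hedged assumption here is that when the probe fails for an unrelated reason, such as SELinux blocking the log file, swtpm_setup misreports it as missing TPM 2 support. A self-contained sketch of the kind of check involved, using a sample (abbreviated, illustrative) capabilities JSON rather than a real probe:

```shell
# Sample of what `swtpm socket --print-capabilities` emits on a working host
# (abbreviated and illustrative; the real output has more fields/features):
cat > /tmp/swtpm-caps.json <<'EOF'
{ "type": "swtpm", "features": [ "tpm-1.2", "tpm-2.0", "cmdarg-seccomp" ] }
EOF
# The decisive check is essentially "is tpm-2.0 among the features":
if grep -q '"tpm-2.0"' /tmp/swtpm-caps.json; then
    echo "TPM 2 supported"
else
    echo "swtpm does not support TPM 2"
fi
```

So the log line above points at the probe failing, not at the installed swtpm binary actually lacking TPM 2.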
(In reply to Christopher Klooz from comment #8)
> Interesting. bug2k24 was the case number 4 that was added today, where only
> the isolated "comm=swtpm" denial but none of the other denials was logged.
> What tool did/do you use if I may ask?
>
> Further, another user just reported to have solved the issue by downgrading
> the packages from swtpm-0.8.1-5.fc40 to swtpm-0.8.1-4.fc40 (swtpm,
> swtpm-lib, swtpm-tools, swtpm-selinux) -> see his elaboration in [1]

Hm, not everybody can downgrade:

```
# rpm -q -a | grep swtpm
swtpm-libs-0.8.1-5.fc40.x86_64
swtpm-0.8.1-5.fc40.x86_64
swtpm-selinux-0.8.1-5.fc40.noarch
swtpm-tools-0.8.1-5.fc40.x86_64
# dnf downgrade swtpm\*
Last metadata expiration check: 0:08:09 ago on Wed 01 May 2024 08:38:28 AM CDT.
Package swtpm-tools of lowest version already installed, cannot downgrade it.
Package swtpm of lowest version already installed, cannot downgrade it.
Package swtpm-libs of lowest version already installed, cannot downgrade it.
Package swtpm-selinux of lowest version already installed, cannot downgrade it.
Dependencies resolved.
Nothing to do.
Complete!
```

How did he manage to downgrade?
I downgraded from version 40.17-1 to version 40.13-1 and this resolved the issue. Virt-manager was used to create and start the VMs.
I downloaded the rpm from here https://fedora.pkgs.org/40/fedora-aarch64/selinux-policy-40.13-1.fc40.noarch.rpm.html and installed using dnf.
(In reply to Stefan Berger from comment #5)

I investigated a bit further and found this: after the system boots, the libvirtd.service unit is inactive despite being enabled with 'sudo systemctl enable libvirtd'. Starting it manually with 'sudo systemctl start libvirtd' eliminates the problem with TPM until the next boot. There is nothing special in the libvirtd logs: it shuts down gracefully and then simply doesn't start on boot:

```
$ sudo journalctl -u libvirtd
May 01 15:44:14 lab systemd[1]: Starting libvirtd.service - libvirt legacy monolithic daemon...
May 01 15:44:14 lab systemd[1]: Started libvirtd.service - libvirt legacy monolithic daemon.
May 01 15:45:43 lab systemd[1]: Stopping libvirtd.service - libvirt legacy monolithic daemon...
May 01 15:45:44 lab systemd[1]: libvirtd.service: Deactivated successfully.
May 01 15:45:44 lab systemd[1]: Stopped libvirtd.service - libvirt legacy monolithic daemon.
May 01 15:45:44 lab systemd[1]: libvirtd.service: Consumed 1.925s CPU time, 54.1M memory peak, 0B memory swap peak.
```

Autorelabeling doesn't seem to help, BTW. Here is some info about the packages:

```
$ sudo dnf update --refresh
Brave Browser                                       18 kB/s | 3.3 kB 00:00
Fedora 40 - x86_64                                  29 kB/s |  23 kB 00:00
Fedora 40 openh264 (From Cisco) - x86_64           2.5 kB/s | 989 B  00:00
Fedora 40 - x86_64 - Updates                        42 kB/s |  20 kB 00:00
RPM Fusion for Fedora 40 - Nonfree - NVIDIA Driver  16 kB/s | 6.6 kB 00:00
Dependencies resolved.
Nothing to do.
Complete!
```
```
$ sudo rpm -q -a | grep swtpm
swtpm-libs-0.8.1-5.fc40.x86_64
swtpm-0.8.1-5.fc40.x86_64
swtpm-selinux-0.8.1-5.fc40.noarch
swtpm-tools-0.8.1-5.fc40.x86_64
$ sudo rpm -q -a | grep libvirt
libvirt-libs-10.1.0-1.fc40.x86_64
libvirt-client-10.1.0-1.fc40.x86_64
libvirt-daemon-common-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-core-10.1.0-1.fc40.x86_64
libvirt-daemon-lock-10.1.0-1.fc40.x86_64
libvirt-daemon-log-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-qemu-10.1.0-1.fc40.x86_64
libvirt-daemon-plugin-lockd-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-network-10.1.0-1.fc40.x86_64
libvirt-daemon-config-network-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-secret-10.1.0-1.fc40.x86_64
libvirt-daemon-proxy-10.1.0-1.fc40.x86_64
libvirt-glib-5.0.0-3.fc40.x86_64
libvirt-daemon-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-disk-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-gluster-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-iscsi-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-iscsi-direct-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-logical-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-mpath-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-rbd-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-scsi-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-interface-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-nodedev-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-nwfilter-10.1.0-1.fc40.x86_64
python3-libvirt-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-zfs-10.1.0-1.fc40.x86_64
libvirt-daemon-driver-storage-10.1.0-1.fc40.x86_64
libvirt-daemon-kvm-10.1.0-1.fc40.x86_64
$ sudo rpm -q -a | grep selinux-poli
selinux-policy-40.17-1.fc40.noarch
selinux-policy-targeted-40.17-1.fc40.noarch
```
(In reply to Stefan Berger from comment #10)
> Hm, not everybody can downgrade:
>
> # rpm -q -a | grep swtpm
> swtpm-libs-0.8.1-5.fc40.x86_64
> swtpm-0.8.1-5.fc40.x86_64
> swtpm-selinux-0.8.1-5.fc40.noarch
> swtpm-tools-0.8.1-5.fc40.x86_64
>
> # dnf downgrade swtpm\*
> Package swtpm-tools of lowest version already installed, cannot downgrade it.
> Package swtpm of lowest version already installed, cannot downgrade it.
> Package swtpm-libs of lowest version already installed, cannot downgrade it.
> Package swtpm-selinux of lowest version already installed, cannot downgrade it.
>
> How did he manage to downgrade?

I gave them the link to this report and asked them to provide further information here; I'm trying to get rid of the bottleneck of going through ask.fedora :) However, they already said they used koji (https://koji.fedoraproject.org/koji/buildinfo?buildID=2389866). I assume they downloaded the packages from koji, or just passed dnf the links, which also works (i.e., use the link in the dnf command instead of the file name; dnf can handle that).

So I assume the issue is either in selinux-policy-40.17-1.fc40.noarch or in swtpm-0.8.1-5.fc40 (in the latter case, I assume it is swtpm-selinux-0.8.1-5.fc40.noarch, since downgrading to 0.8.1-4 reportedly helped). The two seem to collide.
Downgrading the swtpm packages alone does NOT resolve the issue for me when using selinux-policy-40.17-1.fc40.noarch:

```
# rpm -Uvh --oldpackage \
  https://kojipkgs.fedoraproject.org//packages/swtpm/0.8.1/4.fc40/x86_64/swtpm-0.8.1-4.fc40.x86_64.rpm \
  https://kojipkgs.fedoraproject.org//packages/swtpm/0.8.1/4.fc40/x86_64/swtpm-libs-0.8.1-4.fc40.x86_64.rpm \
  https://kojipkgs.fedoraproject.org//packages/swtpm/0.8.1/4.fc40/x86_64/swtpm-tools-0.8.1-4.fc40.x86_64.rpm \
  https://kojipkgs.fedoraproject.org//packages/swtpm/0.8.1/4.fc40/noarch/swtpm-selinux-0.8.1-4.fc40.noarch.rpm
# virsh start PLAIN-TPM-VM
error: Failed to start domain 'PLAIN-TPM-VM'
error: internal error: Could not run '/usr/bin/swtpm_setup'. exitstatus: 1; Check error log '/var/log/swtpm/libvirt/qemu/PLAIN-TPM-VM-swtpm.log' for details.
```

It only works again after downgrading selinux-policy to 40.13-1:

```
# dnf downgrade selinux-policy
[...]
# virsh define plainvm.xml
Domain 'PLAIN-TPM-VM' defined from plainvm.xml
# virsh start PLAIN-TPM-VM
Domain 'PLAIN-TPM-VM' started
# rpm -q -a | grep swtpm
swtpm-libs-0.8.1-4.fc40.x86_64
swtpm-0.8.1-4.fc40.x86_64
swtpm-selinux-0.8.1-4.fc40.noarch
swtpm-tools-0.8.1-4.fc40.x86_64
```

So something already changed between selinux-policy 40.13 and 40.15. I will look at what needs to change in swtpm's SELinux policy.
My upgraded system (from comment 3 https://bugzilla.redhat.com/show_bug.cgi?id=2278123#c3 ) is running selinux-policy-40.17-1 and there the starting of newly defined VMs works just fine and requires no update to the policy. How can this be? Anyway, a candidate for a policy update is here now: https://github.com/stefanberger/swtpm/pull/850 I will build patched swtpm packages later today or tomorrow after some more testing.
(In reply to Stefan Berger from comment #16)
> My upgraded system (from comment 3
> https://bugzilla.redhat.com/show_bug.cgi?id=2278123#c3 ) is running
> selinux-policy-40.17-1 and there the starting of newly defined VMs works
> just fine and requires no update to the policy. How can this be?

Have you restarted the system since the related packages were installed? Some (re)labeling takes place at startup (tmp directories and such), and one user already confirmed that the issue did not appear before the first reboot (which the labeling on boot would explain). So maybe your initial labeling on boot has not yet been affected by the 40.17-1 policies. Just a possibility. It's indeed a case that keeps me curious, too :)
FEDORA-2024-f53eab6892 (swtpm-0.8.1-7.fc40) has been submitted as an update to Fedora 40. https://bodhi.fedoraproject.org/updates/FEDORA-2024-f53eab6892
(In reply to Christopher Klooz from comment #17)
>
> Have you ever restarted the system since the related packages have been
> installed? Some (re)labeling takes place at startup (tmp directories and
> such), and one user already confirmed the issue to occur not before the
> first reboot (which can be explained by the labeling on boot). So maybe your
> initial labeling on boot has not yet been affected by the 40.17-1 policies.
> Just a possibility. It's indeed a case that keeps me curious, too :)

Yes, I had rebooted the system.
FEDORA-2024-f53eab6892 has been pushed to the Fedora 40 testing repository. Soon you'll be able to install the update with the following command: `sudo dnf upgrade --enablerepo=updates-testing --refresh --advisory=FEDORA-2024-f53eab6892` You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2024-f53eab6892 See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.
Thanks Stefan! I put the bodhi page in the three topics. Hopefully we soon get feedback from people covering all manifests so we can close the topic. If I get feedback on all manifests of the issue, I will close the ticket. Otherwise, I will wait a few days to see how it develops once in stable.
Reconciling my SELinux rules against some of the denials from the initial bug report (at the top here), I am surprised to:

a) see some denials related to comm=qemu-img -> not related to swtpm, but why do they appear? Does creating a VM image work? This is odd.

b) see some denials related to types and operations that my new policy doesn't need at all, for example:

type=AVC msg=audit(01/05/24 09:26:02.155:584) : avc: denied { write } for pid=113794 comm=swtpm path=/run/libvirt/qemu/swtpm/1-MyVM-swtpm.pid dev="tmpfs" ino=4456 scontext=system_u:system_r:swtpm_t:s0 tcontext=system_u:object_r:qemu_var_run_t:s0 tclass=file permissive=0

My policy has these rules for qemu_var_run_t and works fine (and also creates PID files just fine):

swtpm/src/selinux/swtpm_libvirt.te:allow virtqemud_t qemu_var_run_t:file { relabelfrom relabelto };
swtpm/src/selinux/swtpm_libvirt.te:allow virtqemud_t qemu_var_run_t:sock_file relabelfrom;

I don't seem to need any rules related to virt_var_run_t:

type=AVC msg=audit(1714093274.173:279): avc: denied { relabelfrom } for pid=6652 comm="rpc-virtqemud" name="1-fedora39-40-TPM-Upg" dev="tmpfs" ino=2915 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:virt_var_run_t:s0 tclass=dir permissive=1

This is strange.
It remains a strange case. I suggest pushing the update to stable because several reports already indicate that it solves their issue. However, there are two further users for whom the issue remains.

Bug2k24 (who reported in bodhi [1] that the issue remains) is case 4 above.

In ask.fedora, case 1 also reports that the issue remains.

It seems the bug we have tackled here is only the one with rpc-virtqemud. The two remaining cases (1 and 4) each had only one isolated denial (one "qemu-img" and one "swtpm"). The other user (who had "qemu-img") reports no longer having any denial at all, while SELinux remains the problem (disabled = swtpm works; enabled = swtpm doesn't work), already since the last update (so the update before this one), but I would prefer to skim a full journalctl before confirming this.

When both of the remaining users have updated to this, we will first continue in ask.fedora, see what logs remain, and then open a new bug report with reference to this one. It seems we have two phenomena :(

bug2k24, please continue in the following ask.fedora topic: https://discussion.fedoraproject.org/t/unable-to-create-new-virt-manager-vm-with-software-tpm-on-fedora-40/114254/26

It is you and this user who still have the issue even after the update. Please keep the update of BZ #2278123 installed, then reboot, and then provide in the ask.fedora topic the logs of a "dedicated boot" that ideally contains only the issue: boot your system, provoke just the issue (please also let me know the exact time you provoked it), then wait at least 15 seconds, and then go to a terminal and provide the FULL output in ask.fedora (as a file link or in a code box) of both:

`sudo journalctl -r --boot=0`
`sudo ausearch -i -m avc,user_avc,selinux_err,user_selinux_err -ts today`

I would like to compare this in ask.fedora before filing another bug report.
Supplement for bug2k24 : don't use `sudo journalctl -r --boot=0` but instead use `sudo journalctl --boot=0 --no-hostname` (no -r in this case, but with --no-hostname) - also feel free to further anonymize the logs if that is important for you (e.g., MAC/IP addresses or so).
(In reply to Christopher Klooz from comment #23)
> It remains a strange case.

I wonder whether some of the issues are related to stale labels?

> I suggest to push the update to stable because several reports already
> suggest that it solves their issue. However, there are two further users for
> whom the issue remains.

I pushed it to stable now.

> Bug2k24 (who reported in bodhi [1] the issue remains) is case 4 above.
>
> In ask.fedora, case 1 reports also that the issue remains.

[1]
```
type=AVC msg=audit(04/24/2024 14:19:15.239:260) : avc: denied { create } for pid=4518 comm=qemu-img anonclass=[io_uring] scontext=system_u:system_r:virtstoraged_t:s0 tcontext=system_u:object_r:io_uring_t:s0 tclass=anon_inode permissive=1
```

Is this what is causing the failure for case 1? If it is, then this particular issue with qemu-img is not related to swtpm.

> It seems the bug we have tackled here is only the one with rpc-virtqemud.
> The two remaining cases (1 and 4) have had each only one isolated denial
> (one "qemu-img" and one "swtpm"). The other user (who had "qemu-img")
> reports to have no longer any denial at all while SELinux remains the
> problem (disabled = swtpm works; enabled = swtpm doesn't work) already since
> the last update (so the update before this one), but I would prefer to skim
> a full journalctl before confirming this.

Could they run an autorelabel of their system to make sure that all labels are as expected?
(In reply to Stefan Berger from comment #25)
> (In reply to Christopher Klooz from comment #23)
> > It remains a strange case.
>
> I wonder whether some of the issues are related to stale labels?

I am not sure what to expect yet; I am curious to see the journals.

> Is this what is causing the failure for case 1? If it is, then this particular
> issue with qemu-img is not related to swtpm.

Indeed a realistic assumption, although the user on the current update state reports no denials at all, if their account is right (my assumption is that a denial may take place much earlier and break something). Yet, the issue still occurs only with swtpm, while without swtpm, VMs work even with SELinux enabled. So the user hits the issue only when both "swtpm" and "SELinux" are involved. But yeah, it's not clear at this time whether that is swtpm. That is why I want to sort this out further in the ask.fedora topic and evaluate what to file a bug against. So far I see three possibilities: "swtpm", "SELinux", "qemu-img". Let's wait for some logs.

> Could they run an autorelabeling of their system to make sure that all labels
> are as expected?

Good point. I will add that to the topic. It can only help. Thanks!

I tend to close this ticket since its own issue seems solved, and then let's see where the other issue develops.
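Since stale labels keep coming up as a suspect in this thread, here is a side-effect-free sketch of how a full autorelabel can be queued on Fedora. `fixfiles onboot` is the standard way to create the /.autorelabel flag; the DRY_RUN guard is only there so the sketch prints the commands instead of executing them.

```shell
#!/bin/sh
# Sketch: queue a full SELinux filesystem relabel for the next boot.
# DRY_RUN=1 keeps this side-effect-free by only printing the commands.
DRY_RUN=1
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"   # show what would be executed
    else
        "$@"
    fi
}

run sudo fixfiles onboot     # creates the /.autorelabel flag file
run sudo systemctl reboot    # the relabel runs early during the next boot
```

A quicker, targeted alternative is `restorecon -RFv` on just the suspect trees (for example /var/log/swtpm and /var/lib/swtpm), which resets them to the labels the currently loaded policy expects.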
*** Bug 2271087 has been marked as a duplicate of this bug. ***
*** Bug 2271086 has been marked as a duplicate of this bug. ***
FEDORA-2024-f53eab6892 (swtpm-0.8.1-7.fc40) has been pushed to the Fedora 40 stable repository. If problem still persists, please make note of it in this bug report.
Fresh installation of F41, first new VM via virt-manager for Windows11 - no special setup, swtpm-0.9.0. This error is here again (maybe this time the reason is different, but the symptoms are the same):

virt-manager: Unable to complete install: 'internal error: Could not run '/usr/bin/swtpm_setup'. exitstatus: 1; Check error log '/var/log/swtpm/libvirt/qemu/windows-swtpm.log' for details.'

windows-swtpm.log:
swtpm at /usr/bin/swtpm does not support TPM 2

Nov 29 02:47:47 fedora systemd[1]: Starting setroubleshootd.service - SETroubleshoot daemon for processing new SELinux denial logs...
Nov 29 02:47:47 fedora systemd[1]: Started setroubleshootd.service - SETroubleshoot daemon for processing new SELinux denial logs.
Nov 29 02:47:47 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=setroubleshootd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 29 02:47:47 fedora systemd[1]: Created slice system-dbus\x2d:1.3\x2dorg.fedoraproject.SetroubleshootPrivileged.slice - Slice /system/dbus-:1.3-org.fedoraproject.SetroubleshootPrivileged.
Nov 29 02:47:47 fedora systemd[1]: Started dbus-:1.3-org.fedoraproject.SetroubleshootPrivileged.
Nov 29 02:47:47 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-:1.3-org.fedoraproject.SetroubleshootPrivileged@0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 29 02:47:48 fedora SetroubleshootPrivileged.py[2328]: failed to retrieve rpm info for path '/var/lib/selinux/targeted/active/modules/200/swtpm':
Nov 29 02:47:48 fedora setroubleshoot[2318]: SELinux is preventing /usr/bin/swtpm from open access on the file /var/log/swtpm/libvirt/qemu/windows-swtpm.log.
For complete SELinux messages run: sealert -l 67d3f0b5-18cf-472f-bac1-67832adb0c9c
Nov 29 02:47:48 fedora setroubleshoot[2318]: SELinux is preventing /usr/bin/swtpm from open access on the file /var/log/swtpm/libvirt/qemu/windows-swtpm.log.#012#012***** Plugin restorecon (99.5 confidence) suggests ************************#012#012If you want to fix the label. #012/var/log/swtpm/libvirt/qemu/windows-swtpm.log default label should be var_log_t.#012Then you can run restorecon. The access attempt may have been stopped due to insufficient permissions to access a parent directory in which case try to change the following command accordingly.#012Do#012# /sbin/restorecon -v /var/log/swtpm/libvirt/qemu/windows-swtpm.log#012#012***** Plugin catchall (1.49 confidence) suggests **************************#012#012If you believe that swtpm should be allowed open access on the windows-swtpm.log file by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'swtpm' --raw | audit2allow -M my-swtpm#012# semodule -X 300 -i my-swtpm.pp#012
Nov 29 02:47:48 fedora setroubleshoot[2318]: SELinux is preventing /usr/sbin/virtqemud from relabelfrom access on the file windows-swtpm.log. For complete SELinux messages run: sealert -l adb2c9c3-0ac0-41fc-bbd7-0a566d86d058
Nov 29 02:47:48 fedora setroubleshoot[2318]: SELinux is preventing /usr/sbin/virtqemud from relabelfrom access on the file windows-swtpm.log.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that virtqemud should be allowed relabelfrom access on the windows-swtpm.log file by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'rpc-virtqemud' --raw | audit2allow -M my-rpcvirtqemud#012# semodule -X 300 -i my-rpcvirtqemud.pp#012
Nov 29 02:47:58 fedora systemd[1]: dbus-:1.3-org.fedoraproject.SetroubleshootPrivileged: Deactivated successfully.
Nov 29 02:47:58 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-:1.3-org.fedoraproject.SetroubleshootPrivileged@0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 29 02:47:58 fedora systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 29 02:47:58 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=setroubleshootd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 29 02:47:58 fedora systemd[1]: setroubleshootd.service: Consumed 359ms CPU time, 81.3M memory peak.
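As an aside for readers triaging similar reports: the restorecon suggestion in the sealert output above can be sanity-checked first with `matchpathcon`, which prints the label the loaded policy expects for a path. A side-effect-free sketch (DRY_RUN=1 only prints the commands; the log path is the one from this report):

```shell
#!/bin/sh
# Sketch: compare the on-disk label of the swtpm log file with the label
# the loaded policy expects, then reset it with restorecon.
# DRY_RUN=1 keeps this side-effect-free by only printing the commands.
DRY_RUN=1
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

LOG=/var/log/swtpm/libvirt/qemu/windows-swtpm.log   # path from the denial above

run ls -Z "$LOG"               # label currently on disk
run matchpathcon "$LOG"        # label the policy expects for this path
run sudo restorecon -v "$LOG"  # reset the file to the expected label
```

If the two labels already match, the denial points at a policy gap rather than a mislabeled file, and the audit2allow route quoted above is the relevant one.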
(In reply to Adam Pribyl from comment #30)
> Fresh installation of F41, first new VM via virt-manager for Windows11 - no
> special setup, swtpm-0.9.0. This error is here again (maybe this time the
> reason is different, but symptoms are the same):
>
> virt-manager: Unable to complete install: 'internal error: Could not run
> '/usr/bin/swtpm_setup'. exitstatus: 1; Check error log
> '/var/log/swtpm/libvirt/qemu/windows-swtpm.log' for details.'

Can you show the command line you used?

> Nov 29 02:47:48 fedora SetroubleshootPrivileged.py[2328]: failed to retrieve
> rpm info for path '/var/lib/selinux/targeted/active/modules/200/swtpm':
> Nov 29 02:47:48 fedora setroubleshoot[2318]: SELinux is preventing
> /usr/bin/swtpm from open access on the file
> /var/log/swtpm/libvirt/qemu/windows-swtpm.log. For complete SELinux messages
> run: sealert -l 67d3f0b5-18cf-472f-bac1-67832adb0c9c

I need to know what the SELinux denials are. Can you either run this command

sealert -l 67d3f0b5-18cf-472f-bac1-67832adb0c9c

or show the last few entries from the audit log?
(In reply to Stefan Berger from comment #31)
> (In reply to Adam Pribyl from comment #30)
> > Fresh installation of F41, first new VM via virt-manager for Windows11 - no
> > special setup, swtpm-0.9.0. This error is here again (maybe this time the
> > reason is different, but symptoms are the same):
> >
> > virt-manager: Unable to complete install: 'internal error: Could not run
> > '/usr/bin/swtpm_setup'. exitstatus: 1; Check error log
> > '/var/log/swtpm/libvirt/qemu/windows-swtpm.log' for details.'
>
> Can you show the command line you used?

FYI: I cannot recreate the issue on a fresh install of Fedora 41 on x86_64 trying to install an aarch64 VM:

virt-install --import --name fedora-aarch64 --osinfo fedora40 --arch aarch64 --vcpus 4 --ram 1024 --cdrom /var/lib/libvirt/images/Fedora-Everything-netinst-aarch64-41-1.4.iso --disk path=/var/lib/libvirt/images/Fedora-Minimal-41-aarch64.raw --network default --graphics none

The swtpm log file was created:

# cat /var/log/swtpm/libvirt/qemu/fedora-aarch64-swtpm.log
Starting vTPM manufacturing as tss:tss @ Sun 01 Dec 2024 09:21:04 AM EST
Successfully created RSA 2048 EK with handle 0x81010001.
Invoking /usr/bin/swtpm_localca --type ek --ek b31ed0f22ca2350269e0f745eea185d925bd249d02a4f3ca5b0b2b8b919201d4d667b10804fd2a7e89894cacb9f706420502434c51e5858f8480f244c84e5bede39f3def7298f8a3c3c6faf03c5058518f92d4409b2d9adfa313c12b5d4b6faa86e1f4561bcd0f290633063e7a0f36c9cdde1c8983c24bf1fc5a1a14c5358c1263e7c27df9d09f9b168a51c3c4a89803ea14864052fd3e9221e951071b26f00c75bba689ce39b9afb60f213cc396ba4bc10b778f9c42c35cc991d3bb1e03a4ea955248424df7caff7f6ddc7e239a2d5efea56f93db55a07d61491636a8f9dbba89050a8c9814f775220294242f740e486a9a202e540090d725eeb29fefac9155 --dir /tmp/swtpm_setup.certs.NC5QX2 --logfile /var/log/swtpm/libvirt/qemu/fedora-aarch64-swtpm.log --vmid fedora-aarch64:4adecc57-b75a-4e2e-ba96-c9b642f989bd --tpm-spec-family 2.0 --tpm-spec-level 0 --tpm-spec-revision 164 --tpm-manufacturer id:00001014 --tpm-model swtpm --tpm-version id:20191023 --tpm2 --configfile /etc/swtpm-localca.conf --optsfile /etc/swtpm-localca.options [...] # rpm -q -a | grep swtpm swtpm-libs-0.9.0-4.fc41.x86_64 swtpm-0.9.0-4.fc41.x86_64 swtpm-selinux-0.9.0-4.fc41.noarch swtpm-tools-0.9.0-4.fc41.x86_64 # rpm -q -a | grep target selinux-policy-targeted-41.26-1.fc41.noarch
Created attachment 2060765 [details]
windows guest

I did not use the CLI but the GUI to create the guest. Attached is the resulting XML. In the meantime, to move forward, I disabled SELinux, and interestingly enough, sealert prints nothing; it just sits on the command line with no output.

Steps to create a VM:
Create a new VM in virt-manager
Manual install
Operating system: select Windows 11
Set memory and CPUs
Select "Select or create custom storage" and input "/dev/sdb" in the text box (!preexisting guest!)
Check "customize configuration before install"
Chipset: select Q35
Firmware: select OVMF_CODE.secboot.fd
Select the disk and set it to SATA, cache mode: none, discard mode: unmap

As for the audit.log, here is perhaps the relevant output:

type=VIRT_MACHINE_ID msg=audit(1732866464.972:313): pid=1866 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm vm="windows" uuid=e20f703e-2a6a-44a7-8250-1bb7cf33f505 vm-ctx=system_u:system_r:svirt_t:s0:c555,c841 img-ctx=system_u:object_r:svirt_image_t:s0:c555,c841 model=selinux exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'.UID="root" AUID="unset"
type=VIRT_MACHINE_ID msg=audit(1732866464.972:314): pid=1866 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm vm="windows" uuid=e20f703e-2a6a-44a7-8250-1bb7cf33f505 vm-ctx=+107:+107 img-ctx=+107:+107 model=dac exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'.UID="root" AUID="unset"
type=BPF msg=audit(1732866465.019:315): prog-id=97 op=LOAD
type=SERVICE_START msg=audit(1732866465.079:316): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=virtlogd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'.UID="root" AUID="unset" type=AVC msg=audit(1732866465.084:317): avc: denied { open } for pid=2315 comm="swtpm" path="/var/log/swtpm/libvirt/qemu/windows-swtpm.log" dev="dm-0" ino=2359688 scontext=system_u:system_r:swtpm_t:s0 tcontext=system_u:object_r:virt_log_t:s0 tclass=file permissive=0 type=SYSCALL msg=audit(1732866465.084:317): arch=c000003e syscall=257 success=no exit=-13 a0=ffffff9c a1=55fde50b95c0 a2=20441 a3=180 items=0 ppid=2314 pid=2315 auid=4294967295 uid=59 gid=59 euid=59 suid=59 fsuid=59 egid=59 sgid=59 fsgid=59 tty=(none) ses=4294967295 comm="swtpm" exe="/usr/bin/swtpm" subj=system_u:system_r:swtpm_t:s0 key=(null).ARCH=x86_64 SYSCALL=openat AUID="unset" UID="tss" GID="tss" EUID="tss" SUID="tss" FSUID="tss" EGID="tss" SGID="tss" FSGID="tss" type=PROCTITLE msg=audit(1732866465.084:317): proctitle=2F7573722F62696E2F737774706D00736F636B6574002D2D7072696E742D6361706162696C6974696573002D2D6C6F670066696C653D2F7661722F6C6F672F737774706D2F6C6962766972742F71656D752F77696E646F77732D737774706D2E6C6F67 type=AVC msg=audit(1732866465.092:318): avc: denied { relabelfrom } for pid=2316 comm="rpc-virtqemud" name="windows-swtpm.log" dev="dm-0" ino=2359688 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:virt_log_t:s0 tclass=file permissive=1 type=SYSCALL msg=audit(1732866465.092:318): arch=c000003e syscall=188 success=yes exit=0 a0=55b7f19d2150 a1=7f4cee6c0197 a2=7f4cc804c250 a3=1f items=0 ppid=1866 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc-virtqemud" exe="/usr/sbin/virtqemud" subj=system_u:system_r:virtqemud_t:s0 key=(null).ARCH=x86_64 SYSCALL=setxattr AUID="unset" UID="root" GID="root" EUID="root" SUID="root" FSUID="root" EGID="root" SGID="root" FSGID="root" type=PROCTITLE msg=audit(1732866465.092:318): proctitle=2F7573722F7362696E2F7669727471656D7564002D2D74696D656F757400313230 type=VIRT_RESOURCE msg=audit(1732866465.116:319): pid=1866 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=disk reason=start vm="windows" uuid=e20f703e-2a6a-44a7-8250-1bb7cf33f505 old-disk="?" new-disk="/dev/sdb" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'.UID="root" AUID="unset" type=VIRT_RESOURCE msg=audit(1732866465.116:320): pid=1866 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=net reason=start vm="windows" uuid=e20f703e-2a6a-44a7-8250-1bb7cf33f505 old-net="?" new-net="52:54:00:19:fd:f1" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'.UID="root" AUID="unset" type=VIRT_RESOURCE msg=audit(1732866465.117:321): pid=1866 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=dev reason=start vm="windows" uuid=e20f703e-2a6a-44a7-8250-1bb7cf33f505 bus=usb device=555342207265646972646576 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'.UID="root" AUID="unset" type=VIRT_RESOURCE msg=audit(1732866465.117:322): pid=1866 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=dev reason=start vm="windows" uuid=e20f703e-2a6a-44a7-8250-1bb7cf33f505 bus=usb device=555342207265646972646576 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'.UID="root" AUID="unset" type=VIRT_RESOURCE msg=audit(1732866465.117:323): pid=1866 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=tpm-emulator reason=start vm="windows" uuid=e20f703e-2a6a-44a7-8250-1bb7cf33f505 device="/run/libvirt/qemu/swtpm/1-windows-swtpm.sock" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? 
res=success'.UID="root" AUID="unset" type=VIRT_RESOURCE msg=audit(1732866465.117:324): pid=1866 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=mem reason=start vm="windows" uuid=e20f703e-2a6a-44a7-8250-1bb7cf33f505 old-mem=0 new-mem=8388608 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'.UID="root" AUID="unset" type=VIRT_RESOURCE msg=audit(1732866465.117:325): pid=1866 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm resrc=vcpu reason=start vm="windows" uuid=e20f703e-2a6a-44a7-8250-1bb7cf33f505 old-vcpu=0 new-vcpu=4 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success'.UID="root" AUID="unset" type=VIRT_CONTROL msg=audit(1732866465.117:326): pid=1866 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtqemud_t:s0 msg='virt=kvm op=start reason=booted vm="windows" uuid=e20f703e-2a6a-44a7-8250-1bb7cf33f505 vm-pid=0 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=failed'.UID="root" AUID="unset"
(In reply to Adam Pribyl from comment #33)
> Created attachment 2060765 [details]
> windows guest
>
> I did not use the CLI but the GUI to create the guest. Attached is the
> resulting XML.

I do not think it makes a difference whether the VM is created with virt-manager or from XML directly with libvirt using virsh. However, I could not recreate the issue with a VM created directly from XML, nor with one created with virt-manager with a similar configuration to yours (TPM CRB and UEFI).

> In the meantime, to move forward, I disabled SELinux, and interestingly
> enough, sealert prints nothing; it just sits on the command line with no output.
>
> Steps to create a VM
> Create a new VM in virt-manager
> Manual install
> Operating system: select Windows 11
> Set memory and CPUs
> Select "Select or create custom storage" and input "/dev/sdb" in the
> text box (!preexisting guest!)
> Check "customize configuration before install"
> Chipset: select Q35
> Firmware: select OVMF_CODE.secboot.fd
> Select the disk and set it to SATA, cache mode: none, discard mode: unmap

The relevant denials on your machine are:

type=AVC msg=audit(1732866465.084:317): avc: denied { open } for pid=2315 comm="swtpm" path="/var/log/swtpm/libvirt/qemu/windows-swtpm.log" dev="dm-0" ino=2359688 scontext=system_u:system_r:swtpm_t:s0 tcontext=system_u:object_r:virt_log_t:s0 tclass=file permissive=0

My VMs' log files are also labeled with this label, and I would assume they have had this label from the moment they were created:

[root@fedora ~]# ls -lZ /var/log/libvirt/qemu/
total 20
-rw-------. 1 root root system_u:object_r:virt_log_t:s0  7024 Dec  2 09:04 fedora40.log
-rw-------. 1 root root system_u:object_r:virt_log_t:s0 12140 Dec  2 07:56 test.log

I do not know why I am not hitting this denial. I also removed the state directory of the TPM so that it would recreate the initial state and again open the log file, but no error occurred.
type=AVC msg=audit(1732866465.092:318): avc: denied { relabelfrom } for pid=2316 comm="rpc-virtqemud" name="windows-swtpm.log" dev="dm-0" ino=2359688 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:virt_log_t:s0 tclass=file permissive=1 I do not see this one, either.
I am not sure either; the directory seems to have the correct context now:

# ls -lZ /var/log/libvirt/qemu/
total 48
-rw-------. 1 root root system_u:object_r:virt_log_t:s0 44882 Dec  2 09:51 windows.log

I am just confused why this relabelfrom was not allowed. It seems to me like /var/log/libvirt/qemu was previously created by a different process (dnf/install from live CD?):

# rpm -qf /var/log/libvirt/qemu/
libvirt-daemon-driver-qemu-10.6.0-5.fc41.x86_64

with the incorrect context virtqemud_t, and then it was not possible to relabel it to virt_log_t.

If you are not able to reproduce, let's close it and wait to see if somebody else hits this.
(In reply to Adam Pribyl from comment #35)
> I am not sure either, the directory seems to have the correct context now
>
> # ls -lZ /var/log/libvirt/qemu/
> total 48
> -rw-------. 1 root root system_u:object_r:virt_log_t:s0 44882 Dec 2 09:51 windows.log

I had looked at the wrong directory -- though it doesn't make a difference that I did not hit the denial:

[root@fedora ~]# ls -lZ /var/log/swtpm/libvirt/qemu/
total 8
-rw-r--r--. 1 tss tss system_u:object_r:virt_log_t:s0 6376 Dec  2 09:07 fedora40-swtpm.log

> I just am confused why this relabelfrom was not allowed - it seems to me
> like the /var/log/libvirt/qemu is previously created by a different process
> (dnf/install from live CD?)

Uuuh, maybe it makes a difference if one installs from a live CD rather than from the normal installation ISO...

> # rpm -qf /var/log/libvirt/qemu/
> libvirt-daemon-driver-qemu-10.6.0-5.fc41.x86_64
>
> with incorrect context virtqemud_t and then it was not possible to re-label
> it to virt_log_t.
>
> If you are not able to reproduce let's close it and wait if somebody else
> hits this..

Does it still occur when you create new VMs now?
Hi folks, allow me to append my story. How did I land here? Well, recently I wanted to test the current Serpent OS on my Fedora laptop. After my ISO/Boxes adventure (https://github.com/orgs/serpent-os/discussions/4#discussioncomment-11571310), I hoped the alternative (https://github.com/serpent-os/img-tests?tab=readme-ov-file#create-virtiofs-based-virt-manager-vm-install) would satisfy my curiosity. So now here I am with the outputs:

```
[dacbarbos@vivobookone virt-manager-vm]$ ll -Z
total 28
-rwxr-xr-x. 1 dacbarbos dacbarbos unconfined_u:object_r:user_home_t:s0 2648 Dec 13 15:07 create-virtio-vm.sh
-rw-r--r--. 1 dacbarbos dacbarbos unconfined_u:object_r:user_home_t:s0   35 Dec  9 13:19 pkglist
lrwxrwxrwx. 1 dacbarbos dacbarbos unconfined_u:object_r:user_home_t:s0   15 Dec  9 13:19 pkglist-base -> ../pkglist-base
-rw-r--r--. 1 dacbarbos dacbarbos unconfined_u:object_r:user_home_t:s0 7889 Dec  9 13:19 serpentos.tmpl
-rw-r--r--. 1 dacbarbos dacbarbos unconfined_u:object_r:user_home_t:s0 8070 Dec 13 16:35 serpentos.xml
drwxr-xr-x. 1 root      root      unconfined_u:object_r:user_home_t:s0   96 Dec 13 16:35 sosroot
[dacbarbos@vivobookone virt-manager-vm]$ ll -Z sosroot/
total 20
lrwxrwxrwx. 1 root root unconfined_u:object_r:user_home_t:s0   7 Dec 13 16:35 bin -> usr/bin
drwxr-xr-x. 1 root root unconfined_u:object_r:user_home_t:s0 314 Dec 13 16:35 etc
drwxr-xr-x. 1 root root unconfined_u:object_r:user_home_t:s0  18 Dec 13 16:31 home
lrwxrwxrwx. 1 root root unconfined_u:object_r:user_home_t:s0   7 Dec 13 16:35 lib -> usr/lib
lrwxrwxrwx. 1 root root unconfined_u:object_r:user_home_t:s0   9 Dec 13 16:35 lib32 -> usr/lib32
lrwxrwxrwx. 1 root root unconfined_u:object_r:user_home_t:s0   7 Dec 13 16:35 lib64 -> usr/lib
drwxr-xr-x. 1 root root unconfined_u:object_r:user_home_t:s0   0 Dec 13 16:31 proc
drwxr-xr-x. 1 root root unconfined_u:object_r:user_home_t:s0   0 Dec 13 16:31 run
lrwxrwxrwx. 1 root root unconfined_u:object_r:user_home_t:s0   8 Dec 13 16:35 sbin -> usr/sbin
drwxr-xr-x. 1 root root unconfined_u:object_r:user_home_t:s0   0 Dec 13 16:31 sys
drwxr-xr-x. 1 root root unconfined_u:object_r:user_home_t:s0  92 Dec 13 16:35 usr
drwxr-xr-x. 1 root root unconfined_u:object_r:user_home_t:s0  10 Dec 13 16:31 var
[dacbarbos@vivobookone virt-manager-vm]$
```

```
[dacbarbos@vivobookone virt-manager-vm]$ sudo ausearch -i -m avc,user_avc,selinux_err,user_selinux_err -ts today
----
type=AVC msg=audit(12/16/2024 08:27:24.097:3232) : avc: denied { write } for pid=160079 comm=rpc-virtqemud name=7127499a2fa59dafb524cb597559b63a dev="sda3" ino=13152285 scontext=system_u:system_r:virtqemud_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=1
----
type=AVC msg=audit(12/16/2024 08:27:24.098:3233) : avc: denied { setattr } for pid=160079 comm=rpc-virtqemud name=vmlinuz dev="sda3" ino=13152288 scontext=system_u:system_r:virtqemud_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=1
----
type=AVC msg=audit(12/16/2024 08:27:24.105:3234) : avc: denied { read write } for pid=160080 comm=rpc-virtqemud name=renderD128 dev="tmpfs" ino=8 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:dri_device_t:s0 tclass=chr_file permissive=1
----
type=AVC msg=audit(12/16/2024 08:27:24.105:3235) : avc: denied { open } for pid=160080 comm=rpc-virtqemud path=/dev/dri/renderD128 dev="tmpfs" ino=8 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:dri_device_t:s0 tclass=chr_file permissive=1
----
type=AVC msg=audit(12/16/2024 08:27:24.105:3236) : avc: denied { lock } for pid=160080 comm=rpc-virtqemud path=/dev/dri/renderD128 dev="tmpfs" ino=8 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:dri_device_t:s0 tclass=chr_file permissive=1
----
type=AVC msg=audit(12/16/2024 08:27:24.105:3237) : avc: denied { setattr } for pid=160080 comm=rpc-virtqemud name=renderD128 dev="tmpfs" ino=8 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:dri_device_t:s0 tclass=chr_file permissive=1
----
type=AVC msg=audit(12/16/2024 08:40:28.344:3327) : avc: denied { write } for pid=161513 comm=rpc-virtqemud name=7127499a2fa59dafb524cb597559b63a dev="sda3" ino=13152285 scontext=system_u:system_r:virtqemud_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=1
----
type=AVC msg=audit(12/16/2024 08:40:28.345:3328) : avc: denied { setattr } for pid=161513 comm=rpc-virtqemud name=vmlinuz dev="sda3" ino=13152288 scontext=system_u:system_r:virtqemud_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=1
----
type=AVC msg=audit(12/16/2024 08:40:28.353:3330) : avc: denied { read write } for pid=161515 comm=rpc-virtqemud name=renderD128 dev="tmpfs" ino=8 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:dri_device_t:s0 tclass=chr_file permissive=1
----
type=AVC msg=audit(12/16/2024 08:40:28.353:3331) : avc: denied { open } for pid=161515 comm=rpc-virtqemud path=/dev/dri/renderD128 dev="tmpfs" ino=8 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:dri_device_t:s0 tclass=chr_file permissive=1
----
type=AVC msg=audit(12/16/2024 08:40:28.353:3332) : avc: denied { lock } for pid=161515 comm=rpc-virtqemud path=/dev/dri/renderD128 dev="tmpfs" ino=8 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:dri_device_t:s0 tclass=chr_file permissive=1
----
type=AVC msg=audit(12/16/2024 08:40:28.353:3333) : avc: denied { setattr } for pid=161515 comm=rpc-virtqemud name=renderD128 dev="tmpfs" ino=8 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:dri_device_t:s0 tclass=chr_file permissive=1
----
type=AVC msg=audit(12/16/2024 08:40:29.305:3335) : avc: denied { write } for pid=161556 comm=rpc-virtqemud name=7127499a2fa59dafb524cb597559b63a dev="sda3" ino=13152285 scontext=system_u:system_r:virtqemud_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=1
----
type=AVC msg=audit(12/16/2024 08:40:29.306:3336) : avc: denied { setattr } for pid=161556 comm=rpc-virtqemud name=vmlinuz dev="sda3" ino=13152288 scontext=system_u:system_r:virtqemud_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=1
----
type=AVC msg=audit(12/16/2024 08:41:06.754:3374) : avc: denied { write } for pid=161702 comm=rpc-virtqemud name=7127499a2fa59dafb524cb597559b63a dev="sda3" ino=13152285 scontext=system_u:system_r:virtqemud_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=1
----
type=AVC msg=audit(12/16/2024 08:41:06.754:3375) : avc: denied { setattr } for pid=161702 comm=rpc-virtqemud name=vmlinuz dev="sda3" ino=13152288 scontext=system_u:system_r:virtqemud_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=1
----
type=AVC msg=audit(12/16/2024 08:41:06.758:3376) : avc: denied { read write } for pid=161703 comm=rpc-virtqemud name=renderD128 dev="tmpfs" ino=8 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:dri_device_t:s0 tclass=chr_file permissive=1
----
type=AVC msg=audit(12/16/2024 08:41:06.758:3377) : avc: denied { open } for pid=161703 comm=rpc-virtqemud path=/dev/dri/renderD128 dev="tmpfs" ino=8 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:dri_device_t:s0 tclass=chr_file permissive=1
----
type=AVC msg=audit(12/16/2024 08:41:06.759:3378) : avc: denied { lock } for pid=161703 comm=rpc-virtqemud path=/dev/dri/renderD128 dev="tmpfs" ino=8 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:dri_device_t:s0 tclass=chr_file permissive=1
----
type=AVC msg=audit(12/16/2024 08:41:06.759:3379) : avc: denied { setattr } for pid=161703 comm=rpc-virtqemud name=renderD128 dev="tmpfs" ino=8 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:dri_device_t:s0 tclass=chr_file permissive=1
----
type=AVC msg=audit(12/16/2024 08:41:07.166:3381) : avc: denied { write } for pid=161742 comm=rpc-virtqemud name=7127499a2fa59dafb524cb597559b63a dev="sda3" ino=13152285 scontext=system_u:system_r:virtqemud_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=1
----
type=AVC msg=audit(12/16/2024 08:41:07.167:3382) : avc: denied { setattr } for pid=161742 comm=rpc-virtqemud name=vmlinuz dev="sda3" ino=13152288 scontext=system_u:system_r:virtqemud_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=1
----
type=AVC msg=audit(12/16/2024 09:23:37.277:3568) : avc: denied { write } for pid=164543 comm=rpc-virtqemud name=7127499a2fa59dafb524cb597559b63a dev="sda3" ino=13152285 scontext=system_u:system_r:virtqemud_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=1
----
type=AVC msg=audit(12/16/2024 09:23:37.278:3569) : avc: denied { setattr } for pid=164543 comm=rpc-virtqemud name=vmlinuz dev="sda3" ino=13152288 scontext=system_u:system_r:virtqemud_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=1
----
type=AVC msg=audit(12/16/2024 09:23:37.285:3570) : avc: denied { read write } for pid=164546 comm=rpc-virtqemud name=renderD128 dev="tmpfs" ino=8 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:dri_device_t:s0 tclass=chr_file permissive=1
----
type=AVC msg=audit(12/16/2024 09:23:37.285:3571) : avc: denied { open } for pid=164546 comm=rpc-virtqemud path=/dev/dri/renderD128 dev="tmpfs" ino=8 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:dri_device_t:s0 tclass=chr_file permissive=1
----
type=AVC msg=audit(12/16/2024 09:23:37.285:3572) : avc: denied { lock } for pid=164546 comm=rpc-virtqemud path=/dev/dri/renderD128 dev="tmpfs" ino=8 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:dri_device_t:s0 tclass=chr_file permissive=1
----
type=AVC msg=audit(12/16/2024 09:23:37.285:3573) : avc: denied { setattr } for pid=164546 comm=rpc-virtqemud name=renderD128 dev="tmpfs" ino=8 scontext=system_u:system_r:virtqemud_t:s0 tcontext=system_u:object_r:dri_device_t:s0 tclass=chr_file permissive=1
----
type=AVC msg=audit(12/16/2024 09:23:38.014:3575) : avc: denied { write } for pid=164603 comm=rpc-virtqemud name=7127499a2fa59dafb524cb597559b63a dev="sda3" ino=13152285 scontext=system_u:system_r:virtqemud_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=1
----
type=AVC msg=audit(12/16/2024 09:23:38.014:3576) : avc: denied { setattr } for pid=164603 comm=rpc-virtqemud name=vmlinuz dev="sda3" ino=13152288 scontext=system_u:system_r:virtqemud_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=1
```

```
[dacbarbos@vivobookone virt-manager-vm]$ uname -r
6.11.11-300.fc41.x86_64
[dacbarbos@vivobookone virt-manager-vm]$ sudo ls -lZ /var/log/libvirt/qemu/
[sudo] password for dacbarbos:
total 52
-rw-------. 1 root root system_u:object_r:virt_log_t:s0  1631 Dec 16 09:23 serpentvm-fs0-virtiofsd.log
-rw-------. 1 root root system_u:object_r:virt_log_t:s0 46690 Dec 16 09:23 serpentvm.log
[dacbarbos@vivobookone virt-manager-vm]$ sudo tail /var/log/libvirt/qemu/serpentvm.log
[sudo] password for dacbarbos:
-chardev spicevmc,id=charredir1,name=usbredir \
-device '{"driver":"usb-redir","chardev":"charredir1","id":"redir1","bus":"usb.0","port":"3"}' \
-device '{"driver":"virtio-balloon-pci","id":"balloon0","bus":"pci.5","addr":"0x0"}' \
-object '{"qom-type":"rng-random","id":"objrng0","filename":"/dev/urandom"}' \
-device '{"driver":"virtio-rng-pci","rng":"objrng0","id":"rng0","bus":"pci.6","addr":"0x0"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
char device redirected to /dev/pts/1 (label charserial0)
qemu: could not open kernel file '/home/dacbarbos/github.com/serpent-os/img-tests/virt-manager-vm/sosroot/usr/lib/kernel/current.kvm.kernel': Permission denied
2024-12-16 07:23:37.802+0000: shutting down, reason=failed
```

```
[dacbarbos@vivobookone virt-manager-vm]$ dnf info swtpm-selinux |grep -i -C4 installed
Updating and loading repositories:
 keybase 100% | 7.1 KiB/s | 3.5 KiB | 00m00s
Repositories loaded.
Installed packages
Name            : swtpm-selinux
Epoch           : 0
Version         : 0.9.0
Release         : 4.fc41
Architecture    : noarch
Installed size  : 250.7 KiB
Source          : swtpm-0.9.0-4.fc41.src.rpm
From repository : fedora
Summary         : SELinux security policy for swtpm
URL             : https://github.com/stefanberger/swtpm
[dacbarbos@vivobookone virt-manager-vm]$ dnf info selinux-policy |grep -i -C4 installed
Updating and loading repositories:
 keybase 100% | 11.8 KiB/s | 3.5 KiB | 00m00s
Repositories loaded.
Installed packages
Name            : selinux-policy
Epoch           : 0
Version         : 41.26
Release         : 1.fc41
Architecture    : noarch
Installed size  : 31.4 KiB
Source          : selinux-policy-41.26-1.fc41.src.rpm
From repository : <unknown>
Summary         : SELinux policy configuration
URL             : https://github.com/fedora-selinux/selinux-policy
[dacbarbos@vivobookone virt-manager-vm]$
```

NB: chown-ing the sosroot folder to either "dacbarbos:dacbarbos" or "qemu:qemu" made no difference.
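Since the denials show `virtqemud_t` acting on `user_home_t` files, and QEMU gets "Permission denied" on the kernel under the home directory, one thing worth testing is relabeling the sosroot tree so the virtualization domains may read it. A sketch, assuming the stock policy's `virt_image_t` type is appropriate here (that type choice is my assumption, not a confirmed fix):

```shell
# Assumption (untested here): virt_image_t is readable by the libvirt/QEMU
# domains. Persistently map the sosroot tree to that type, then apply it.
sudo semanage fcontext -a -t virt_image_t \
  '/home/dacbarbos/github.com/serpent-os/img-tests/virt-manager-vm/sosroot(/.*)?'
sudo restorecon -RFv \
  /home/dacbarbos/github.com/serpent-os/img-tests/virt-manager-vm/sosroot

# Check the result; the files should no longer carry user_home_t.
ls -Z /home/dacbarbos/github.com/serpent-os/img-tests/virt-manager-vm/sosroot
```

If the home-directory labeling keeps fighting back, moving the kernel/initrd under /var/lib/libvirt/ sidesteps it entirely.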
VM's XML definition:

```
<domain type="kvm">
  <name>serpentvm</name>
  <uuid>2059f25e-dac4-4049-967b-6641c6459e8d</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://libosinfo.org/linux/2022"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit="KiB">4194304</memory>
  <currentMemory unit="KiB">4194304</currentMemory>
  <memoryBacking>
    <source type="memfd"/>
    <access mode="shared"/>
  </memoryBacking>
  <vcpu placement="static">4</vcpu>
  <os>
    <type arch="x86_64" machine="pc-q35-9.1">hvm</type>
    <loader readonly="yes" type="pflash">/usr/share/OVMF/OVMF_CODE.fd</loader>
    <nvram template="/usr/share/OVMF/OVMF_VARS.fd">/var/lib/libvirt/qemu/nvram/serpentvm_VARS.fd</nvram>
    <kernel>/home/dacbarbos/github.com/serpent-os/img-tests/virt-manager-vm/sosroot/usr/lib/kernel/current.kvm.kernel</kernel>
    <initrd>/home/dacbarbos/github.com/serpent-os/img-tests/virt-manager-vm/sosroot/usr/lib/kernel/current.kvm.initrd</initrd>
    <cmdline>rootfstype=virtiofs root=root rw rd.modules-load=virtio_pci rd.shell vconsole.font=ter-v32n video=Virtual-1:1440x900MR</cmdline>
    <boot dev="hd"/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <vmport state="off"/>
  </features>
  <cpu mode="host-passthrough" check="none" migratable="on"/>
  <clock offset="utc">
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="pit" tickpolicy="delay"/>
    <timer name="hpet" present="no"/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled="no"/>
    <suspend-to-disk enabled="no"/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <controller type="usb" index="0" model="qemu-xhci" ports="15">
      <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
    </controller>
    <controller type="pci" index="0" model="pcie-root"/>
    <controller type="pci" index="1" model="pcie-root-port">
      <model name="pcie-root-port"/> <target chassis="1" port="0x10"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
    </controller>
    <controller type="pci" index="2" model="pcie-root-port">
      <model name="pcie-root-port"/> <target chassis="2" port="0x11"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
    </controller>
    <controller type="pci" index="3" model="pcie-root-port">
      <model name="pcie-root-port"/> <target chassis="3" port="0x12"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
    </controller>
    <controller type="pci" index="4" model="pcie-root-port">
      <model name="pcie-root-port"/> <target chassis="4" port="0x13"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
    </controller>
    <controller type="pci" index="5" model="pcie-root-port">
      <model name="pcie-root-port"/> <target chassis="5" port="0x14"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
    </controller>
    <controller type="pci" index="6" model="pcie-root-port">
      <model name="pcie-root-port"/> <target chassis="6" port="0x15"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
    </controller>
    <controller type="pci" index="7" model="pcie-root-port">
      <model name="pcie-root-port"/> <target chassis="7" port="0x16"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
    </controller>
    <controller type="pci" index="8" model="pcie-root-port">
      <model name="pcie-root-port"/> <target chassis="8" port="0x17"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
    </controller>
    <controller type="pci" index="9" model="pcie-root-port">
      <model name="pcie-root-port"/> <target chassis="9" port="0x18"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
    </controller>
    <controller type="pci" index="10" model="pcie-root-port">
      <model name="pcie-root-port"/> <target chassis="10" port="0x19"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
    </controller>
    <controller type="pci" index="11" model="pcie-root-port">
      <model name="pcie-root-port"/> <target chassis="11" port="0x1a"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
    </controller>
    <controller type="pci" index="12" model="pcie-root-port">
      <model name="pcie-root-port"/> <target chassis="12" port="0x1b"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
    </controller>
    <controller type="pci" index="13" model="pcie-root-port">
      <model name="pcie-root-port"/> <target chassis="13" port="0x1c"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
    </controller>
    <controller type="pci" index="14" model="pcie-root-port">
      <model name="pcie-root-port"/> <target chassis="14" port="0x1d"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
    </controller>
    <controller type="sata" index="0">
      <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
    </controller>
    <controller type="virtio-serial" index="0">
      <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
    </controller>
    <filesystem type="mount" accessmode="passthrough">
      <driver type="virtiofs"/>
      <source dir="/home/dacbarbos/github.com/serpent-os/img-tests/virt-manager-vm/sosroot"/>
      <target dir="root"/>
      <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
    </filesystem>
    <interface type="network">
      <mac address="52:54:00:eb:5e:dc"/>
      <source network="default"/>
      <model type="virtio"/>
      <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
    </interface>
    <serial type="pty">
      <target type="isa-serial" port="0"> <model name="isa-serial"/> </target>
    </serial>
    <console type="pty">
      <target type="serial" port="0"/>
    </console>
    <channel type="unix">
      <target type="virtio" name="org.qemu.guest_agent.0"/>
      <address type="virtio-serial" controller="0" bus="0" port="1"/>
    </channel>
    <channel type="spicevmc">
      <target type="virtio" name="com.redhat.spice.0"/>
      <address type="virtio-serial" controller="0" bus="0" port="2"/>
    </channel>
    <input type="tablet" bus="usb">
      <address type="usb" bus="0" port="1"/>
    </input>
    <input type="mouse" bus="ps2"/>
    <input type="keyboard" bus="ps2"/>
    <graphics type="spice">
      <listen type="none"/>
      <image compression="off"/>
      <gl enable="yes"/>
    </graphics>
    <sound model="ich9">
      <address type="pci" domain="0x0000" bus="0x00" slot="0x1b" function="0x0"/>
    </sound>
    <audio id="1" type="spice"/>
    <video>
      <model type="virtio" heads="1" primary="yes"> <acceleration accel3d="yes"/> </model>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
    </video>
    <redirdev bus="usb" type="spicevmc">
      <address type="usb" bus="0" port="2"/>
    </redirdev>
    <redirdev bus="usb" type="spicevmc">
      <address type="usb" bus="0" port="3"/>
    </redirdev>
    <watchdog model="itco" action="reset"/>
    <memballoon model="virtio">
      <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
    </memballoon>
    <rng model="virtio">
      <backend model="random">/dev/urandom</backend>
      <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
    </rng>
  </devices>
</domain>
```
It does not look like your problem is directly related to swtpm; at least I do not see any AVC with swtpm. But to be honest, I am also not able to run my VM using virtio with SELinux, so I gave up and left SELinux disabled.
Hello,

It seems the problem I have is the same or similar.

Steps to reproduce:
- Install Fedora Server 41 on bare metal using "Fedora-Server-netinst-x86_64-41-1.4.iso"
- Log in to the server Cockpit GUI "192.168.x.x:9090"
- Install cockpit-machines
- Reboot
- Log in to the server Cockpit GUI "192.168.x.x:9090" again
- Go to the terminal tab
- cd /var/lib/libvirt/images
- sudo wget https://download.fedoraproject.org/pub/fedora/linux/releases/41/Server/x86_64/iso/Fedora-Server-netinst-x86_64-41-1.4.iso
- sudo wget https://download.truenas.com/TrueNAS-SCALE-ElectricEel/24.10.0.2/TrueNAS-SCALE-24.10.0.2.iso
- Go to the virtual machines tab
- Click "Create VM":
  - Name: fedora-server-41-base
  - Installation type: "Local install media (ISO image or distro install tree)"
  - Installation source: /var/lib/libvirt/images/Fedora-Server-netinst-x86_64-41-1.4.iso
  - Operating system: "Fedora Linux 41"
  - Storage: Create new qcow2 volume
  - Storage limit: 20 GB
  - Memory: 4 GB
- Click "Create and edit"
- In Overview, Firmware: select UEFI and save
- Click install

Expected result:
- VM boots into the Fedora installer

Actual result:
- Error message: VM fedora-server-41-base failed to get installed
  ERROR internal error: Could not run '/usr/bin/swtpm_setup'. exitstatus: 1; Check error log '/var/log/swtpm/libvirt/qemu/fedora-server-41-base-swtpm.log' for details.
  Domain installation does not appear to have been successful. If it was, you can restart your domain by running: virsh --connect qemu:///system start fedora-server-41-base
  otherwise, please restart your installation.
  Command '['virt-install', '--connect', 'qemu:///system', '--quiet', '--os-variant', 'fedora41', '--reinstall', 'fedora-server-41-base', '--wait', '-1', '--noautoconsole', '--cdrom', '/var/lib/libvirt/images/Fedora-Server-netinst-x86_64-41-1.4.iso']' returned non-zero exit status 1.
- $ sudo cat /var/log/swtpm/libvirt/qemu/fedora-server-41-base-swtpm.log
  swtpm at /usr/bin/swtpm does not support TPM 2
- SELinux is preventing swtpm from open access on the file /var/log/swtpm/libvirt/qemu/fedora-server-41-base-swtpm.log.

If I do the same steps with the BIOS firmware setting, I can install a Fedora 41 Server VM and a TrueNAS SCALE VM without hiccups.

If I do the same steps under Fedora Server 42, installing a Fedora 41 Server VM in UEFI works, but TrueNAS SCALE 24.10.0.2 still fails. I tried Fedora 42 because I stumbled upon these similar issues, which got me thinking that libvirt 11.0.0 (present in F42 but not F41) could have solved the problem, but it does not completely:
- https://github.com/virt-manager/virt-manager/issues/819
- https://issues.redhat.com/browse/RHEL-69774
- https://gitlab.com/libvirt/libvirt/-/commit/81da7a2c2a2d490cddaaa77d3e3b36e210b38bd7

With help on Ask Fedora and by applying the proposed solutions from the SELinux tab in Cockpit, I could get past the error message, but then the VM starts and is not able to find the .iso file.
(In reply to nh from comment #39)
> Hello,
>
> It seems the problem I have is the same / similar:
>
> Steps to reproduce:
> - Install Fedora server 41 on bare metal using
> "Fedora-Server-netinst-x86_64-41-1.4.iso"
> - Log-in to server cockpit GUI "192.168.x.x:9090"
> - Install cockpit-machines

Thanks for the report and installation instructions. I do not normally use this environment to install VMs.

> Expected result:
> - VM boots into fedora installer
>
> Actual result:
> - Error message: VM fedora-server-41-base failed to get installed
> ERROR internal error: Could not run '/usr/bin/swtpm_setup'. exitstatus: 1;
> Check error log
> '/var/log/swtpm/libvirt/qemu/fedora-server-41-base-swtpm.log' for details.
> Domain installation does not appear to have been successful. If it was, you
> can restart your domain by running: virsh --connect qemu:///system start
> fedora-server-41-base otherwise, please restart your installation. Command
> '['virt-install', '--connect', 'qemu:///system', '--quiet', '--os-variant',
> 'fedora41', '--reinstall', 'fedora-server-41-base', '--wait', '-1',
> '--noautoconsole', '--cdrom',
> '/var/lib/libvirt/images/Fedora-Server-netinst-x86_64-41-1.4.iso']' returned
> non-zero exit status 1.

The missing rule for this setup (with UEFI) seems to be:

allow swtpm_t virt_log_t:file open;

> - $ sudo cat /var/log/swtpm/libvirt/qemu/fedora-server-41-base-swtpm.log
> swtpm at /usr/bin/swtpm does not support TPM 2
>
> - SELinux is preventing swtpm from open access on the file
> /var/log/swtpm/libvirt/qemu/fedora-server-41-base-swtpm.log.
>
> If I do the same steps with BIOS firmware setting, I can install fedora 41
> server VM and TrueNAS SCALE VM without hickups.
>
> If I do the same steps under fedora server 42, installing a fedora 41 server
> VM in UEFI works, but the TrueNAS SCALE 24.10.0.2 still fails.
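Until packages carrying that rule land, it can be loaded as a local policy module as a stopgap. A minimal sketch; the module name `swtpm_local` is arbitrary, and the tools require root plus the checkpolicy/policycoreutils packages:

```shell
# Write the rule quoted above into a local policy module source file.
cat > swtpm_local.te <<'EOF'
module swtpm_local 1.0;

require {
    type swtpm_t;
    type virt_log_t;
    class file open;
}

allow swtpm_t virt_log_t:file open;
EOF

# Compile the module, package it, and load it into the running policy.
checkmodule -M -m -o swtpm_local.mod swtpm_local.te
semodule_package -o swtpm_local.pp -m swtpm_local.mod
sudo semodule -i swtpm_local.pp
```

Once a fixed swtpm-selinux update is installed, the local module can be removed again with `sudo semodule -r swtpm_local`.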
Can you be a bit more precise about "still fails"? On the Fedora 41 server host the TrueNAS setup seems to work, and it is the VM using UEFI that is failing there. So what exactly is failing on the Fedora Server 42 host?
FEDORA-2025-1c1946f65f (swtpm-0.9.0-7.fc41) has been submitted as an update to Fedora 41. https://bodhi.fedoraproject.org/updates/FEDORA-2025-1c1946f65f
FEDORA-2025-dbfcc168d4 (swtpm-0.9.0-5.fc40) has been submitted as an update to Fedora 40. https://bodhi.fedoraproject.org/updates/FEDORA-2025-dbfcc168d4
FEDORA-2025-5253a5d614 (swtpm-0.10.0-8.fc42) has been submitted as an update to Fedora 42. https://bodhi.fedoraproject.org/updates/FEDORA-2025-5253a5d614
RPMs with the missing SELinux rules are now available for f40 - f42 and rawhide.
Please stop working in this bug report, everyone. The original issue was solved long ago. By my count, three or four distinct bugs have accumulated here so far. The more that piles up, the less likely affected users are led here, because it is no longer comprehensible what belongs to which bug; many people who end up here see things that do not match their bug and leave. The ranking in search engines also decreases as the information here spreads out, so with each added bug, people become less likely to land here at all. Finally, everyone who was ever affected by any of the bugs here gets all the emails. It also becomes incomprehensible when so many updates get linked to this one ticket: you never know when you will need the related information again afterwards (what is related to what, when, and where). I hope that makes sense :) Since you have already started to tackle another bug here, it might be easiest to keep that one here too. But after that, please open one ticket per bug: the one here was solved long ago. Even if a new bug has comparable symptoms, it cannot be the same. Thanks :)
FEDORA-2025-5253a5d614 has been pushed to the Fedora 42 testing repository. Soon you'll be able to install the update with the following command: `sudo dnf upgrade --enablerepo=updates-testing --refresh --advisory=FEDORA-2025-5253a5d614` You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2025-5253a5d614 See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.
FEDORA-2025-dbfcc168d4 has been pushed to the Fedora 40 testing repository. Soon you'll be able to install the update with the following command: `sudo dnf upgrade --enablerepo=updates-testing --refresh --advisory=FEDORA-2025-dbfcc168d4` You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2025-dbfcc168d4 See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.
FEDORA-2025-1c1946f65f has been pushed to the Fedora 41 testing repository. Soon you'll be able to install the update with the following command: `sudo dnf upgrade --enablerepo=updates-testing --refresh --advisory=FEDORA-2025-1c1946f65f` You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2025-1c1946f65f See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.
FEDORA-2025-1c1946f65f (swtpm-0.9.0-7.fc41) has been pushed to the Fedora 41 stable repository. If problem still persists, please make note of it in this bug report.
FEDORA-2025-dbfcc168d4 (swtpm-0.9.0-5.fc40) has been pushed to the Fedora 40 stable repository. If problem still persists, please make note of it in this bug report.
FEDORA-2025-5253a5d614 (swtpm-0.10.0-8.fc42) has been pushed to the Fedora 42 stable repository. If problem still persists, please make note of it in this bug report.