Description of problem:
I am trying to deploy a VM over PXE using virt-install, but when I use UEFI with PXE, DHCP never completes. Turning off UEFI with the same settings makes the process work immediately. The host contains the DHCP server (dnsmasq). Using tcpdump on the newly created interface of the VM, I see that only the Discover and Offer stages occur, repeatedly, but never the Request or Ack stages. I am using an e1000 NIC to avoid the DHCP/UDP packets-with-bad-checksums issue. The same thing happens if I use a virtio NIC.

Version-Release number of selected component (if applicable):
edk2-ovmf-20221117gitfff6d81270b5-14.fc37

How reproducible:
always

Steps to Reproduce:
1. `sudo dnf install libvirt virt-install`
2. `sudo virt-install --boot uefi,bios.useserial=on --pxe --network network=default,model=e1000,target.dev=fedora37 --os-variant fedora37 --name fedora37 --graphics none`
3. `sudo tcpdump -i fedora37 -v` in another terminal to see DHCP packets

Actual results:
The system says 'PXE-E16: No valid offer received' and does not try to boot.

Expected results:
The system should get an IP and attempt to boot.

Additional info:
- Removing `uefi,` from the above command immediately fixes the issue, i.e.:
  `sudo virt-install --boot bios.useserial=on --pxe --network network=default,model=e1000,target.dev=fedora37 --os-variant fedora37 --name fedora37 --graphics none`
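For readers following the capture: the DHCP stages named in the report are carried in option 53 (DHCP message type, RFC 2132), which is what tcpdump decodes. A small illustrative Python helper (names are mine, not part of the report) mapping those codes:

```python
# DHCP message types per RFC 2132, option 53. A successful lease is the
# four-step DISCOVER -> OFFER -> REQUEST -> ACK exchange; the report
# only ever sees the first two repeating.
DHCP_MSG_TYPES = {
    1: "DHCPDISCOVER",
    2: "DHCPOFFER",
    3: "DHCPREQUEST",
    4: "DHCPDECLINE",
    5: "DHCPACK",
    6: "DHCPNAK",
    7: "DHCPRELEASE",
}

def classify(option53: int) -> str:
    """Name a DHCP message by its option-53 value."""
    return DHCP_MSG_TYPES.get(option53, f"other ({option53})")

# The failing UEFI capture described above would classify as:
print([classify(t) for t in (1, 2, 1, 2)])
# ['DHCPDISCOVER', 'DHCPOFFER', 'DHCPDISCOVER', 'DHCPOFFER']
```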
We seem to see this with UEFI VMs on KubeVirt as well. Paolo, who could help here?
What does your libvirt network configuration look like (i.e. 'virsh net-dumpxml default')?
Following up on Fabian's comment, I attached the dom.xml that is problematic in our case. We are not using libvirt's network API. I can't speak for ykuksenko.
(In reply to ykuksenko from comment #0)

> Description of problem:
> I am trying to deploy a VM over PXE using virt-install but when I use
> UEFI with PXE, DHCP never completes. Turning off UEFI, with the same
> settings makes the process work immediately. The host contains the
> DHCP server (dnsmasq).

> 2. `sudo virt-install --boot uefi,bios.useserial=on --pxe --network
> network=default,model=e1000,target.dev=fedora37 --os-variant fedora37
> --name fedora37 --graphics none`

Can you confirm your "default" network looks something like this (see also Gerd's comment 2):

# virsh net-dumpxml --inactive default
<network>
  <name>default</name>
  <uuid>c71c33d9-96dc-4873-860c-ab525ffc72ca</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:50:a8:98'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <tftp root='/var/lib/tftpboot'/>
    <dhcp>
      <range start='192.168.122.3' end='192.168.122.254'/>
      <bootp file='shim.efi'/>
    </dhcp>
  </ip>
</network>

(Because this config works fine on my end.)

> I am using tcpdump on the newly created interface of the VM to see
> that only Discover, and Offer stages occur, repeatedly but not the
> Request or Ack stages. I am using an e1000 NIC to avoid the DHCP/UDP
> packets with bad checksums issue. The same thing happens if I use
> virtio NIC.

> 3. `sudo tcpdump -i fedora37 -v` in another terminal to see DHCP
> packets

- Can you attach your captured packets?
- Any particular reason for sniffing the fedora37 interface (from "target.dev=fedora37" on the virt-install cmdline) rather than "virbr0"?

> Additional info:
> - removing `uefi,` from the above command immediately fixes the issue.

The problem with a libvirt-managed dnsmasq for PXE boot is that libvirt doesn't let us customize the bootp/@file attribute, dependent on PXE client architecture. Meaning you can't specify "shim.efi" for UEFI guests, vs. "pxelinux.0" for BIOS guests.
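For background on the client-architecture distinction mentioned above: a PXE client advertises its architecture in DHCP option 93, and that is what dnsmasq's `option:client-arch` matches on. A hedged Python sketch decoding that option; the value table follows RFC 4578 (the ARM codes come from later IANA additions) and is illustrative, not exhaustive:

```python
import struct

# Client System Architecture values (DHCP option 93) per RFC 4578 and
# later IANA additions; dnsmasq's dhcp-match=...,option:client-arch,N
# tests exactly these codes.
CLIENT_ARCH = {
    0: "Intel x86PC (BIOS)",
    6: "EFI IA32",
    7: "EFI BC (x64 UEFI)",
    9: "EFI x86-64",
    10: "ARM 32-bit UEFI",
    11: "ARM 64-bit UEFI",
}

def decode_client_arch(option93: bytes) -> list[str]:
    """Decode option 93's payload: a list of 16-bit big-endian arch codes."""
    codes = struct.unpack(f">{len(option93) // 2}H", option93)
    return [CLIENT_ARCH.get(c, f"unknown ({c})") for c in codes]

# An OVMF x64 UEFI PXE client typically sends arch code 7, a BIOS client 0:
print(decode_client_arch(b"\x00\x07"))  # ['EFI BC (x64 UEFI)']
print(decode_client_arch(b"\x00\x00"))  # ['Intel x86PC (BIOS)']
```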
dnsmasq itself is capable of such a distinction (I forget the exact syntax, but a few years ago I had worked it out -- it was difficult), but libvirt doesn't expose it, AFAIK.
> The problem with a libvirt-managed dnsmasq for PXE boot is that libvirt
> doesn't let us customize the bootp/@file attribute, dependent on PXE
> client architecture. Meaning you can't specify "shim.efi" for UEFI
> guests, vs. "pxelinux.0" for BIOS guests.

libvirt got support for adding custom dnsmasq config file lines via the dnsmasq xml namespace a while back, which can be used to configure this (and uefi http boot too). Here is mine:

<network xmlns:dnsmasq='http://libvirt.org/schemas/network/dnsmasq/1.0'>
  <name>default</name>
  <uuid>75940bb7-ffba-47bd-97dd-a9c64e5e562f</uuid>
  <forward mode='route'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:c2:80:23'/>
  <domain name='sirius.kraxel.org' localOnly='yes'/>
  <ip address='192.168.105.1' netmask='255.255.255.0' localPtr='yes'>
    <dhcp>
      <range start='192.168.105.100' end='192.168.105.199'/>
      <bootp file='http://boot.home.kraxel.org/pxelinux.0'/>
    </dhcp>
  </ip>
  <dnsmasq:options>
    <dnsmasq:option value='#'/>
    <dnsmasq:option value='dhcp-match=set:efi-ia32-pxe,option:client-arch,6'/>
    <dnsmasq:option value='dhcp-boot=tag:efi-ia32-pxe,/arch-i386/grubia32.efi,,192.168.2.10'/>
    <dnsmasq:option value='#'/>
    <dnsmasq:option value='dhcp-match=set:efi-x64-pxe,option:client-arch,7'/>
    <dnsmasq:option value='dhcp-match=set:efi-x64-pxe,option:client-arch,9'/>
    <dnsmasq:option value='dhcp-boot=tag:efi-x64-pxe,/arch-x86_64/grubx64.efi,,192.168.2.10'/>
    <dnsmasq:option value='#'/>
    <dnsmasq:option value='dhcp-match=set:efi-arm-pxe,option:client-arch,10'/>
    <dnsmasq:option value='dhcp-boot=tag:efi-arm-pxe,/arch-armhfp/grubarm.efi,,192.168.2.10'/>
    <dnsmasq:option value='#'/>
    <dnsmasq:option value='dhcp-match=set:efi-aa64-pxe,option:client-arch,11'/>
    <dnsmasq:option value='dhcp-boot=tag:efi-aa64-pxe,/arch-aarch64/grubaa64.efi,,192.168.2.10'/>
    <dnsmasq:option value='#'/>
    <dnsmasq:option value='dhcp-match=set:ppc64,option:client-arch,12'/>
    <dnsmasq:option value='dhcp-boot=tag:ppc64,boot/grub/powerpc-ieee1275/core.elf,,192.168.2.10'/>
    <dnsmasq:option value='#'/>
    <dnsmasq:option value='dhcp-match=set:efi-ia32-http,option:client-arch,15'/>
    <dnsmasq:option value='dhcp-boot=tag:efi-ia32-http,http://boot.home.kraxel.org/arch-i386/grubia32.efi'/>
    <dnsmasq:option value='dhcp-option-force=tag:efi-ia32-http,60,HTTPClient'/>
    <dnsmasq:option value='#'/>
    <dnsmasq:option value='dhcp-match=set:efi-x64-http,option:client-arch,16'/>
    <dnsmasq:option value='dhcp-boot=tag:efi-x64-http,http://boot.home.kraxel.org/arch-x86_64/grubx64.efi'/>
    <dnsmasq:option value='dhcp-option-force=tag:efi-x64-http,60,HTTPClient'/>
    <dnsmasq:option value='#'/>
    <dnsmasq:option value='dhcp-match=set:efi-arm-http,option:client-arch,18'/>
    <dnsmasq:option value='dhcp-boot=tag:efi-arm-http,http://boot.home.kraxel.org/arch-armhfp/grubarm.efi'/>
    <dnsmasq:option value='dhcp-option-force=tag:efi-arm-http,60,HTTPClient'/>
    <dnsmasq:option value='#'/>
    <dnsmasq:option value='dhcp-match=set:efi-aa64-http,option:client-arch,19'/>
    <dnsmasq:option value='dhcp-boot=tag:efi-aa64-http,http://boot.home.kraxel.org/arch-aarch64/grubaa64.efi'/>
    <dnsmasq:option value='dhcp-option-force=tag:efi-aa64-http,60,HTTPClient'/>
  </dnsmasq:options>
</network>
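The repetitive dhcp-match/dhcp-boot pairs above can also be generated rather than hand-written. A hypothetical sketch with Python's stdlib ElementTree, modeled on the PXE entries in the config above (the tags, paths, and the 192.168.2.10 next-server address are taken from that example, not prescribed by libvirt):

```python
import xml.etree.ElementTree as ET

# libvirt's dnsmasq passthrough namespace, as used in the network XML above.
DNSMASQ_NS = "http://libvirt.org/schemas/network/dnsmasq/1.0"
ET.register_namespace("dnsmasq", DNSMASQ_NS)

# (tag, client-arch codes, boot file) -- modeled on the PXE entries above.
PXE_ARCHES = [
    ("efi-ia32-pxe", [6], "/arch-i386/grubia32.efi"),
    ("efi-x64-pxe", [7, 9], "/arch-x86_64/grubx64.efi"),
    ("efi-arm-pxe", [10], "/arch-armhfp/grubarm.efi"),
    ("efi-aa64-pxe", [11], "/arch-aarch64/grubaa64.efi"),
]
TFTP_SERVER = "192.168.2.10"  # next-server address from the config above

def build_options() -> ET.Element:
    """Build a <dnsmasq:options> element with match/boot lines per arch."""
    opts = ET.Element(f"{{{DNSMASQ_NS}}}options")
    for tag, codes, path in PXE_ARCHES:
        for code in codes:
            ET.SubElement(opts, f"{{{DNSMASQ_NS}}}option",
                          value=f"dhcp-match=set:{tag},option:client-arch,{code}")
        ET.SubElement(opts, f"{{{DNSMASQ_NS}}}option",
                      value=f"dhcp-boot=tag:{tag},{path},,{TFTP_SERVER}")
    return opts

xml_text = ET.tostring(build_options(), encoding="unicode")
print(xml_text)
```

The resulting fragment can then be pasted into `virsh net-edit` under the `<network>` element (which must declare the xmlns:dnsmasq namespace, as in the config above).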
TIL :) A great feature; thanks for highlighting it!
This message is a reminder that Fedora Linux 37 is nearing its end of life. Fedora will stop maintaining and issuing updates for Fedora Linux 37 on 2023-12-05. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a 'version' of '37'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, change the 'version' to a later Fedora Linux version. Note that the version field may be hidden. Click the "Show advanced fields" button if you do not see it.

Thank you for reporting this issue and we are sorry that we were not able to fix it before Fedora Linux 37 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora Linux, you are encouraged to change the 'version' to a later version prior to this bug being closed.
Fedora Linux 37 entered end-of-life (EOL) status on 2023-12-05. Fedora Linux 37 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora Linux please feel free to reopen this bug against that version. Note that the version field may be hidden. Click the "Show advanced fields" button if you do not see the version field. If you are unable to reopen this bug, please file a new report against an active release.

Thank you for reporting this bug and we are sorry it could not be fixed.