Bug 987164 - guest PXE fails even though network on guest is correctly getting address from DHCP
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.5
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Alex Williamson
QA Contact: Virtualization Bugs
URL:
Whiteboard: network
Depends On:
Blocks:
 
Reported: 2013-07-22 20:53 UTC by Allie DeVolder
Modified: 2018-12-04 15:37 UTC
CC List: 17 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-05-12 21:22:03 UTC
Target Upstream Version:
Embargoed:



Description Allie DeVolder 2013-07-22 20:53:27 UTC
Description of problem:
Guest cannot PXE boot even though the guest is getting an address from DHCP. Contact with the PXE server is initiated, but the guest does not attempt to download the necessary files.

Version-Release number of selected component (if applicable):
20130528.0.el6_4

How reproducible:
Very

Steps to Reproduce:
1. Create RHEV environment with VLANs
2. Attempt to boot a new guest with PXE

Actual results:
"no filename or rootpath specified"

Expected results:
Successful boot

Additional info:
There are VLANs in use in this environment.
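
For reference, the error above appears to be the message gPXE prints when the DHCP reply carries neither a boot filename nor a root path, so the DHCP server's PXE options are worth checking first. A minimal ISC dhcpd.conf sketch that supplies both (the subnet, addresses, and filename are hypothetical, not taken from this environment):

subnet 192.168.10.0 netmask 255.255.255.0 {
    range 192.168.10.100 192.168.10.200;
    # TFTP server that serves the boot image (hypothetical address)
    next-server 192.168.10.1;
    # Boot file to fetch; if this option is absent, PXE stops right
    # after the DHCP exchange, matching the symptom reported here.
    filename "pxelinux.0";
}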

Comment 3 Mike Burns 2013-07-22 22:12:05 UTC
Networks are set up and configured through RHEV-M, so moving this to vdsm.

Comment 8 Sibiao Luo 2013-08-28 06:32:00 UTC
Hi all,

   I can't reproduce this issue with the qemu-kvm command line; in my testing, a guest can be installed via the PXE server successfully.

I suspect this is a PXE server configuration or DHCP resolution problem on the customer's side. Could you double-check and help narrow down the scope of the problem? Please paste your configuration and detailed steps to reproduce. Thanks in advance.

Host info:
# uname -r && rpm -q qemu-kvm
2.6.32-413.el6.x86_64
qemu-kvm-0.12.1.2-2.398.el6.x86_64

# /usr/libexec/qemu-kvm -M rhel6.5.0 -cpu SandyBridge -enable-kvm -m 4096 \
    -smp 4,sockets=2,cores=2,threads=1 -no-kvm-pit-reinjection -name sluo \
    -uuid 43425b70-86e5-4664-bf2c-3b76699b8bec \
    -rtc base=localtime,clock=host,driftfix=slew \
    -device virtio-serial-pci,id=virtio-serial0,max_ports=16,vectors=0,bus=pci.0,addr=0x3 \
    -chardev socket,id=channel1,path=/tmp/helloworld1,server,nowait \
    -device virtserialport,chardev=channel1,name=com.redhat.rhevm.vdsm.1,bus=virtio-serial0.0,id=port1,nr=1 \
    -chardev socket,id=channel2,path=/tmp/helloworld2,server,nowait \
    -device virtserialport,chardev=channel2,name=com.redhat.rhevm.vdsm.2,bus=virtio-serial0.0,id=port2,nr=2 \
    -drive file=/home/bug987164testing.qcow2,if=none,id=drive-system-disk,format=qcow2,cache=none,aio=native,werror=stop,rerror=stop,serial="QEMU-DISK1" \
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-system-disk,id=system-disk,bootindex=1 \
    -device virtio-balloon-pci,id=ballooning,bus=pci.0,addr=0x5 \
    -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 \
    -netdev tap,id=hostnet0,vhost=off,script=/etc/qemu-ifup \
    -device virtio-net-pci,netdev=hostnet0,id=virtio-net-pci0,mac=2C:41:38:B6:40:21,bus=pci.0,addr=0x6,bootindex=0 \
    -k en-us -boot menu=on -qmp tcp:0:4444,server,nowait \
    -serial unix:/tmp/ttyS0,server,nowait -vnc :1 \
    -spice port=5931,disable-ticketing -monitor stdio

Best Regards,
sluo
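
For anyone replaying the command line above: script=/etc/qemu-ifup hands the freshly created tap device to a helper script for host-side wiring. A minimal sketch of such a script, assuming an existing host bridge named br0 (the bridge name is an assumption, not taken from this setup; the script must be executable):

#!/bin/sh
# QEMU passes the tap device name as $1; bring it up and attach it
# to the host bridge so the guest can reach the DHCP/PXE server.
BRIDGE=br0
/sbin/ip link set "$1" up
/usr/sbin/brctl addif "$BRIDGE" "$1"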

Comment 9 Ademar Reis 2013-09-05 17:54:55 UTC
(In reply to Sibiao Luo from comment #8)
> Hi all,
> 
>    I can't reproduce this issue with the qemu-kvm command line; in my
> testing, a guest can be installed via the PXE server successfully.
> 
> I suspect this is a PXE server configuration or DHCP resolution problem
> on the customer's side. Could you double-check and help narrow down the
> scope of the problem? Please paste your configuration and detailed steps
> to reproduce. Thanks in advance.
> 
NEEDINFO(reporter)

Comment 11 Alex Williamson 2013-10-02 02:57:51 UTC
Why does the trace show duplicate packets for all bootp traffic?  Please retest with gpxe-roms-qemu-0.9.7-6.10.el6.noarch.rpm on the host system.
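
One way to chase the duplication is to capture the BOOTP/DHCP exchange on the host with link-level headers, so duplicate frames can be told apart by source MAC and VLAN tag; a sketch, assuming the guest traffic crosses a host bridge named br0 (the interface name is an assumption):

# tcpdump -i br0 -nn -e 'port 67 or port 68 or (vlan and (port 67 or port 68))'

The explicit vlan clause matters here: without it, 802.1Q-tagged copies of the same request can be silently missed, which is relevant in a VLAN setup like the one reported.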

Comment 12 Allie DeVolder 2013-10-02 13:56:03 UTC
(In reply to Alex Williamson from comment #11)
> Why does the trace show duplicate packets for all bootp traffic?  Please
> retest with gpxe-roms-qemu-0.9.7-6.10.el6.noarch.rpm on the host system.

I couldn't find a version of gpxe-roms-qemu newer than what the customer is using: gpxe-roms-qemu-0.9.7-6.9.el6.noarch

If you can put the rpm together, I can give it to the customer.
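
If it helps, every build published in the configured repositories can be listed in one go; a sketch, assuming the host's yum repositories are reachable:

# yum --showduplicates list gpxe-roms-qemu

If the -6.10 build doesn't appear, it presumably wasn't published to the public repositories, which matches the request above for a hand-built rpm.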

Comment 16 Alex Williamson 2014-05-12 21:22:03 UTC
If there's any objection to closing this, please re-open.  The customer cases for this issue are closed, we've not been able to reproduce the problem, and we don't have enough information here about the configuration to make progress in reproducing.

