| Summary: | libvirt x86_64 0.8.7-18.el6_1.1 breaks PXE boot. | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Owen <owen.synge> |
| Component: | libvirt | Assignee: | Laine Stump <laine> |
| Status: | CLOSED WORKSFORME | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 6.0 | CC: | acathrow, dallan, dmair, dyuan, mzhan, rwu |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2011-10-17 14:48:01 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
Owen
2011-09-30 15:06:01 UTC

Can you attach the XML config of your guest, and of the libvirt virtual network it's using (unless it's using a direct bridged network connection)?

Hello: While Red Hat definitely appreciates bugs being reported to us, particularly for our community versions, the Bugzilla site is not a mechanism for Red Hat to deliver support. This issue sounds like it is very important to your business, so I recommend opening a case with Red Hat's support via access.redhat.com. By going through Red Hat's support, your issue will be properly prioritised and given appropriate attention by our software engineers. Unfortunately, our software engineers can only work on bugs reported directly to Bugzilla as their time permits, and there is no service level agreement for anything filed via Bugzilla. Kind regards, Dave Mair, Sr. Manager, Global Support Services

PXE booting VMs with libvirt-0.8.7-18.el6_1.1.x86_64 works fine on my test system. Can you provide some more detail about what you're seeing, e.g., exactly where the boot fails, and your environment? BTW, just to clear up a possible misconception in the original report: testing PXE boot is not only a part of the standard test plan (I just verified this), but is an integral part of the testing and development platforms. There is something more complicated at play here - maybe an unusual config, or possibly some other package is non-standard.

Dear all,

Great to hear it's part of the standard testing plan.

What I am seeing is that the PXE boot is started and then aborted before DHCP even begins; booting then moves on to the next devices.

The network is set up as a bridged network; DHCP is served by another department.

Here is the XML.

Regards,
Owen
<domain type='kvm'>
  <name>grid-vm10</name>
  <uuid>09da1398-7847-cf8d-6098-6f4046a1b437</uuid>
  <memory>2097152</memory>
  <currentMemory>2097152</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.1.0'>hvm</type>
    <boot dev='network'/>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/grid-vm10.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' unit='0'/>
    </disk>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='02:11:69:22:32:31'/>
      <source bridge='bridge0'/>
      <model type='rtl8139'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes'/>
    <sound model='ac97'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
</domain>
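
For reference, the boot sequence in play here is the ordered list of <boot dev='...'/> elements under <os> in the XML above. The following is a minimal sketch, not from the original report, of how one might confirm the order libvirt passes to qemu-kvm, assuming the libvirt-python bindings named in this bug and a local qemu:///system connection:

```python
import libvirt
import xml.etree.ElementTree as ET

# Connect to the local hypervisor (assumes qemu:///system).
conn = libvirt.open('qemu:///system')

# 'grid-vm10' is the domain name from the XML attached above.
dom = conn.lookupByName('grid-vm10')

# Fetch the current XML definition and list the <boot> devices in order.
root = ET.fromstring(dom.XMLDesc(0))
for boot in root.findall('./os/boot'):
    print(boot.get('dev'))  # expected here: network, cdrom, hd

conn.close()
```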
That's strange; you're using a very similar setup to mine: bridged networking with external DHCP and PXE servers. My setup was originally virtio with a bridge named br0, but I changed my config to rtl8139 and bridge0, and it's still working fine for me. Are you saying that you upgraded libvirt, it did not work for you, then you downgraded libvirt and it started working again?

Dear Dave,

Yes, exactly this. For the sake of interest: being a developer (so I have an excuse to play with new toys), I can be less cautious about upgrades, particularly since my release-testing server can be (auto-)reinstalled. Having a fix for 6.0, I decided to upgrade to 6.1; this also fixed the issue, and I did not need to roll back to the RPMs without the security fix.

So the issue seems to be using these libraries on 6.0:

libvirt-client-0.8.7-18.el6_1.1.x86_64
libvirt-python-0.8.7-18.el6_1.1.x86_64
libvirt-0.8.7-18.el6_1.1.x86_64

but they seem to work fine with 6.1.

Regards,
Owen

Since the original reporter says the bug disappeared when he upgraded the rest of the system to 6.1, and several others report successful operation of PXE boot, I'm closing this bug. If it re-occurs, please re-open.
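
The resolution above comes down to version skew: 6.1 errata (el6_1) libvirt packages running on a 6.0 base system. A rough sketch, not from the original report, of how such a mismatch could be flagged; the package names are the ones quoted in this bug, and the use of redhat-release-server as the base-release indicator is an assumption:

```python
import subprocess

def rpm_version(package):
    """Return the installed VERSION-RELEASE of a package, or None."""
    result = subprocess.run(
        ['rpm', '-q', '--qf', '%{VERSION}-%{RELEASE}', package],
        capture_output=True, text=True)
    return result.stdout if result.returncode == 0 else None

# Assumption: redhat-release-server identifies the base release,
# e.g. '6Server-6.1.0.2.el6' on a RHEL 6.1 system.
base = rpm_version('redhat-release-server')

for pkg in ('libvirt', 'libvirt-client', 'libvirt-python'):
    ver = rpm_version(pkg)
    # Flag the combination reported in this bug: el6_1 errata builds
    # installed on a base system that is not 6.1.
    if ver and 'el6_1' in ver and base and '6.1' not in base:
        print(f'{pkg}-{ver} is a 6.1 errata build on a non-6.1 base')
```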