Description of problem:
Installation of a Xen DomU using a logical volume for storage fails at the very end of the installation routine with an unhandled exception. The VM then fails on reboot.

Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux Client release 5.1 (Tikanga)
xen-libs-3.0.3-41.el5
xen-3.0.3-41.el5
kernel-xen-2.6.18-53.1.14.el5
libvirt-python-0.2.3-9.el5_1.1
python-virtinst-0.103.0-3.el5_1.1
libvirt-0.2.3-9.el5_1.1
virt-manager-0.4.0-3.el5

How reproducible:
100% so far.

Steps to Reproduce:
1. Create a volume group and logical volume.
2. Configure a new Xen VM to use the LV for its storage.
3. Proceed through the installation.

Actual results:
At the very end of an otherwise apparently successful installation, the process aborts with the message:

An unhandled exception has occurred. This is most likely a bug. Please save a copy of the detailed exception and file a bug report against anaconda at http://bugzilla.redhat.com.

An attempt to reboot the VM fails with:

virDomainCreate() failed POST operation failed: (xend.err "Error creating domain: Boot loader didn't return any data!")

Expected results:
RHEL 5.1 installs on the VM and the VM reboots successfully.

Additional info:
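For context, a paravirtualized RHEL 5 guest backed by a logical volume is normally defined with a `phy:` disk and the pygrub bootloader; a minimal sketch of such a config follows (the guest, volume group, and LV names are hypothetical, not taken from this report):

```python
# /etc/xen/rhel51-guest -- Xen domU configuration (RHEL 5 syntax)
# pygrub reads the kernel and initrd out of the guest's filesystem on the LV;
# "Boot loader didn't return any data!" means pygrub found nothing usable there.
name       = "rhel51-guest"
memory     = 512
bootloader = "/usr/bin/pygrub"
# phy: hands the whole logical volume to the guest as its xvda block device
disk       = ["phy:/dev/vg_guests/rhel51,xvda,w"]
vif        = ["bridge=xenbr0"]
on_reboot  = "restart"
on_crash   = "restart"
```

If the installer's writes to the LV were silently lost (as the I/O errors later in this report suggest), pygrub would find no bootable data, which matches the reboot failure.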
Created attachment 299938 [details] Exception report
Error on attempt to reboot is:

Unable to start virtual machine 'libvirt.libvirtError virDomainCreate() failed POST operation failed: (xend.err "Error creating domain: Boot loader didn't return any data!")
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/console.py", line 348, in control_vm_run
    self.vm.startup()
  File "/usr/share/virt-manager/virtManager/domain.py", line 375, in startup
    self.vm.create()
  File "/usr/lib64/python2.4/site-packages/libvirt.py", line 217, in create
    if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: virDomainCreate() failed POST operation failed: (xend.err "Error creating domain: Boot loader didn't return any data!")
'
There are plenty of kernel error messages at the end of your syslog that point at a hardware or driver error. The following is an excerpt showing some of these messages:

<4>lost page write due to I/O error on xvda1
<3>Buffer I/O error on device xvda1, logical block 292
<4>lost page write due to I/O error on xvda1
<3>Buffer I/O error on device xvda1, logical block 293
<4>lost page write due to I/O error on xvda1
<4>end_request: I/O error, dev xvda, sector 2171
<4>end_request: I/O error, dev xvda, sector 107
<4>end_request: I/O error, dev xvda, sector 1137
Okay. I'm trying to identify the cause of the problem. I suspect that there is still a bug here somewhere (maybe in the LVM driver code?). I don't see any indication of problems with the actual storage hardware -- I can put a filesystem on the same LV and write to it normally without generating any errors in /var/log/messages. But when I try to use that volume as the storage device for the VM, I get the problem above consistently.

Watching /var/log/messages with tail -f during an installation attempt shows the following message (and others differing only in the final numbers):

Apr  7 11:26:35 RH5-3G4DKB1 kernel: raid0_make_request bug: can't convert block across chunks or bigger than 256k 507 3

Does this indicate that the bug report should be filed against the kernel? Or are there other things I should investigate? (Forgive me if I'm missing something that should be evident -- I'm not a programmer and not familiar with how to prepare a full and complete bug report.)
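When watching the log, it helps to filter for just the error signatures quoted in this report rather than eyeballing tail -f output. A minimal sketch (a hypothetical helper, not part of any Red Hat tooling; the sample lines are abbreviated from the excerpts above):

```python
import re

# Signatures of the kernel messages quoted in this bug report
SUSPECT = re.compile(
    r"(Buffer I/O error|lost page write|end_request: I/O error|"
    r"raid0_make_request bug)"
)

def suspect_lines(log_text):
    """Return only the syslog lines matching the error signatures above."""
    return [line for line in log_text.splitlines() if SUSPECT.search(line)]

sample = """\
kernel: lost page write due to I/O error on xvda1
kernel: eth0: link up
kernel: raid0_make_request bug: can't convert block across chunks or bigger than 256k 507 3
"""
for line in suspect_lines(sample):
    print(line)
```

Piping a copy of /var/log/messages through a filter like this makes it easy to confirm that the only recurring errors involve the xvda device and the raid0 code path, not other hardware.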
Seems like a likely candidate for a kernel bug. I'll reassign it and see what they think.
This is probably a dup of bug 223947 (despite the raid10 vs. raid0 difference, it looks like the same sort of message). I'm going to close it as such.

Chris Lalancette

*** This bug has been marked as a duplicate of bug 223947 ***