Bug 249077
| Field | Value |
| --- | --- |
| Summary | virt-manager installs on a non-typed disk partition; install succeeds, domU fails |
| Product | Red Hat Enterprise Linux 5 |
| Component | xen |
| Version | 5.1 |
| Hardware | ia64 |
| OS | Linux |
| Status | CLOSED NOTABUG |
| Severity | high |
| Priority | low |
| Reporter | Chuck Morrison <chuck.morrison> |
| Assignee | Hugh Brock <hbrock> |
| CC | dchapman, jarod, martine.silbermann, rick.hester |
| Doc Type | Bug Fix |
| Bug Blocks | 223107 |
| Last Closed | 2007-07-25 20:37:15 UTC |
| Attachments | xend.log (159805), xend-debug.log (159806), virt-manager.log (159808) |
Description (Chuck Morrison, 2007-07-20 19:12:14 UTC)
Also, during the install of this domU, when it initially booted to elilo, I could see fs0 and fs1. The new install's elilo files were in fs1, while fs0 held the elilo boot files for the previous (and running at the time) domU. Normally, when installing to a partition with an ext2 (for example) file system, only fs0 is visible.

Virt-manager (and python-virtinst, where the create work is done) doesn't know anything about creating filesystems for guests or checking whether they exist -- it relies on the guest installer for that. There are many reasons you might not be getting a console for a guest you are installing, but I doubt the filesystem, or lack thereof, is one of them. Can you post xend.log, xend-debug.log, and virt-manager.log from when you create the guest? FWIW, if you are trying to get a serial console from a fully virtualized guest, the serial console is unavailable in RHEL5.x xen. The only way to see a screen with a fully virtualized guest is by connecting to it with vncviewer or the virt-manager graphical console.

Created attachment 159805 [details]: xend.log

Created attachment 159806 [details]: xend-debug.log

Created attachment 159808 [details]: virt-manager.log
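For reference, the graphical-console advice above can be exercised from dom0; a minimal sketch, assuming `virsh` is present on the RHEL5 host and using the hypothetical guest name `domU1`:

```sh
# Ask libvirt which VNC display the fully virtualized guest owns
# ("domU1" is a placeholder; take the real name from `virsh list`).
virsh vncdisplay domU1

# If it prints, say, ":0", attach to the guest's screen from dom0:
vncviewer localhost:0
```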
There are a lot of installs in these files. I don't know how to isolate them for you.
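One way to pick a single install out of these logs, as a sketch: grep for the guest's name or for VmError entries (the log paths are the usual RHEL5 locations, and `domU1` is again a placeholder):

```sh
# Show only the lines for one guest, with line numbers, so a single
# install attempt can be read in isolation.
grep -n "domU1" /var/log/xen/xend.log

# Failures tend to surface as VmError tracebacks; list them with context.
grep -n -A2 "VmError" /var/log/xen/xend-debug.log
```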
I tried this with ext2 as well and found the same issue, so I agree it's not the file-system type. It appears that after the initial fully virtualized guest install succeeded, any additional install fails. Unfortunately, it hosed the initial install as well, which would not restart after shutting it down to try another install. I am aware that the graphical console is needed, but it never even gets to the EFI prompt in that console. At this point in the install (first boot), vncviewer is not an option, as I need console access to start elilo. It appears that it never starts EFI; it claims there is no console to attach to.

From your xend-debug.log:

```
  File "/usr/lib/python2.4/site-packages/xen/xend/image.py", line 128, in createDomain
    raise VmError('Kernel image does not exist: %s' % self.kernel)
VmError: Kernel image does not exist: /usr/lib/xen/boot/hvmloader
```

It looks like you're missing the package xen-ia64-guest-firmware, which provides the hvmloader file. hvmloader is an emulated EFI for hvm guests and is required for them to function. It's found on the supplemental packages CD. Make sure you have that installed, and I think you should be in much better shape.

This was indeed a problem with the first install. I discovered the problem and installed the rpm you mention. Continue down the log and you'll find more installs without that particular issue. I probably did at least 5 or 6 installs after installing the guest-firmware rpm.

Hm... all subsequent log info in xend-debug.log appears to be related to paravirt guests, not hvm guests (based on the existence of messages about pygrub, which is only for pv guests). Poking through xend.log now...

Actually, given that you're attempting to use partitions on a drive hooked to a cciss controller, I'm curious whether bug 249104 is the real root cause of the problems. Are you seeing any "out of memory" messages in dmesg, or messages related to the cciss driver?

re: comment 11 -- Yes, I did try some para-virtual installs. They failed.

re: comment 12 -- Yes, I have seen these out-of-memory errors when dom0 boots. Now, to confuse matters more, I have just completed a fresh install (dom0) on the same machine (rx2660) and have successfully installed two fully virtualized domUs to the same disk partitions I had the errors on previously. They still don't have labels, according to parted. I did not attempt to make any domUs before I installed the guest-firmware rpm. I have not seen a repeat of the initial errors that prompted this bug report.

The main difference I can think of between the installs that prompted this report and the latter installs (see comment 13) without problems is that this time I did not allow any doms (not even dom0) to use LVM. Every one is going straight to disk (or virtual disk). This may be significant or not.

OK... can you clear xend.log and virt-manager.log (actually, virt-manager.log will clear itself on startup) and do a single hvm guest install using LVM? If that fails, post those logs here and we'll see what we can work out -- if it succeeds, then it sounds like we have no bug. Thanks!

OK, I can do that. What's the preferred way to clear xend.log? My guess is to stop xend, blow away xend.log, and restart xend. Is there a better way?

That should do it, yes. xend probably also clears it on startup; not 100% sure on that.

I was wrong, xend doesn't clear its log on startup. You'd have to delete it manually and reboot...

I reinstalled everything to try this again with LVM included. Oddly enough, I did not experience the failures I did last week. I installed two LVM-based virtual machines and two non-LVM virtual machines. They are all running, and I've stopped and restarted them several times. So at this point I don't know what was happening to mess these up, but I suspect that something I was doing triggered the bug in bz#249574, since the error messages I've seen point to a missing domain id, which I'm assuming is the UUID.

Well, I'm glad it is now working. We also have a fix for #249574 on the way. Thanks!
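For anyone who lands on the hvmloader VmError above, a quick verification sketch (the file path comes straight from the traceback; the package name is the one given in the thread):

```sh
# Confirm the ia64 guest firmware package from the supplemental CD
# is installed on dom0.
rpm -q xen-ia64-guest-firmware

# Confirm the emulated EFI image that xend looks for actually exists.
ls -l /usr/lib/xen/boot/hvmloader
```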
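Likewise, the cciss/out-of-memory theory and the unlabeled-partition observation can be checked with standard tools; a sketch, assuming the first cciss disk shows up as `/dev/cciss/c0d0` (adjust for the actual controller layout):

```sh
# Look for the out-of-memory and cciss driver messages asked about above.
dmesg | grep -i -E "out of memory|cciss"

# Print the partition table; parted also reports the disk label type here.
parted /dev/cciss/c0d0 print
```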
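Finally, the stop/remove/restart approach to clearing xend.log that the thread settled on, as a sketch (assuming the stock RHEL5 init script and log path; as noted above, xend does not clear its own log on startup):

```sh
# Stop xend, drop the old log, and start xend again so the next guest
# install writes into a fresh xend.log.
service xend stop
rm -f /var/log/xen/xend.log
service xend start
```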