Red Hat Bugzilla – Bug 249077
virt-manager installs on a non-typed disk partition; install succeeds, domU fails
Last modified: 2007-11-16 20:14:58 EST
Description of problem:
When setting up a disk (/dev/cciss/c1d0) with partitions using parted, specifying
ext3 as the filesystem type results in no filesystem on the partition.
Virt-manager is fine with installing to this non-existent file system, but the
domU that is installed never even gets a VM Console to start after the install.
Version-Release number of selected component (if applicable):
I've had it happen all three times I've tried it.
It happens with fully virtualized domU installs. Para-virtualized domUs are
never able to start the install process, just a blank screen on the VM Console.
Steps to Reproduce:
1. partition a hard drive with parted and specify ext3 as the filesystem type
2. create a new fully virtualized domU and use the above partition
3. try to get to a first boot on the installed system.
The VM Console just shows a black screen indefinitely; CPU usage for that domU
rises very slowly over time.
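For reference, the failure in step 1 comes down to a parted quirk: the filesystem-type argument to mkpart is only a partition-type hint and never writes an actual filesystem. A minimal sketch, demonstrated on a scratch image file rather than the real /dev/cciss device (the image path and guard are illustrative, not from the report):

```shell
# Sketch: parted's "ext3" argument to mkpart only records a partition-type
# hint; it does not create an ext3 filesystem on the partition.
img=/tmp/parted-demo.img
truncate -s 64M "$img"                 # empty scratch "disk"
if command -v parted >/dev/null 2>&1; then
    parted -s "$img" mklabel msdos mkpart primary ext3 1MiB 32MiB
    parted -s "$img" print             # partition exists, but it is empty
fi
# A real filesystem still has to be made separately; on the actual disk
# that would be (as root):
#   mkfs.ext3 /dev/cciss/c1d0p1
rm -f "$img"
```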
Either virt-install should put a filesystem on the partition, or it should
refuse to use the partition until one exists.
This was done with a fully virtualized domU already running during the install.
Also, during the install of this domU, when it initially booted to elilo, I
could see fs0 and fs1. The new install elilo files were in fs1, while fs0 held
the elilo boot files for the previous (and running at the time) domU. Normally,
with installing to a partition with an ext2 (for example) file system, only fs0
appears.
Virt-manager (and python-virtinst, where the create work is done) doesn't know
anything about creating filesystems for guests or checking if they exist -- it
relies on the guest installer for that. There are many reasons you might not be
getting a console for a guest you are installing, but I doubt the filesystem, or
lack thereof, is one of them.
Can you post xend.log, xend-debug.log, and virt-manager.log from when you create
the guest?
FWIW, if you are trying to get a serial console from a fully virtualized guest,
the serial console is unavailable in RHEL5.x xen. The only way to see a screen
with a fully virtualized guest is by connecting to it with vncviewer or the
virt-manager graphical console.
Created attachment 159805
Created attachment 159806
Created attachment 159808
There are a lot of installs in these files. I don't know how to isolate them.
I tried this with ext2 as well and found the same issue. So I agree it's not the
file-system type. It appears that after the initial fully virtualized guest
install succeeded, any additional install fails. Unfortunately, it hosed the
initial install as well, which would not restart after I shut it down to try
again.
I am aware that the graphical console is needed, but it never even gets to the
EFI prompt in that console. At this point in the install (first boot), vncviewer
is not an option, as I need console access to start elilo. It appears that EFI
never starts; it claims there is no console to attach to.
From your xend-debug.log:
File "/usr/lib/python2.4/site-packages/xen/xend/image.py", line 128, in createDomain
raise VmError('Kernel image does not exist: %s' % self.kernel)
VmError: Kernel image does not exist: /usr/lib/xen/boot/hvmloader
Looks like you're missing the package xen-ia64-guest-firmware, which provides
the hvmloader file, which is an emulated EFI for hvm guests, and required for
them to function. It's found on the supplemental packages CD. Make sure you have
that installed, and I think you should be in much better shape.
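A quick way to confirm the fix is in place, as a sketch (the package name and hvmloader path come from the comment above; the commands assume a standard RHEL5 rpm setup):

```shell
# Check whether the emulated-EFI loader needed by HVM guests is installed.
# Degrades gracefully on hosts without rpm or the package.
if rpm -q xen-ia64-guest-firmware >/dev/null 2>&1 \
   && [ -f /usr/lib/xen/boot/hvmloader ]; then
    echo "hvmloader present"
else
    echo "xen-ia64-guest-firmware missing -- install it from the supplemental CD"
fi
```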
This was indeed a problem with the first install. I discovered the problem and
installed the rpm you mention. Continue down the log, you'll find more without
that particular issue. I probably did at least 5 or 6 installs after installing
the guest-firmware rpm.
Hm... All subsequent log info in xend-debug.log appears to be related to
paravirt guests, not hvm guests (based on the existence of messages about
pygrub, which is only for pv guests). Poking through xend.log now...
Actually, given that you're attempting to use partitions on a drive hooked to a
cciss controller, I'm curious if maybe bug 249104 is the real root cause of
problems. Are you seeing any "out of memory" messages in dmesg or messages
related to the cciss driver?
re: comment 11
Yes, I did try some para-virtual installs. They failed.
re: comment 12
Yes, I have seen these out of memory errors when dom0 boots.
Now, to confuse matters more, I have just completed a fresh install (dom0) on
the same machine (rx2660) and have successfully installed two fully virtualized
domUs to the same disk partitions I had the errors on previously. They still
don't have labels, according to parted. I did not attempt to make any domUs
before I installed the guest-firmware rpm.
I have not seen a repeat of the initial errors that prompted this bug report.
The main difference I can think of between the installs that prompted this
report and the later installs without problems (see comment 13) is that this
time I did not allow any domains (not even dom0) to use LVM. Every one is going
straight to disk (or virtual disk). This may or may not be significant.
OK... can you clear xend.log and virt-manager.log (actually virt-manager.log
will clear itself on startup) and do a single hvm guest install using LVM? If
that fails, post those logs here and we'll see what we can work out -- if it
succeeds, then it sounds like we have no bug...
OK, I can do that. What's the preferred way to clear xend.log? My guess is to
stop xend, blow away xend.log, and restart xend. Is there a better way?
That should do it, yes. xend probably also clears it on startup, but I'm not
100% sure.
I was wrong, xend doesn't clear its log on startup. You'd have to delete it
manually and reboot...
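For the record, a full reboot shouldn't be strictly necessary: the log can be truncated in place with the shell's null command. A sketch, demonstrated on a scratch file; the xend paths and init script are the stock RHEL5 ones and would need root:

```shell
# ":" is the shell built-in no-op; ": > file" truncates the file to zero
# bytes without deleting it, so the writer can keep appending afterwards.
# On the real host the same idiom applies to /var/log/xen/xend.log,
# bracketed by: /etc/init.d/xend stop ... /etc/init.d/xend start
log=/tmp/xend-demo.log
echo "old contents" > "$log"
: > "$log"             # truncate in place
wc -c < "$log"         # prints the byte count, now zero
rm -f "$log"
```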
I reinstalled everything to try this again with lvm included. Oddly enough, I
did not experience the failures I did last week. I installed two lvm based
virtual machines and two non-lvm virtual machines. They are all running and I've
stopped and restarted them several times.
So at this point I don't know what was happening to mess these up, but I suspect
that something I was doing triggered the bug in bz#249574, since the error
messages I've seen point to a missing domain id, which I'm assuming is the UUID.
Well I'm glad it is now working. We also have a fix for #249574 on the way. Thanks!