Created attachment 336205 [details] screenshot of error

When doing a kickstart install in KVM, using clearpart and autopart, the install fails after formatting the filesystems. It displays the attached screenshot. Various log files will be attached as well. This is repeatable.
Created attachment 336206 [details] log file
Created attachment 336207 [details] storage log
Created attachment 336208 [details] system log
Created attachment 336209 [details] program log
Created attachment 336319 [details] anaconda-logs.tgz

I'm randomly seeing this failure while testing various installation scenarios in my KVM guest. Attaching anaconda-logs.tgz, which contains:

-rw-r--r-- root/root 34456 2009-03-23 12:22 tmp/anaconda.log
-rw-r--r-- root/root  7288 2009-03-23 12:21 tmp/program.log
-rw-r--r-- root/root 56988 2009-03-23 12:21 tmp/storage.log
-rw-r--r-- root/root 32406 2009-03-23 12:23 tmp/syslog
-rwxr-xr-x root/root   912 2009-03-23 12:19 tmp/vncserver.log
Does /dev/sda1 even exist in time?
Hard to say. By the time I notice the error and flip to tty2 to check, it's there.
*** Bug 491743 has been marked as a duplicate of this bug. ***
The problem is that under KVM, sleeping may not give us many (if any) CPU cycles to actually do the whole rescan-partition-table thing: sleep waits an amount of realtime, and in that amount of realtime the guest may get very little virtual CPU. I'll attach a small C program plus a shell script to run it, which demonstrates this.

Run the script in a virtual machine on an idle host (note the numbers already vary wildly, while they are quite stable on a real idle machine). Now load the host machine with multiple processes (10x "md5sum /dev/urandom &" does the trick) and watch how little CPU the KVM guest gets while the shell script sleeps 1 *real* second.

So in short, we either need to retry with longer timeouts, or find a way to not sleep at all (or blame the KVM guys).
Created attachment 336473 [details] C-prog used to demonstrate not getting any CPU cycles
Created attachment 336474 [details] bash script to run the program.
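The attachments themselves are not reproduced in this thread. As a rough sketch of the same measurement idea (my own Python approximation, not the attached C program or script), count how much work the guest actually completes in one wall-clock second and compare an idle host against a loaded one:

#!/usr/bin/python
# Approximation of the attached demo: count how much work the guest gets
# done in one second of *wall-clock* time.  On an idle host the number is
# fairly stable; with the host loaded (e.g. 10x "md5sum /dev/urandom &")
# it can drop dramatically, which is what breaks fixed-length sleeps.
import time

def work_per_realtime_second():
    count = 0
    start = time.time()
    while time.time() - start < 1.0:   # one real second, as sleep() would measure it
        count += 1                     # stand-in for useful work
    return count

for i in range(10):
    print("run %d: %d iterations in 1 wall-clock second" % (i, work_per_realtime_second()))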
Note that by loading the host machine, I've seen the measured amount of virtual CPU time given to the virtual machine in 1 second drop by as much as a factor of 30. So we may need to sleep up to 30 times as long.
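Purely as an illustration of the "retry with longer timeouts" idea (this is not the anaconda code; the helper name and numbers are made up), one could retry with progressively longer sleeps until the expected device node appears or an overall deadline expires:

import os
import time

def wait_for_device(path, initial=0.5, factor=2.0, max_total=60.0):
    # Hypothetical helper: retry with growing sleeps, since one "real"
    # second may correspond to very little virtual CPU on a loaded KVM host.
    waited = 0.0
    delay = initial
    while waited < max_total:
        if os.path.exists(path):       # e.g. /dev/sda1 showing up after the rescan
            return True
        time.sleep(delay)
        waited += delay
        delay = min(delay * factor, 10.0)
    return os.path.exists(path)

# Usage: wait_for_device("/dev/sda1") instead of a single fixed-length sleep.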
Should use a notification instead of assuming a certain amount of work can be accomplished in a certain wallclock period?
(In reply to comment #13)
> Should use a notification instead of assuming a certain amount of work can
> be accomplished in a certain wallclock period?

We are using events; the problem is that the kernel inside the virtual machine does not even get the time to generate the events, so the event queue is empty, so we assume the kernel is done *scanning hardware*. This is a rather hard problem, which consists mainly of the kernel <-> userspace interface for hardware scanning not giving us enough info.
(In reply to comment #14)
> (In reply to comment #13)
> > Should use a notification instead of assuming a certain amount of work can
> > be accomplished in a certain wallclock period?
>
> We are using events; the problem is that the kernel inside the virtual
> machine does not even get the time to generate the events, so the event
> queue is empty, so we assume the kernel is done *scanning hardware*. This
> is a rather hard problem, which consists mainly of the kernel <-> userspace
> interface for hardware scanning not giving us enough info.

There must be some object/state visible in userspace that you can regularly poll on?
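A minimal sketch of that kind of polling, assuming the caller knows which partition name to expect and that /proc/partitions is a reasonable thing to poll (again, not anaconda's actual code):

import time

def partition_visible(name, timeout=30.0, interval=0.25):
    # Poll a userspace-visible state (/proc/partitions) rather than assume
    # a single fixed sleep is long enough for the kernel to finish rescanning.
    deadline = time.time() + timeout
    while time.time() < deadline:
        with open("/proc/partitions") as f:
            if any(line.split()[-1:] == [name] for line in f):
                return True
        time.sleep(interval)
    return False

# Usage: partition_visible("sda1")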
I wrote this kernel patch a long time ago to fix this problem ... http://lkml.indiana.edu/hypermail/linux/kernel/0706.1/1638.html
*** Bug 491945 has been marked as a duplicate of this bug. ***
This should be fixed in anaconda-11.5.0.37-1.
For reference, the commit was: http://git.fedorahosted.org/git/anaconda.git?p=anaconda.git;a=commitdiff;h=0bb7d5413f
This is on the preview blocker list: can any of you confirm the claimed fix?

-- 
Fedora Bugzappers volunteer triage team
https://fedoraproject.org/wiki/BugZappers
I'm not able to reproduce it anymore.
*** Bug 504408 has been marked as a duplicate of this bug. ***