Red Hat Bugzilla – Full Text Bug Listing
|Summary:||Unable to mount filesystem: device /dev/sda1 does not exist|
|Product:||[Fedora] Fedora||Reporter:||Jesse Keating <jkeating>|
|Component:||anaconda||Assignee:||Anaconda Maintenance Team <anaconda-maint-list>|
|Status:||CLOSED RAWHIDE||QA Contact:||Fedora Extras Quality Assurance <extras-qa>|
|Version:||rawhide||CC:||anaconda-maint-list, awilliam, berrange, chris.ricker, dcantrell, dlehman, hdegoede, jlaska, markmc, mtosatti, pjones, rjones, rmaximo, vanmeeuwen+fedora, virt-maint, wwoods|
|Fixed In Version:||Doc Type:||Bug Fix|
|Doc Text:||Story Points:||---|
|Last Closed:||2009-04-23 14:10:17 EDT||Type:||---|
|oVirt Team:||---||RHEL 7.3 requirements from Atomic Host:|
|Bug Depends On:|
Description Jesse Keating 2009-03-22 13:16:37 EDT
Created attachment 336205 [details] screenshot of error

When doing a kickstart in KVM, using clearpart and autopart, the install fails after formatting the filesystems. It displays the attached screenshot. Various log files will be attached as well. This is repeatable.
Comment 5 James Laska 2009-03-23 12:24:08 EDT
Created attachment 336319 [details] anaconda-logs.tgz

I'm randomly seeing this failure while testing various installation scenarios in my KVM guest. Attaching anaconda-logs.tgz, which contains:

-rw-r--r-- root/root 34456 2009-03-23 12:22 tmp/anaconda.log
-rw-r--r-- root/root  7288 2009-03-23 12:21 tmp/program.log
-rw-r--r-- root/root 56988 2009-03-23 12:21 tmp/storage.log
-rw-r--r-- root/root 32406 2009-03-23 12:23 tmp/syslog
-rwxr-xr-x root/root   912 2009-03-23 12:19 tmp/vncserver.log
Comment 6 Chris Lumens 2009-03-23 16:08:37 EDT
Does /dev/sda1 even exist in time?
Comment 7 Jesse Keating 2009-03-23 16:24:14 EDT
Hard to say. By the time I notice the error and flip to tty2 to check, it's there.
Comment 8 Chris Lumens 2009-03-23 16:31:26 EDT
*** Bug 491743 has been marked as a duplicate of this bug. ***
Comment 9 Hans de Goede 2009-03-24 10:04:58 EDT
The problem is that on KVM, sleeping may not give us much (if any) CPU time to actually do the whole rescan-partition-table operation: sleep waits a fixed amount of real time, and in that amount of real time the guest may get very little virtual CPU. I'll attach a small C program plus a shell script to run it, which demonstrates this. Run the script in a virtual machine on an idle host (note that the numbers already vary wildly, while they are quite stable on a real idle machine). Now load the host machine with multiple processes (10x "md5sum /dev/urandom &" does the trick), and watch how little CPU the KVM guest gets while the shell script sleeps for 1 *real* second. So in short, we either need to retry with longer timeouts, or find a way to not sleep at all (or blame the KVM guys).
Comment 10 Hans de Goede 2009-03-24 10:05:36 EDT
Created attachment 336473 [details] C-prog used to demonstrate not getting any CPU cycles
Comment 11 Hans de Goede 2009-03-24 10:06:08 EDT
Created attachment 336474 [details] bash script to run the program.
Comment 12 Hans de Goede 2009-03-24 10:08:10 EDT
Note: by loading the host machine, I've seen the measured amount of virtual CPU time given to the virtual machine in 1 second drop by as much as a factor of 30. So we may need to sleep up to 30 times as long.
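For illustration, here is a rough Python analogue of the attached C program (the actual attachment is not reproduced here; this is a hypothetical sketch): it counts how much CPU-bound work completes during one second of wall-clock time. Inside a guest on a loaded host, the count can drop by a large factor even though the wall-clock duration stays the same, which is exactly why a fixed-length sleep is an unreliable proxy for "the kernel has had time to rescan the partition table".

```python
import time

def work_done_in_one_wallclock_second():
    """Count busy-loop iterations completed in 1 real (wall-clock) second."""
    count = 0
    deadline = time.monotonic() + 1.0
    while time.monotonic() < deadline:
        count += 1
    return count

if __name__ == "__main__":
    # On an idle host the numbers are fairly stable run-to-run; inside a
    # guest on a loaded host they can vary wildly (comment 12 observed a
    # drop of as much as 30x).
    for _ in range(3):
        print(work_done_in_one_wallclock_second())
```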
Comment 13 Marcelo Tosatti 2009-03-24 10:52:19 EDT
Should use a notification instead of assuming a certain amount of work can be accomplished in a certain wallclock period?
Comment 14 Hans de Goede 2009-03-24 11:00:30 EDT
(In reply to comment #13)
> Should use a notification instead of assuming a certain amount of work can
> be accomplished in a certain wallclock period?

We are using events; the problem is that the kernel inside the virtual machine does not even get the time to generate the events, so the event queue is empty, so we assume the kernel is done *scanning hardware*. This is a rather hard problem, which consists mainly of the kernel <-> userspace interface for hardware scanning not giving us enough info.
Comment 15 Marcelo Tosatti 2009-03-24 11:08:22 EDT
(In reply to comment #14)
> (In reply to comment #13)
> > Should use a notification instead of assuming a certain amount of work can
> > be accomplished in a certain wallclock period?
>
> We are using events; the problem is that the kernel inside the virtual machine
> does not even get the time to generate the events, so the event queue is empty,
> so we assume the kernel is done *scanning hardware*. This is a rather hard
> problem, which consists mainly of the kernel <-> userspace interface for
> hardware scanning not giving us enough info.

There must be some object/state visible in userspace that you can regularly poll on?
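The "retry with longer timeouts" idea from comment 9 combined with the polling suggestion above could look something like the following. This is a hypothetical sketch, not the actual anaconda fix: instead of one fixed-length sleep, it polls for the device node with a growing (capped) delay and an overall wall-clock timeout, so a CPU-starved guest still eventually observes /dev/sda1.

```python
import os
import time

def wait_for_device(path, timeout=30.0, initial_delay=0.1, max_delay=2.0):
    """Return True once `path` exists, False if `timeout` seconds elapse.

    `timeout` is measured in wall-clock time; the per-attempt delay doubles
    on each miss (up to `max_delay`) so we tolerate guests that get very
    little virtual CPU per real second.
    """
    deadline = time.monotonic() + timeout
    delay = initial_delay
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(delay)
        delay = min(delay * 2, max_delay)  # back off, but cap the delay
    return os.path.exists(path)  # one last check at the deadline
```

The names `wait_for_device` and its parameters are illustrative; the real fix landed in the commit referenced in comment 19.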
Comment 16 Richard W.M. Jones 2009-03-24 11:47:57 EDT
I wrote this kernel patch a long time ago to fix this problem ... http://lkml.indiana.edu/hypermail/linux/kernel/0706.1/1638.html
Comment 17 Chris Lumens 2009-03-24 14:53:00 EDT
*** Bug 491945 has been marked as a duplicate of this bug. ***
Comment 18 David Lehman 2009-03-24 21:30:07 EDT
This should be fixed in anaconda-184.108.40.206-1.
Comment 19 Mark McLoughlin 2009-03-25 05:06:19 EDT
For reference, the commit was: http://git.fedorahosted.org/git/anaconda.git?p=anaconda.git;a=commitdiff;h=0bb7d5413f
Comment 20 Adam Williamson 2009-04-22 12:54:05 EDT
This is on the preview blocker list: can any of you confirm the claimed fix?

--
Fedora Bugzappers volunteer triage team
https://fedoraproject.org/wiki/BugZappers
Comment 21 Jesse Keating 2009-04-23 14:10:17 EDT
I'm not able to see it anymore.