Description of problem:
This occurs in both kernel-2.6.17-1.2187_FC5 and kernel-2.6.18-1.2200.fc5; it does NOT occur in kernel-2.6.17-1.2157_FC5. This is the last screen before the panic:

...
Loading ext3.ko module
Loading dm-mod.ko module
device-mapper: 4.6.0-ioctl (2006-02-17) initialized: dm-devel
Loading dm-mirror.ko module
Loading dm-zero.ko module
Loading dm-snapshot.ko module
Trying to resume from /dev/VolGroup00/LogVol01
Unable to access resume device (/dev/VolGroup00/LogVol01)
Creating root device.
Mounting root filesystem.
mount: could not find filesystem '/dev/root'
Setting up other filesystems.
Setting up new root fs
setuproot: moving /dev failed: No such file or directory
no fstab.sys, mounting internal defaults
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting /sys: No such file or directory
Switching to new root and running init.
unmounting old /dev
unmounting old /proc
unmounting old /sys
switchroot: mount failed: No such file or directory
Kernel panic - not syncing: Attempted to kill init!

Version-Release number of selected component (if applicable):
Both kernel-2.6.17-1.2187_FC5 and kernel-2.6.18-1.2200.fc5.

How reproducible:
Always, on boot.

Steps to Reproduce:
1. Start the box in either of the two kernels.

Actual results:
Panic.

Expected results:
Normal boot.

Additional info:
Since it seems to panic on an LVM problem, here are the relevant LVM package versions:
lvm2-2.02.01-1.2.1
system-config-lvm-1.0.18-1.2.FC5
Both the new kernels are pristine, with no extra modules, BTW.
I'm having the same problem. The system boots fine on 2.6.17-1.2174_FC5.
Sheesh! Somehow, in a fit of utter stupidity, I typed 2157 instead of 2174. 2174 is what I actually use at this point. However, I can also boot into 2157 just fine.
Are any of you using RAID?
Not here. BTW, the condition applies to the 2.6.18-1.2239.fc5 kernel as well. The box was a clean FC4 install, upgraded to FC5. I'm afraid to try an upgrade to FC6 in case it won't recognize the LVM either.
Yep - using LVM2 over RAID. I'm seeing the panic on a different machine as well: it panics with kernel-2.6.18-1.2239.fc5 but not with kernel-2.6.17-1.2187_FC5.
Please also see bug 220269. I am (the reporter of that bug) using RAID.
Strangely, I have this problem with kernel 2.6.18-1.2257.fc5smp but not with kernel 2.6.18-1.2239.fc5smp. Anyone have any ideas?
I have exactly the same problem with the new 2.6.19-1.2895.fc6. The initrd doesn't like logical volumes, and I'm NOT using RAID. Maybe initrd-2.6.19-1.2895.fc6.img was built without LVM support; it's much smaller than initrd-2.6.18-1.2868.fc6.img. I found https://fcp.surfsite.org/modules/newbb/viewtopic.php?topic_id=32786&forum=11&post_id=144762#forumpost144762
I fixed the problem by rebuilding the initrd:

mv initrd-2.6.19-1.2895.fc6.img initrd-2.6.19-1.2895.fc6.img.orig
mkinitrd --force-lvm-probe --with=ext3 /boot/initrd-2.6.19-1.2895.fc6.img 2.6.19-1.2895.fc6
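If you want to double-check that the rebuilt image actually picked up the device-mapper/LVM bits before rebooting, something along these lines should work. This is only a sketch, assuming the initrd is a gzip-compressed cpio archive (which it normally is on FC5/FC6); substitute your own kernel version:

# list the image contents and look for device-mapper / LVM pieces
zcat /boot/initrd-2.6.19-1.2895.fc6.img | cpio -t 2>/dev/null | grep -iE 'dm-|lvm'

If the grep prints the dm-* modules, the rebuild picked up the device-mapper support.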
Can anyone confirm this fix? What do I need in order to run mkinitrd? The kernel sources?
See https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=224289 - someone confirmed my fix there. You don't need the sources. Check in /lib/modules whether you have a directory named after the kernel, like 2.6.19-1.2895.fc6. Then go to /boot and execute the commands above. The indication that the LVM module has been included in the new image is that the new image is about 800k bigger. This fix addresses the problem caused by LVM not being included in the image; I think the RAID problem is separate. Good luck.
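To spell those two checks out as commands (just a sketch, using the kernel version from the comments above; substitute your own):

# the module directory for the new kernel must exist
ls -d /lib/modules/2.6.19-1.2895.fc6

# compare old vs. new image sizes; the LVM-enabled one should be roughly 800k bigger
ls -lh /boot/initrd-2.6.19-1.2895.fc6.img /boot/initrd-2.6.19-1.2895.fc6.img.orig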
(In reply to comment #7)
> Please also see bug 220269. I am (the reporter of that bug) using RAID.

If your LV involves RAID5, you may also need to specify --preload=raid456 on the mkinitrd command. See bug 220269.
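For an LV on RAID5, the whole rebuild would then look roughly like this. This is only a sketch combining the commands above with the extra --preload; the kernel version and image name are the ones from the earlier comments, so use your own, and keep the .orig copy around in case you need to roll back:

cd /boot
mv initrd-2.6.19-1.2895.fc6.img initrd-2.6.19-1.2895.fc6.img.orig
mkinitrd --force-lvm-probe --preload=raid456 --with=ext3 /boot/initrd-2.6.19-1.2895.fc6.img 2.6.19-1.2895.fc6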
Brilliant, thanks Michael - that worked great! I have to say, I'm slightly disheartened that something so basic could have slipped past testing in what appears to be multiple kernels, especially as partitions are LVM by default in new installs of FC.
CC me
More info: This happened to me on a fresh, pristine, bare install of F7. I got a kernel panic on boot IF I installed to LVM, but NOT if I installed to a normal ext3 system. How to reproduce: Install onto LVM.
Fedora apologizes that these issues have not been resolved yet. We're sorry it's taken so long for your bug to be properly triaged and acted on. We appreciate the time you took to report this issue and want to make sure no important bugs slip through the cracks.

If you're currently running a version of Fedora Core between 1 and 6, please note that Fedora no longer maintains these releases. We strongly encourage you to upgrade to a current Fedora release. In order to refocus our efforts as a project we are flagging all of the open bugs for releases which are no longer maintained and closing them. http://fedoraproject.org/wiki/LifeCycle/EOL

If this bug is still open against Fedora Core 1 through 6 thirty days from now, it will be closed 'WONTFIX'. If you can reproduce this bug in the latest Fedora version, please change it to the respective version. If you are unable to do this, please add a comment to this bug requesting the change.

Thanks for your help, and we apologize again that we haven't handled these issues to this point. The process we are following is outlined here: http://fedoraproject.org/wiki/BugZappers/F9CleanUp

We will be following the process here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping to ensure this doesn't happen again. And if you'd like to join the bug triage team to help make things better, check out http://fedoraproject.org/wiki/BugZappers
I suppose that all of us who were affected by this bug have long since moved on to newer releases of Fedora, so it seems appropriate to close it.

However, what I think is missing here is the most important part of fixing a bug, even if you aren't going to fix it: finding out how something so fundamental and so completely disastrous happened in the first place, AND putting appropriate measures in place to ensure that it can't happen again. How could such a fundamental problem end up happening? How could testing miss it? WAS it tested, or did someone bypass normal quality procedures? HOW do we make sure it can't happen again? That's pretty basic root cause analysis, you know. Frankly, I think that was the most important issue in this bug, and it appears to have been completely dropped on the floor.

I've spent waaay too much time in organizations which are swamped in bug reports. The process of ignoring a bug, hoping that it will go away, waiting until it's old, and then closing it is symptomatic of organizations which end up having the same problems come back again. It's not symptomatic of organizations which end up providing high-quality systems. I'm rather disappointed that this happened to this bug, and particularly disappointed that it is being allowed to happen again.
Thank you for your update.