Description of problem:

2.6.40.8-4.fc15.x86_64 and earlier kernels have worked (and 2.6.40.8-4 continues to work) and boot fine.

However, 2.6.41.4-1.fc15.x86_64 fails to boot.

Images of kernel crash dumps during boot are at:
https://picasaweb.google.com/zenczykowski/FedoraKernelRaidFailure?authuser=0&feat=directlink

Version-Release number of selected component (if applicable):
2.6.41.4-1.fc15.x86_64

How reproducible:
All 10 or so boots into this kernel version have failed.

Additional info:
Judging from the first screenshot with the stack trace, the problem is probably md1_raid1 (sd_prep_fn) related.
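Since the crash output here could only be photographed, a generic way to get a readable or loggable boot-time oops is to slow printk output or mirror kernel messages to another host. A minimal sketch, not taken from this report; the IP addresses, MAC address, interface name, and port are all hypothetical:

# Kernel command-line options (append in grub) to make a boot-time
# oops readable: delay each printk by 200 ms and pause after an oops.
#   boot_delay=200 pause_on_oops=60
#
# Alternatively, mirror kernel messages over UDP with netconsole:
#   netconsole=6666@192.168.1.10/eth0,6666@192.168.1.2/aa:bb:cc:dd:ee:ff
#
# On the receiving host (nmap-ncat syntax; traditional netcat needs -p):
nc -u -l 6666 | tee oops.log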
(In reply to comment #1)
> Description of problem:
>
> 2.6.40.8-4.fc15.x86_64 and earlier kernels have worked (and 2.6.40.8-4
> continues to work) and boot fine.
>
> However, 2.6.41.4-1.fc15.x86_64 fails to boot.
>
> Images of kernel crash dumps during boot are at:
> https://picasaweb.google.com/zenczykowski/FedoraKernelRaidFailure?authuser=0&feat=directlink

Please attach the images to the bug report.
Created attachment 549936: Image 1
Created attachment 549947: Image 2
Created attachment 549948: Image 3
Created attachment 549949: Image 4
Created attachment 549950: Image 5
Created attachment 549951: Image 6
Created attachment 549952: Image 7
As requested I've attached the images to the bug report (terrible upload interface).

---

As an additional note, I've now got another machine that exhibits similar symptoms: it boots with 2.6.40 and fails to boot with 2.6.41. I'm guessing it's the same failure mode, although once again I can't actually log the messages, and they scroll by too quickly to read on the 80x25 terminal.

This new configuration is significantly simpler: no LUKS and no LVM, just software RAID. sda/sdb are 2.5" SATA drives, sdc/sdd are USB mass storage devices.

Here's what /proc/mdstat looks like on 2.6.40.8-4:

# cat /proc/mdstat
Personalities : [raid1] [raid0]
md4 : active raid0 sda4[0] sdb4[1]
      929536768 blocks 64k chunks

md3 : active raid1 sda3[0](W) sdb3[1](W)
      15791544 blocks super 1.0 [3/2] [UU_]
      bitmap: 7/8 pages [28KB], 1024KB chunk

md2 : active raid0 sdc1[0] sdd1[1]
      15791616 blocks 64k chunks

md1 : active raid1 sda2[0](W) sdb2[1](W)
      7550536 blocks super 1.0 [3/2] [UU_]
      bitmap: 8/8 pages [32KB], 512KB chunk

md0 : active raid1 sda1[0](W) sdb1[1](W)
      273060 blocks super 1.0 [3/2] [UU_]
      bitmap: 1/2 pages [4KB], 128KB chunk

unused devices: <none>

As you can see, just raid0/raid1, nothing super complex. /dev/md{0,1,3,4} are ext{3,4,4,4} file systems.
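To make the mdstat notation above concrete: the (W) flags mark write-mostly members, and [3/2] [UU_] means the raid1 has three slots of which only two are active. A hypothetical mdadm invocation that would produce an md1-like array (the device names and the deliberately missing third member are illustrative, not the reporter's actual setup commands):

# raid1 with 1.0 metadata, an internal write-intent bitmap, two
# write-mostly members, and the third slot left empty ("missing"):
mdadm --create /dev/md1 --level=1 --raid-devices=3 \
      --metadata=1.0 --bitmap=internal \
      --write-mostly /dev/sda2 /dev/sdb2 missing

Marking members write-mostly makes md prefer other mirrors for reads when any are available; with the third slot absent, reads still fall back to the (W) devices.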
Did this resolve itself with 2.6.43/3.3?
Yes, at some point around 2.6.42/2.6.43 (i.e. 3.2/3.3) this resolved itself. Closing the bug.