Bug 510563 - dracut borking raidsets and not finding root
Summary: dracut borking raidsets and not finding root
Keywords:
Status: CLOSED RAWHIDE
Alias: None
Product: Fedora
Classification: Fedora
Component: dracut
Version: 11
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Harald Hoyer
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-07-09 19:39 UTC by Clyde E. Kunkel
Modified: 2009-07-19 20:49 UTC (History)
2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2009-07-19 20:49:47 UTC
Type: ---
Embargoed:


Attachments
debugging info requested on wiki (3.76 KB, text/plain)
2009-07-09 19:39 UTC, Clyde E. Kunkel
no flags Details

Description Clyde E. Kunkel 2009-07-09 19:39:32 UTC
Created attachment 351138 [details]
debugging info requested on wiki

Description of problem:

Maybe two problems:  1) dracut is borking my raidsets: msg during dracut boot saying: delaying recovery of md1 until md0 has finished (they share one or more physical devices).

(The raidsets were intact and NOT in need of recovery.  After trying dracut md1 had to have its spare rebuilt.) 

Then, 12 msgs saying:  no raid disks.

Only thing I can think of here is that I have a container of 11 raid partitions (container + 11 = 12).

Then a msg:  No root device found

and then nothing; had to ctrl-alt-del to reboot (maybe a second problem, new bz?)


Version-Release number of selected component (if applicable):
dracut-0.4-1.fc11.x86_64

How reproducible:

every time


Steps to Reproduce:
1. sudo dracut -f -v dracut-2.6.29.5-191.fc11.x86_64 2.6.29.5-191.fc11.x86_64
2. reboot using dracut image
3.
  
Actual results:
as above


Expected results:
good boot

Additional info:

Don't have a serial hookup to another machine.
still can't -a bebug.
Also, same results when trying with a Fedora 11 VM guest.

Comment 1 Jóhann B. Guðmundsson 2009-07-17 17:51:36 UTC
A new release has just landed in koji with several improvements (man dracut to see the added kernel command line parameters; check out the raid section).

Download:
F-11: http://koji.fedoraproject.org/koji/buildinfo?buildID=114853
Rawhide: http://koji.fedoraproject.org/koji/buildinfo?buildID=114854 

Create your image with the debug module enabled:

# dracut -a debug /boot/debug-$(uname -r) $(uname -r)

Add "rdshell" to the kernel command line, remove "rhgb" and "quiet" and boot the image.
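For example (kernel version and root device here are illustrative, taken from this reporter's own setup in Comment 3, not prescriptive), the edited grub kernel line might read:

kernel /vmlinuz-2.6.29.5-191.fc11.x86_64 ro root=/dev/VolGroup02/fedora11x64 rdshell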

Once you are dropped to a shell do:

# dmesg|grep dracut

Add the output and your kernel command line to the bug report.

A helper tool has also been added that generates the kernel command line (dracut-gencmdline).

Could you retest with that version and report back? Thank you.

Comment 2 Jóhann B. Guðmundsson 2009-07-17 17:52:54 UTC
btw "still can't -a bebug." it's -a debug :)

Comment 3 Clyde E. Kunkel 2009-07-19 20:49:47 UTC
Now working correctly.  Closing this bz; HOWEVER, note that dracut still seems to have a problem with raid sets, even though they are correctly started once normal init is entered.

grub kernel line:

kernel /vmlinuz-2.6.29.5-191.fc11.x86_64 ro root=/dev/VolGroup02/fedora11x64 rd_LVM_VG=VolGroup02 KEYBOARDTYPE=pc KEYTABLE=us SYSFONT=latarcyrheb-sun16 LANG=en_US.UTF-8 rdshell

$ dmesg | grep dracut
dracut: Starting plymouth daemon
dracut: Scanning for dmraid devices 
dracut: no raid disks
dracut: Scanning devices sda12 sdc2 sdc9 sdd2 sdd5 sde10 sdf15 sdf7  for LVM volume groups VolGroup02 
dracut: Reading all physical volumes. This may take a while...
dracut: Found volume group "VolGroup01" using metadata type lvm2
dracut: Found volume group "VolGroup03" using metadata type lvm2
dracut: Found volume group "VolGroup00" using metadata type lvm2
dracut: Found volume group "VolGroup02" using metadata type lvm2
dracut: 2 logical volume(s) in volume group "VolGroup02" now active
dracut: Assembling MD RAID arrays
dracut: Starting MD RAID array /dev/md0
dracut: mdadm: failed to run array /dev/md0: Device or resource busy
dracut: mdadm: no recognisable superblock on /dev/md0.
dracut: Starting MD RAID array /dev/md1
dracut: mdadm: failed to run array /dev/md1: Device or resource busy
dracut: mdadm: no recognisable superblock on /dev/md1.
[kunkelc@P5K-EWIFI ~]$ mdadm --detail /dev/md*
mdadm: /dev/md does not appear to be an md device
mdadm: cannot open /dev/md0: Permission denied
mdadm: cannot open /dev/md1: Permission denied
mdadm: cannot open /dev/md127: Permission denied
[kunkelc@P5K-EWIFI ~]$ sudo mdadm --detail /dev/md*
mdadm: /dev/md does not appear to be an md device
/dev/md0:
        Version : 0.90
  Creation Time : Wed Jul  1 19:06:13 2009
     Raid Level : raid10
     Array Size : 20482560 (19.53 GiB 20.97 GB)
  Used Dev Size : 10241280 (9.77 GiB 10.49 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Jul 19 16:30:24 2009
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 64K

           UUID : c3c04da7:e9312952:7afce992:f879cb3e (local to host P5K-EWIFI.localdomain)
         Events : 0.108

    Number   Major   Minor   RaidDevice State
       0       8       13        0      active sync   /dev/sda13
       1       8       27        1      active sync   /dev/sdb11
       2       8       76        2      active sync   /dev/sde12
       3       8      101        3      active sync   /dev/sdg5
/dev/md1:
        Version : 0.90
  Creation Time : Wed Jul  1 19:16:48 2009
     Raid Level : raid5
     Array Size : 204796416 (195.31 GiB 209.71 GB)
  Used Dev Size : 102398208 (97.65 GiB 104.86 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Sun Jul 19 16:30:31 2009
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : a49cf091:2c75ee9b:7afce992:f879cb3e (local to host P5K-EWIFI.localdomain)
         Events : 0.104

    Number   Major   Minor   RaidDevice State
       0       8       26        0      active sync   /dev/sdb10
       1       8       75        1      active sync   /dev/sde11
       2       8       98        2      active sync   /dev/sdg2
/dev/md127:
        Version : ddf
     Raid Level : container
  Total Devices : 11

Working Devices : 11

  Member Arrays :

    Number   Major   Minor   RaidDevice

       0       8       15        -        /dev/sda15
       1       8       29        -        /dev/sdb13
       2       8       30        -        /dev/sdb14
       3       8       77        -        /dev/sde13
       4       8       78        -        /dev/sde14
       5       8      102        -        /dev/sdg6
       6       8      103        -        /dev/sdg7
       7       8      104        -        /dev/sdg8
       8       8      105        -        /dev/sdg9
       9       8      106        -        /dev/sdg10
      10       8       14        -        /dev/sda14
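The "failed to run array ... Device or resource busy" lines in the dmesg output above are what mdadm prints when an array is already assembled and running, so they are consistent with the arrays having been started earlier in boot (for example by an earlier assembly pass) before dracut's explicit mdadm call. A quick sanity check from the running system, using only standard commands (nothing dracut-specific assumed):

$ cat /proc/mdstat
$ sudo mdadm --detail --scan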

