Bug 548278 - udev doesn't create /dev/ entries for partitions of sda, sdb, etc.
Summary: udev doesn't create /dev/ entries for partitions of sda, sdb, etc.
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: udev
Version: 12
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Harald Hoyer
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-12-17 03:18 UTC by Need Real Name
Modified: 2010-12-04 01:33 UTC

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-12-04 01:33:31 UTC
Type: ---
Embargoed:



Description Need Real Name 2009-12-17 03:18:29 UTC
Description of problem:
I have 3 SATA disks on my system each with multiple partitions including ext3, ntfs (ntfs-3g), vfat, software RAID1, software RAID1 + LVM.

Under FC12, only the /dev/sda, /dev/sdb, and /dev/sdc disk entries appear but *none* of the partitions (i.e., /dev/sdaN, /dev/sdbN, /dev/sdcN) appear in /dev.

The system boots ok and indeed all the partitions even mount ok (presumably because I reference them all by 'label' in my fstab). When I run 'df', I notice that all but the boot partition (which is RAID1 and is listed conventionally as /dev/md0) show up as /dev/dm-NN, where NN is a 1- or 2-digit number that has seemingly no correspondence with the original partition numbers. The numbers are non-sequential in mtab and seem almost randomly assigned. This occurs both for the normal disk partitions and for LVM partitions.

Interestingly, 'fdisk -l /dev/sda' (and similarly for sdb and sdc) still shows the normal /dev/sdaN partitions.

Previously under FC8 on the same system, and also now with FC12 under VMware, all the normal /dev/sdaN, /dev/sdbN, /dev/sdcN partition entries appear as udev devices. Previously, as expected, /etc/mtab used the standard /dev/sdaN notation for mounted physical (i.e., non-RAID/non-LVM) disk partitions.

Previously, under FC8, LVM partitions showed up using the human-readable notation /dev/mapper/lvm-raid-<label name>. Under FC12 with VMware, the LVM partitions show up as (sequential) /dev/dm-N, which, while less readable, is at least somewhat understandable.

However, on FC12 on my real-world server, the seemingly random /dev/dm-NN numbers make it impossible for me to know with any certainty how to reference individual partitions. In particular, I don't even know how to rebuild my RAID1 array, since I am not sure which dm-NN numbers match up with which /dev/sdxN partitions.
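
(For reference: the dm-NN names can presumably be mapped back to their underlying disks with the stock device-mapper tools -- a sketch, assuming dmsetup behaves as documented:

$ dmsetup ls                 # list mapping names with their (major, minor) numbers
$ dmsetup deps <mapping>     # print the (major, minor) of the devices the mapping sits on
$ ls -l /dev/sd*             # match those numbers against the sd* device nodes
)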

Note that my real-world (i.e., non-VMware) hardware is an ASUS P4PE motherboard with an integrated Promise SATA/RAID 20376 controller. Although old, this hardware is *very* stable; in fact, it had 340 days of uptime just before I rebooted it last night to install Fedora 12. Note that I am *not* doing hardware RAID - I just use the controller for controlling the SATA drives.

As referenced in a few other similar (but *different*) bug reports, I tried erasing the dmraid metadata using 'dmraid -r -E'. Of course, the system was then unable to find the disks until I manually assigned them by entering the Promise BIOS. Once I did this, the system booted fine, though I again had the weird /dev/dm-NN device entries with *no* partition entries on sda, sdb, or sdc.

The same lack of /dev/sdxN partition entries occurred when I booted from the Fedora 12 LiveCD and tried to mount my physical hard drive partitions.

Again, I want to emphasize that this bug seems to be different from other Bugzilla entries, since my partitions are being recognized by the kernel, as evidenced by the fact that they can even be mounted. My only problem is that the "physical" /dev/sdxN entries are not being created by udev.

Finally, I'm not sure if this is related, but I had trouble at first installing grub when I booted from the LiveCD and tried to install grub on my newly installed FC12 system on the hard drives. When I used the grub shell to run first 'root (hd0,4)' and then 'setup (hd0)' (as I had previously done on FC8), I got the error message "Error 22: No such partition." Note that (hd0,4) is correct, as verified by "find /grub/stage1". When I tried running "grub-install --root-directory=/ /dev/sda", I got a similar error message saying ": Not found or not a block device." I was finally able to get the grub shell to work by manually specifying "device (hd0) /dev/sda", though this was never previously needed. Now, in retrospect, I wonder whether this is all related.

Any idea what might be going on here?
In particular, why isn't udev generating the normal /dev/sdxN hard drive block device entries?

How reproducible:
100%

Comment 1 Need Real Name 2009-12-22 22:12:22 UTC
Note that the sdxN entries seem to be missing from /sys.

Specifically the partitions are also missing from the following /sys directories:
/sys/devices/pci0000:00/0000:00:1e.0/0000:02:04.0/host2/target2:0:0/2:0:0:0/block/sda
/sys/devices/pci0000:00/0000:00:1e.0/0000:02:04.0/host2/target2:0:0/2:0:0:0/block/sdb
/sys/devices/pci0000:00/0000:00:1e.0/0000:02:04.0/host2/target2:0:0/2:0:0:0/block/sdc

Whereas on a similar VMware setup the partitions are all present both in /dev and in the /sys/devices tree.

Again, all I have are the 'virtual' device entries dm-NN, which appear as /dev/dm-NN and in the /sys tree as /sys/devices/virtual/block/dm-NN.

Comment 2 Need Real Name 2009-12-22 22:31:06 UTC
As another data point, on my regular (but seemingly broken) system:

$ cat /proc/partitions
major minor  #blocks  name

   8        0  976762584 sda
   8       16  976762584 sdb
   8       32  199148544 sdc
 253        0  199148480 dm-0
 253        1  199141708 dm-1
 253        2  199141677 dm-2
 253        3  976762496 dm-3
 253        4   24410736 dm-4
 253        5  952349265 dm-5
 253        6      96358 dm-6
 253        7  849717981 dm-7
 253        8    4883728 dm-8
 253        9   97651071 dm-9
 253       10  976762496 dm-10
 253       11   24410736 dm-11
 253       12  952349265 dm-12
 253       13      96358 dm-13
 253       14  849717981 dm-14
 253       15    4883728 dm-15
 253       16   97651071 dm-16
   9        0      96256 md0
   9        1  849717888 md1
 253       17    5242880 dm-17
 253       18   26214400 dm-18
 253       19  398827520 dm-19
 253       20  419430400 dm-20

While the analogous (and working) VMware system properly shows all the /dev/sdxN partitions:

$ cat /proc/partitions
major minor  #blocks  name

   8        0   33554432 sda
   8        1      96358 sda1
   8        2          1 sda2
   8        5      96358 sda5
   8        6   31326718 sda6
   8        7     104391 sda7
   8        8    1927768 sda8
   8       16   33554432 sdb
   8       17      96358 sdb1
   8       18          1 sdb2
   8       21      96358 sdb5
   8       22   31326718 sdb6
   8       23     104391 sdb7
   8       24    1927768 sdb8
   8       32    5242880 sdc
   8       33          1 sdc1
   8       37    5237127 sdc5
   9        0      96256 md0
   9        1   31326592 md1
 253        0     512000 dm-0
 253        1   26214400 dm-1
 253        2    3145728 dm-2
 253        3    1454080 dm-3

Comment 3 Need Real Name 2009-12-22 22:39:25 UTC
It seems like the problem has something to do with all my normal partitions being set up as device-mapper partitions:

On my broken regular system:
$ dmsetup info
pdc_bifeafjhe   (253, 0)
pdc_bggcadjcgp7 (253, 8)
pdc_bggcadjcgp6 (253, 7)
pdc_bggcadjcgp5 (253, 6)
lvm--raid-media (253, 20)
pdc_bdcefciig   (253, 10)
pdc_bggcadjcg   (253, 3)
pdc_bdcefciigp8 (253, 16)
pdc_bggcadjcgp2 (253, 5)
lvm--raid-scratch       (253, 19)
pdc_bdcefciigp7 (253, 15)
pdc_bggcadjcgp1 (253, 4)
pdc_bdcefciigp6 (253, 14)
pdc_bifeafjhep5 (253, 2)
pdc_bdcefciigp5 (253, 13)
pdc_bifeafjhep4 (253, 1)
pdc_bdcefciigp2 (253, 12)
lvm--raid-swap  (253, 17)
pdc_bdcefciigp1 (253, 11)
lvm--raid-root  (253, 18)
pdc_bggcadjcgp8 (253, 9)

Whereas my working VMware system just lists the actual LVM partitions that I have:
$ dmsetup ls
lvm--raid-media (253, 3)
lvm--raid-scratch       (253, 2)
lvm--raid-swap  (253, 0)
lvm--raid-root  (253, 1)

Comment 4 Need Real Name 2009-12-22 23:05:08 UTC
More info:
Oddly, the partitions initially show up properly in /var/log/messages on boot:

kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
kernel: ata3.00: ATA-8: ST31000340AS, SD15, max UDMA/133
kernel: ata3.00: 1953525168 sectors, multi 0: LBA48 NCQ (depth 0/32)
kernel: ata3.00: configured for UDMA/133
kernel: scsi 2:0:0:0: Direct-Access     ATA      ST31000340AS     SD15 PQ: 0 ANSI: 5
kernel: sd 2:0:0:0: Attached scsi generic sg2 type 0
kernel: sd 2:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
kernel: sd 2:0:0:0: [sda] Write Protect is off
kernel: sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
kernel: sda: sda1 sda2 < sda5 sda6 sda7 sda8 >
kernel: sd 2:0:0:0: [sda] Attached SCSI disk
kernel: ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
kernel: ata4.00: ATA-8: ST31000340AS, SD15, max UDMA/133
kernel: ata4.00: 1953525168 sectors, multi 0: LBA48 NCQ (depth 0/32)
kernel: ata4.00: configured for UDMA/133
kernel: scsi 3:0:0:0: Direct-Access     ATA      ST31000340AS     SD15 PQ: 0 ANSI: 5
kernel: sd 3:0:0:0: Attached scsi generic sg3 type 0
kernel: sd 3:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
kernel: sd 3:0:0:0: [sdb] Write Protect is off
kernel: sd 3:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
kernel: sdb: sdb1 sdb2 < sdb5 sdb6 sdb7 sdb8 >
kernel: sd 3:0:0:0: [sdb] Attached SCSI disk
kernel: ata5.00: ATA-7: Maxtor 6B200R0, BAH41BM0, max UDMA/133
kernel: ata5.00: 398297088 sectors, multi 0: LBA48
kernel: ata5.00: configured for UDMA/133
kernel: scsi 4:0:0:0: Direct-Access     ATA      Maxtor 6B200R0   BAH4 PQ: 0 ANSI: 5
kernel: sd 4:0:0:0: Attached scsi generic sg4 type 0
kernel: sd 4:0:0:0: [sdc] 398297088 512-byte logical blocks: (203 GB/189 GiB)
kernel: sd 4:0:0:0: [sdc] Write Protect is off
kernel: sd 4:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
kernel: sdc: sdc4 < sdc5 >
kernel: sd 4:0:0:0: [sdc] Attached SCSI disk

So far, this is the same as my working VMware system.
But then I start getting all this device-mapper stuff, which is not present on my normal VMware test system:

kernel: dracut: Scanning for dmraid devices
kernel: dracut: Found dmraid sets:
kernel: dracut: pdc_bifeafjhe pdc_bggcadjcg pdc_bdcefciig
kernel: dracut: Activating pdc_bifeafjhe
kernel: dracut: The dynamic shared library "libdmraid-events-pdc.so" could not be loaded:
kernel: dracut: libdmraid-events-pdc.so: cannot open shared object file: No such file or directory
kernel: dracut: RAID set "pdc_bifeafjhe" was activated
kernel: dracut: Activating pdc_bggcadjcg
kernel: dracut: The dynamic shared library "libdmraid-events-pdc.so" could not be loaded:
kernel: dracut: libdmraid-events-pdc.so: cannot open shared object file: No such file or directory
kernel: dracut: RAID set "pdc_bggcadjcg" was activated
kernel: dracut: Activating pdc_bdcefciig
kernel: dracut: The dynamic shared library "libdmraid-events-pdc.so" could not be loaded:
kernel: dracut: libdmraid-events-pdc.so: cannot open shared object file: No such file or directory
kernel: dracut: RAID set "pdc_bdcefciig" was activated

Comment 5 Need Real Name 2009-12-22 23:09:22 UTC
Also, my broken system has:
$ ls -l /sys/block/dm*
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-0 -> ../devices/virtual/block/dm-0/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-1 -> ../devices/virtual/block/dm-1/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-10 -> ../devices/virtual/block/dm-10/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-11 -> ../devices/virtual/block/dm-11/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-12 -> ../devices/virtual/block/dm-12/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-13 -> ../devices/virtual/block/dm-13/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-14 -> ../devices/virtual/block/dm-14/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-15 -> ../devices/virtual/block/dm-15/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-16 -> ../devices/virtual/block/dm-16/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-17 -> ../devices/virtual/block/dm-17/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-18 -> ../devices/virtual/block/dm-18/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-19 -> ../devices/virtual/block/dm-19/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-2 -> ../devices/virtual/block/dm-2/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-20 -> ../devices/virtual/block/dm-20/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-3 -> ../devices/virtual/block/dm-3/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-4 -> ../devices/virtual/block/dm-4/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-5 -> ../devices/virtual/block/dm-5/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-6 -> ../devices/virtual/block/dm-6/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-7 -> ../devices/virtual/block/dm-7/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-8 -> ../devices/virtual/block/dm-8/
lrwxrwxrwx. 1 root root 0 Dec 22 18:06 dm-9 -> ../devices/virtual/block/dm-9/

While my working VMware system has (correctly):
lrwxrwxrwx. 1 root root 0 Dec 20 20:44 dm-0 -> ../devices/virtual/block/dm-0/
lrwxrwxrwx. 1 root root 0 Dec 20 20:44 dm-1 -> ../devices/virtual/block/dm-1/
lrwxrwxrwx. 1 root root 0 Dec 20 20:44 dm-2 -> ../devices/virtual/block/dm-2/
lrwxrwxrwx. 1 root root 0 Dec 20 20:44 dm-3 -> ../devices/virtual/block/dm-3/

Comment 6 Need Real Name 2009-12-23 00:01:09 UTC
OK - I'm now thinking that this is a problem/incompatibility with the integrated Promise 20376 SATA/RAID controller on my motherboard, since 'dmraid -r' gives the following info:

  /dev/sdc: pdc, "pdc_bifeafjhe", stripe, ok, 398296960 sectors, data@ 0
  /dev/sdb: pdc, "pdc_bggcadjcg", stripe, ok, 1953524992 sectors, data@ 0
  /dev/sda: pdc, "pdc_bdcefciig", stripe, ok, 1953524992 sectors, data@ 0

While in VMware, I get:
  no raid disks

So, this seems to be the key difference between the broken situation on my normal computer vs. the working situation on the VMware instance.

Now, recall that to use any SATA disks on my motherboard (P4PE), you must use the 20376 SATA/RAID controller even if you are not using hardware RAID (which I am not). You do that by setting up 3 separate striped arrays (Array1, Array2, Array3) with only one disk in each.

So apparently, dracut thinks that these are real RAID arrays rather than just plain SATA disks.

Note that I did not have any such problem in FC8 (which is what I just upgraded from) or FC6. So, somehow proper handling of this motherboard/SATA/RAID controller has been lost...

Comment 7 Need Real Name 2009-12-23 02:21:55 UTC
OK - so now I tried turning off dmraid detection by adding rd_NO_DM to the kernel command line (and I also re-ran 'dracut' to make a new initramfs for good measure).

This had the positive effect of removing the lines about dracut (e.g., "kernel: dracut: Scanning for dmraid devices").

It also created *some* but not *all* of my sdxN devices.

$ ls /dev/sd??
/dev/sda5  /dev/sda6  /dev/sda7  /dev/sda8

However, I am still missing most of my partitions:
sda: /dev/sda1 /dev/sda2
sdb: /dev/sdb1 /dev/sdb2 /dev/sdb5 /dev/sdb6 /dev/sdb7 /dev/sdb8
sdc: /dev/sdc4 /dev/sdc5

Even worse, /dev/sda1 is now not available at all, since dmraid is turned off for that device.

Now I am really confused - why would only certain partitions be recreated? I fail to see any rhyme or reason to which /dev/sdxN partitions are created and which aren't.

Note on my machine:
 /dev/sd[ab]1 are two separate NTFS-3g partitions
 /dev/sd[ab]2 are both extended partitions
 /dev/sd[ab]5 are part of a RAID1 array used for /boot
 /dev/sd[ab]6 are part of an LVM/RAID1 array used for swap, root, and several other partitions
 /dev/sd[ab]7 are two separate VFAT partitions
 /dev/sd[ab]8 are two separate NTFS-3g partitions
 /dev/sdc4 is an extended partition
 /dev/sdc5 is an ext3 partition


Here is some more data:
$ cat /proc/partitions
major minor  #blocks  name

   8        0  976762584 sda
   8        5      96358 sda5
   8        6  849717981 sda6
   8        7    4883728 sda7
   8        8   97651071 sda8
   8       16  976762584 sdb
   8       32  199148544 sdc
   9        0      96256 md0
   9        1  849717888 md1
 253        0    5242880 dm-0
 253        1   26214400 dm-1
 253        2  398827520 dm-2
 253        3  419430400 dm-3
 253        4  199148480 dm-4
 253        5  199141708 dm-5
 253        6  199141677 dm-6
 253        7  976762496 dm-7
 253        8   24410736 dm-8
 253        9  952349265 dm-9
 253       10      96358 dm-10
 253       11  849717981 dm-11
 253       12    4883728 dm-12
 253       13   97651071 dm-13

$ dmraid -r
/dev/sdc: pdc, "pdc_bifeafjhe", stripe, ok, 398296960 sectors, data@ 0
/dev/sdb: pdc, "pdc_bggcadjcg", stripe, ok, 1953524992 sectors, data@ 0
/dev/sda: pdc, "pdc_bdcefciig", stripe, ok, 1953524992 sectors, data@ 0

So the RAID partitions are still there (and, as I mentioned before, I can't delete them without messing up the ability of the BIOS to recognize the disks on boot).

Running 'dmraid -s' interestingly shows that /dev/sda is "Set" (but not active) while /dev/sdb and /dev/sdc are "Active Set" -- it's not clear to me, though, how/why they got activated if I turned off dmraid.

Note: I recall having issues with the Promise controller back in early Fedora Core 1 that were long since fixed... (https://bugzilla.redhat.com/show_bug.cgi?id=216078)

Comment 8 Need Real Name 2009-12-23 03:15:43 UTC
Another update:
I can create all the missing /dev/sdxN partitions by running 'partprobe -s'
(and also, I think, by running 'blockdev --rereadpt /dev/sdx' on each device x).
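
For example, something like the following should ask the kernel to re-read all three partition tables (a sketch; adjust the device list to match your system):

# for d in /dev/sda /dev/sdb /dev/sdc; do blockdev --rereadpt "$d"; done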

However, if I do that, then I can no longer mount those devices -- I get the following types of error messages when mounting:
     ntfs-3g-mount: mount failed: Device or resource busy
     mount: /dev/sdc5 already mounted or /mnt/mymountpoint busy

So back to square one...

Comment 9 Harald Hoyer 2009-12-23 12:56:28 UTC
Does adding "rd_NO_DM rd_NO_MDIMSM" fix your problem?

Comment 10 Need Real Name 2009-12-24 02:38:47 UTC
I tried rd_NO_DM as noted in comment #7, but that had the weird effect of only generating some (but not all) of the /dev/sdaN partitions on /dev/sda, and it didn't do anything for /dev/sdb and /dev/sdc.
Even worse, the partitions of /dev/sda that were not generated became inaccessible even as /dev/dm-N devices (whereas they had all been accessible as /dev/dm-N devices before adding the kernel parameter).

Per your suggestion, I nevertheless tried adding both rd_NO_DM and rd_NO_MDIMSM, and I got the same situation as above.

If you haven't already, please read the detailed chain of earlier posts where I outline the troubleshooting I have done to date. Perhaps the details will be helpful in suggesting solutions or additional troubleshooting approaches.

Thanks!

Comment 11 Stijn Hoop 2010-01-09 19:15:31 UTC
Might this be an instance of bug 543749 as well?

To test, edit

/usr/share/dracut/modules.d/90mdraid/65-md-incremental-imsm.rules

per bug 543749 comment #1

Then generate a new initrd by

# dracut /boot/initrd-TEST.img $(uname -r)

And boot that initrd from GRUB.
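
For example, a temporary grub.conf stanza for the test image might look like this (a sketch -- the paths and kernel version are hypothetical and must match your installed kernel):

title Fedora TEST initrd
        root (hd0,4)
        kernel /vmlinuz-2.6.31.9-174.fc12.i686 ro root=LABEL=root
        initrd /initrd-TEST.img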

Comment 12 Need Real Name 2010-01-11 07:35:24 UTC
Interesting bug report... however:
I don't have a file called:
       /etc/udev/rules.d/65-md-incremental-imsm.rules
(and rpm -qf doesn't think I'm missing it either)
Though I do have a file called:
/usr/share/dracut/modules.d/90mdraid/65-md-incremental-imsm.rules

Is it sufficient to just edit the /usr/share/dracut version and run dracut to generate a new initrd, or do I need to do something to get the file into my /etc/udev/rules.d folder?

Comment 13 Stijn Hoop 2010-01-11 08:26:34 UTC
Yes, you need to edit the /usr/share/dracut version; see my comment #11 :)

Comment 14 Need Real Name 2010-01-19 22:03:37 UTC
OK - I tried that but it failed to boot properly.
The last lines on the console screen were:

    WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
    FATAL: Could not load /lib/modules/2.6.31.9-174.fc12.i686/modules.dep: No such file or directory
    (repeated several times)

    No root device found

    Boot has failed sleeping forever

I imagine the real problem is "no root device found"

Note my grub.conf stanza is:

title Fedora (2.6.31.9-174.fc12.i686)
        root (hd0,4)
        kernel /vmlinuz-2.6.31.9-174.fc12.i686 ro root=LABEL=root  LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us agp=off rhgb quiet
        initrd /initramfs-2.6.31.9-174.fc12.i686.img

Comment 15 Harald Hoyer 2010-01-26 11:25:31 UTC
(In reply to comment #14)
> OK - I tried that but it failed to boot properly.
> The last lines on the console screen were:
> 
>     WARNING: Deprecated config file /etc/modprobe.conf, all config files belong
> into /etc/modprobe.d/.
>     FATAL: Could not load /lib/modules/2.6.31.9-174.fc12.i686/modules.dep: No
> such file or directory
>     (repeated several times)
> 
>     No root device found
> 
>     Boot has failed sleeping forever
> 
> I imagine the real problem is "no root device found"
> 
> Note my grub.conf stanza is:
> 
> title Fedora (2.6.31.9-174.fc12.i686)
>         root (hd0,4)
>         kernel /vmlinuz-2.6.31.9-174.fc12.i686 ro root=LABEL=root 
> LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us agp=off
> rhgb quiet
>         initrd /initramfs-2.6.31.9-174.fc12.i686.img    


    FATAL: Could not load /lib/modules/2.6.31.9-174.fc12.i686/modules.dep: No such file or directory

indicates that your initramfs was built for the wrong kernel!
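
A rebuild against the currently running kernel would look something like this (a sketch; dracut's -f/--force overwrites an existing image):

# dracut -f /boot/initramfs-$(uname -r).img $(uname -r)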

Comment 16 Need Real Name 2010-01-26 14:37:54 UTC
Doh - that was dumb of me - yum upgraded me to a new kernel but I ran dracut on the old one...

OK - I redid it and it booted ok, but I still get all the /dev/dm-NN listings rather than the desired /dev/sdaN, /dev/sdbN, /dev/sdcN listings.

Note I unpacked the initramfs and verified that the file etc/udev/rules.d/65-md-incremental-imsm.rules now appears in the initramfs with the following lines properly commented out:
  #ENV{DEVTYPE}!="partition", \
  #RUN+="/sbin/partx -d --nr 1-1024 $env{DEVNAME}"
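
(For reference, one way to unpack and inspect the image -- a sketch, assuming the usual gzip-compressed cpio format that dracut produces:

$ mkdir /tmp/initrd && cd /tmp/initrd
$ zcat /boot/initramfs-$(uname -r).img | cpio -id
$ cat etc/udev/rules.d/65-md-incremental-imsm.rules
)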

Note that I still don't have the rule 65-md-incremental-imsm.rules in my actual /etc/udev/rules.d folder -- it appears only in the initramfs.

The problem seems (to me) to be that dracut is still "Scanning for dmraid devices" even though I am not really using dmraid (i.e., hardware RAID). Again, the problem is probably due to the fact that in order for my motherboard to recognize the SATA drives, the built-in Promise 20376 SATA/RAID controller must be configured to set up each SATA drive as a separate RAID0 array with that drive as the sole element. Note, however, that I never had this problem in FC8, FC6, or FC1 -- so something must have changed that makes this controller no longer work properly.

Is there any way to manually force dracut not to scan for dmraid devices?

Here is the detail from /var/log/messages:

   ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
   ata3.00: ATA-8: ST31000340AS, SD15, max UDMA/133
   ata3.00: 1953525168 sectors, multi 0: LBA48 NCQ (depth 0/32)
   ata3.00: configured for UDMA/133
   scsi 2:0:0:0: Direct-Access     ATA      ST31000340AS     SD15 PQ: 0 ANSI: 5
   sd 2:0:0:0: Attached scsi generic sg2 type 0
   sd 2:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
   sd 2:0:0:0: [sda] Write Protect is off
   sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
   sda: sda1 sda2 < sda5 sda6 sda7 sda8 >
   sd 2:0:0:0: [sda] Attached SCSI disk
   ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
   ata4.00: ATA-8: ST31000340AS, SD15, max UDMA/133
   ata4.00: 1953525168 sectors, multi 0: LBA48 NCQ (depth 0/32)
   ata4.00: configured for UDMA/133
   scsi 3:0:0:0: Direct-Access     ATA      ST31000340AS     SD15 PQ: 0 ANSI: 5
   sd 3:0:0:0: Attached scsi generic sg3 type 0
   sd 3:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
   sd 3:0:0:0: [sdb] Write Protect is off
   sd 3:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
   sdb: sdb1 sdb2 < sdb5 sdb6 sdb7 sdb8 >
   sd 3:0:0:0: [sdb] Attached SCSI disk
   ata5.00: ATA-7: Maxtor 6B200R0, BAH41BM0, max UDMA/133
   ata5.00: 398297088 sectors, multi 0: LBA48
   ata5.00: configured for UDMA/133
   scsi 4:0:0:0: Direct-Access     ATA      Maxtor 6B200R0   BAH4 PQ: 0 ANSI: 5
   sd 4:0:0:0: Attached scsi generic sg4 type 0
   sd 4:0:0:0: [sdc] 398297088 512-byte logical blocks: (203 GB/189 GiB)
   sd 4:0:0:0: [sdc] Write Protect is off
   sd 4:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
   sdc: sdc4 < sdc5 >
   sd 4:0:0:0: [sdc] Attached SCSI disk
   generic-usb 0003:051D:0002.0001: hiddev96,hidraw0: USB HID v1.10 Device [American Power Conversion Back-UPS RS 900 FW:9.o2 .D USB FW:o2 ] on usb-0000:00:1d.0-1/input0
   dracut: Scanning for dmraid devices
   dracut: Found dmraid sets:
   dracut: pdc_bifeafjhe pdc_bggcadjcg pdc_bdcefciig
   dracut: Activating pdc_bifeafjhe
   dracut: The dynamic shared library "libdmraid-events-pdc.so" could not be loaded:
   dracut: libdmraid-events-pdc.so: cannot open shared object file: No such file or directory
   dracut: RAID set "pdc_bifeafjhe" was activated
   dracut: Activating pdc_bggcadjcg
   dracut: The dynamic shared library "libdmraid-events-pdc.so" could not be loaded:
   dracut: libdmraid-events-pdc.so: cannot open shared object file: No such file or directory
   dracut: RAID set "pdc_bggcadjcg" was activated
   dracut: Activating pdc_bdcefciig
   dracut: The dynamic shared library "libdmraid-events-pdc.so" could not be loaded:
   dracut: libdmraid-events-pdc.so: cannot open shared object file: No such file or directory
   dracut: RAID set "pdc_bdcefciig" was activated
   dracut: Autoassembling MD Raid
   md: md0 stopped.
   md: bind<dm-6>
   md: bind<dm-13>
   md: raid1 personality registered for level 1
   raid1: raid set md0 active with 2 out of 2 mirrors
   md0: bitmap initialized from disk: read 1/1 pages, set 2 bits
   created bitmap (12 pages) for device md0
   md0: detected capacity change from 0 to 98566144
   dracut: mdadm: /dev/md0 has been started with 2 drives.
   md0: unknown partition table
   md: md1 stopped.
   md: bind<dm-7>
   md: bind<dm-14>
   raid1: raid set md1 active with 2 out of 2 mirrors
   md1: bitmap initialized from disk: read 13/13 pages, set 56 bits
   created bitmap (203 pages) for device md1
   md1: detected capacity change from 0 to 870111117312
   md1:
   dracut: mdadm: /dev/md1 has been started with 2 drives.
   unknown partition table
   dracut: Scanning devices md1  for LVM volume groups
   dracut: Reading all physical volumes. This may take a while...
   dracut: Found volume group "lvm-raid" using metadata type lvm2
   dracut: 4 logical volume(s) in volume group "lvm-raid" now active
   dracut: Autoassembling MD Raid
   EXT4-fs (dm-18): barriers enabled
   kjournald2 starting: pid 465, dev dm-18:8, commit interval 5 seconds
   EXT4-fs (dm-18): delayed allocation enabled
   EXT4-fs: file extents enabled
   EXT4-fs: mballoc enabled
   EXT4-fs (dm-18): mounted filesystem with ordered data mode
   dracut: Mounted root filesystem /dev/dm-18
   dracut: Loading SELinux policy
   type=1403 audit(1264496428.259:2): policy loaded auid=4294967295 ses=4294967295
   dracut: Switching root
   udev: starting version 145

Comment 17 Harald Hoyer 2010-01-26 14:47:28 UTC
(In reply to comment #16)
> The problem seems (to me) to be that dracut is still "Scanning for dmraid
> devices" even though I am not really using dmraid  (i.e. hardware RAID). 

add "rd_NO_DM" to the kernel command line

see the dracut man page:

rd_NO_DM
              disable DM RAID detection

Comment 18 Harald Hoyer 2010-01-26 14:49:11 UTC
And you might want to try:

http://admin.fedoraproject.org/updates/dracut-004-4.fc12

Comment 19 Harald Hoyer 2010-01-26 15:02:09 UTC
(In reply to comment #16)
> Note that I still don't have the rule 65-md-incremental-imsm.rules in my actual
> /etc/udev/rules.d folder -- it appears only in the initramfs.

That's ok.

Comment 20 Harald Hoyer 2010-01-26 15:09:39 UTC
dracut removes all normal partitions with the udev rule

ENV{DEVTYPE}!="partition", \
   RUN+="/sbin/partx -d --nr 1-1024 $env{DEVNAME}"

because normally the partitions of drives that are part of a RAID set should not be visible.

With "rd_NO_DM", this rule should never be executed.

Comment 21 Need Real Name 2010-01-26 15:13:40 UTC
(In reply to comment #17)
> (In reply to comment #16)
> > The problem seems (to me) to be that dracut is still "Scanning for dmraid
> > devices" even though I am not really using dmraid  (i.e. hardware RAID). 
> 
> add "rd_NO_DM" to the kernel command line
> 
> see the dracut man page:
> 
> rd_NO_DM
>               disable DM RAID detection    

Perhaps that is the right way to go...
However, I tried that already (see
https://bugzilla.redhat.com/show_bug.cgi?id=548278#c7 for details) and it ended
up generating only some of the /dev/sdxN entries. But maybe we need to go
further down that path and figure out why, even with dmraid turned off, some of
the partitions were still not found -- to me that is even harder to understand.

Comment 22 Harald Hoyer 2010-01-26 15:31:47 UTC
(In reply to comment #7)
> However, I am still missing most of my partitions:
> sda: /dev/sda1 /dev/sda2
> sdb: /dev/sdb1 /dev/sdb2 /dev/sdb5 /dev/sdb6 /dev/sdb7 /dev/sdb8
> sdc: /dev/sdc4 /dev/sdc5
> 
> Even worse /dev/sda1 is now not available at all since dmraid is turned off for
> that device.

Yes, that is odd...

"rd_NO_MDIMSM rd_NO_DM" will skip both rules, which would remove partitions.

Comment 23 Harald Hoyer 2010-01-26 15:34:37 UTC
can you please provide the output of:

# /sbin/blkid -o udev -p /dev/sda
# /sbin/blkid -o udev -p /dev/sdb
# /sbin/blkid -o udev -p /dev/sdc

Comment 24 Need Real Name 2010-01-26 16:08:34 UTC
(In reply to comment #23)
> can you please provide the output of:
> 
> # /sbin/blkid -o udev -p /dev/sda
> # /sbin/blkid -o udev -p /dev/sdb
> # /sbin/blkid -o udev -p /dev/sdc    

# /sbin/blkid -o udev -p /dev/sda
ID_FS_TYPE=promise_fasttrack_raid_member
ID_FS_USAGE=raid

# /sbin/blkid -o udev -p /dev/sdb
ID_FS_VERSION=0.90.0
ID_FS_UUID=7c6a0713-3f0c-8942-b1d9-add04c76b413
ID_FS_UUID_ENC=7c6a0713-3f0c-8942-b1d9-add04c76b413
ID_FS_TYPE=linux_raid_member
ID_FS_USAGE=raid

# /sbin/blkid -o udev -p /dev/sdc
ID_FS_TYPE=promise_fasttrack_raid_member
ID_FS_USAGE=raid

Note: sda and sdb are identical 1TB SATA drives attached to the 2 onboard SATA connectors of the Promise 20376 controller;
      sdc is a 200GB PATA drive attached to the PATA connector of the Promise 20376 controller.
So, it's a little strange to me that sda and sdc have identical output but not sda and sdb.

The kernel is on /dev/sda5 (the /boot partition).

Comment 25 Need Real Name 2010-01-26 16:13:54 UTC
(In reply to comment #24)
> (In reply to comment #23)
> > can you please provide the output of:
> > 
> > # /sbin/blkid -o udev -p /dev/sda
> > # /sbin/blkid -o udev -p /dev/sdb
> > # /sbin/blkid -o udev -p /dev/sdc    
> 
> # /sbin/blkid -o udev -p /dev/sda
> ID_FS_TYPE=promise_fasttrack_raid_member
> ID_FS_USAGE=raid
> 
> # /sbin/blkid -o udev -p /dev/sdb
> ID_FS_VERSION=0.90.0
> ID_FS_UUID=7c6a0713-3f0c-8942-b1d9-add04c76b413
> ID_FS_UUID_ENC=7c6a0713-3f0c-8942-b1d9-add04c76b413
> ID_FS_TYPE=linux_raid_member
> ID_FS_USAGE=raid
> 
> # /sbin/blkid -o udev -p /dev/sdc
> ID_FS_TYPE=promise_fasttrack_raid_member
> ID_FS_USAGE=raid
> 
> Note: sda and sdb are identical 1TB SATA drives mounted on the 2 onboard SATA
> connectors to the Promise 20376 controller
>       sdc is a 200MB PATA drive mounted on the PATA connector to the Promise  
> 20376 controller
> So, it's a little strange to me that sda and sdc have the identical output but
> not sda and sdb.
> 
> The kernel is mounted on /dev/sda5.    

Just to clarify - I ran the above on the current configuration, where I commented out the lines in 65-md-incremental-imsm.rules (but without adding the kernel parameter rd_NO_DM) - let me know if you wanted this info for a different config.

Comment 26 Rich Rauenzahn 2010-02-08 16:25:54 UTC
I wonder if I'm having the same problem with my 4-disk RAID5 set.... I just upgraded to FC12 from FC11 last night.

Before udev fires up, I get this message:

md: raid4 personality registered for level 4
raid5: device sdd1 operational as raid disk 1
raid5: device sdb1 operational as raid disk 3
raid5: allocated 4221kB for md1
raid5: not enough operational devices for md1 (2/4 failed)
RAID5 conf printout:
 --- rd:4 wd:2
 disk 1, o:1, dev:sdd1
 disk 3, o:1, dev:sdb1
raid5: failed to run raid set md1
md: pers->run() failed ...
dracut: mdadm: failed to RUN_ARRAY /dev/md1: Input/output error
dracut: mdadm: Not enough devices to start the array.
dracut: Scanning devices md0 sdd sdg1  for LVM volume groups 
dracut: Reading all physical volumes. This may take a while...
dracut: Found volume group "VolGroupMedia" using metadata type lvm2
dracut: Found volume group "VolGroup00" using metadata type lvm2
dracut: 1 logical volume(s) in volume group "VolGroupMedia" now active
dracut: 14 logical volume(s) in volume group "VolGroup00" now active

When the system finishes booting, the appropriate devices /dev/sd[a-z] exist, but the partition devices do not (i.e., no /dev/sd[a-z]1). Running partprobe after boot, re-adding the members with mdadm --re-add, and manually running the array works fine.

So this is some kind of scanning problem where not all of my SATA drives are partprobed at boot.
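
(The manual recovery amounts to roughly the following -- a sketch; the member device names vary from boot to boot, and the --re-add step is repeated for each missing member:

# partprobe
# mdadm /dev/md1 --re-add /dev/sda1
# mdadm --run /dev/md1
)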

My config is

PATA x 2
SATA x 6

Another odd thing, and maybe Linux is just being more agnostic these days about drive letters, but my PATA drives used to ALWAYS be sd[ab] -- now sometimes my SATA drives are sd[ab] and the PATAs get assigned later drive letters. Luckily this isn't a problem, as I have mdadm.conf and volgroups and fs labels -- but I wonder if that gives any insight into why the drives are not being scanned at boot?

Here's an example drive from the raid5 set:

[root@tendo log]# fdisk -l /dev/sdd

Disk /dev/sdd: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       60801   488384001   fd  Linux raid autodetect
[root@tendo log]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] 
md1 : active raid5 sdc1[2] sda1[0] sdd1[1] sdb1[3]
      1465151808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      
md0 : active raid1 sde3[0] sdf3[1]
      116430976 blocks [2/2] [UU]
      
unused devices: <none>

(this is one of the times when the PATAs didn't come up as sda and sdb)

Comment 27 Sami Farin 2010-02-22 13:45:50 UTC
With udev-151-2, mounting the partition /dev/disk/by-id/ata-ST3160023A_XXX-part6 failed, because udev did not create any symlinks for hda, only sda.
It had worked earlier, probably for more than a year.

# grep hda6 /proc/partitions 
   3        6  126270868 hda6
# blkid -o udev /dev/hda6
#
# grep hda6 /var/log/dmesg
[    3.737588]  hda: hda1 hda2 < hda5 hda6 >
#

Comment 28 Harald Hoyer 2010-03-17 16:18:39 UTC
(In reply to comment #27)
> with udev-151-2, mounting a partition /dev/disk/by-id/ata-ST3160023A_XXX-part6
> failed, because udev did not create any symlinks for hda , only sda.
> It has worked earlier, probably for more than a year.
> 
> # grep hda6 /proc/partitions 
>    3        6  126270868 hda6
> # blkid -o udev /dev/hda6
> #
> # grep hda6 /var/log/dmesg
> [    3.737588]  hda: hda1 hda2 < hda5 hda6 >
> #    

hda? hda should be gone with recent kernels!

Comment 29 Sami Farin 2010-03-17 16:45:21 UTC
What do you mean, "gone"?  Whose idea was that?
What problem does it solve?

Comment 30 Harald Hoyer 2010-03-17 17:15:49 UTC
It's sd* now, because of the kernel's ATA layer change.

Comment 31 Sami Farin 2010-03-17 19:21:34 UTC
My point was that I have only ata-ST3320620AS* and scsi-SATA_ST3320620AS* (sda) files in /dev/disk/by-id/, but NO ata-ST3160023A* or scsi-SATA_ST3160023A* (hda) files.

So are you saying I am not supposed to see any *ST3160023A* files?

Comment 32 lav 2010-06-01 07:43:09 UTC
I also have a similar problem with missing partitions, and also with a Promise controller (but PATA, not SATA). I have rd_NO_DM on the kernel command line.
Here are the relevant logs:

dmesg:
pata_pdc2027x 0000:02:01.0: PLL input clock 16690 kHz
scsi6 : pata_pdc2027x
scsi7 : pata_pdc2027x
ata7: PATA max UDMA/100 mmio m65536@0xd9000000 cmd 0xd90017c0 irq 17
ata8: PATA max UDMA/100 mmio m65536@0xd9000000 cmd 0xd90015c0 irq 17
pata_pdc2027x 0000:02:02.0: PCI->APIC IRQ transform: INT A -> IRQ 18
pata_pdc2027x 0000:02:02.0: PLL input clock 16707 kHz
...
ata8.00: ATA-5: IC35L120AVVA07-0, VA6OA52A, max UDMA/100
ata8.00: 241254720 sectors, multi 0: LBA 
ata8.00: configured for UDMA/100
scsi 7:0:0:0: Direct-Access     ATA      IC35L120AVVA07-0 VA6O PQ: 0 ANSI: 5
sd 7:0:0:0: [sdd] 241254720 512-byte logical blocks: (123 GB/115 GiB)
sd 7:0:0:0: Attached scsi generic sg3 type 0
scsi 8:0:0:0: Direct-Access     ATA      IC35L120AVVA07-0 VA6O PQ: 0 ANSI: 5
sd 8:0:0:0: Attached scsi generic sg4 type 0
sd 8:0:0:0: [sde] 241254720 512-byte logical blocks: (123 GB/115 GiB)
sd 8:0:0:0: [sde] Write Protect is off
sd 8:0:0:0: [sde] Mode Sense: 00 3a 00 00
sd 8:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sde: sde1
sd 7:0:0:0: [sdd] Write Protect is off
sd 7:0:0:0: [sdd] Mode Sense: 00 3a 00 00
sd 7:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sdd: sdd1
sd 8:0:0:0: [sde] Attached SCSI disk
sd 7:0:0:0: [sdd] Attached SCSI disk
dracut: Scanning devices sda2 sdb2 sdc1  for LVM volume groups 

/proc/partitions:
   8        0  732574584 sda
   8        1     112423 sda1
   8        2  732459577 sda2
   8       16 1465138584 sdb
   8       17     112423 sdb1
   8       18 1465023577 sdb2
   8       32  488385527 sdc
   8       33  488384001 sdc1
   8       64  120627360 sde
   8       48  120627360 sdd

kpartx -l output:
sde1 : 0 241254657 /dev/sde 63
sdd1 : 0 241254657 /dev/sdd 63

dmraid -r:
no raid disks
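
(If needed, kpartx can also create device-mapper nodes for those partitions as a stopgap -- a sketch:

# kpartx -a /dev/sdd

which, per the -l listing above, should yield /dev/mapper/sdd1.)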

Comment 33 Bug Zapper 2010-11-04 03:08:53 UTC
This message is a reminder that Fedora 12 is nearing its end of life.
Approximately 30 (thirty) days from now Fedora will stop maintaining
and issuing updates for Fedora 12.  It is Fedora's policy to close all
bug reports from releases that are no longer maintained.  At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '12'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 12's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that 
we may not be able to fix it before Fedora 12 is end of life.  If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora please change the 'version' of this 
bug to the applicable version.  If you are unable to change the version, 
please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events.  Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 34 Bug Zapper 2010-12-04 01:33:31 UTC
Fedora 12 changed to end-of-life (EOL) status on 2010-12-02. Fedora 12 is 
no longer maintained, which means that it will not receive any further 
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of 
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.

