Bug 505291

Summary: udev does not make all /dev/sda* entries on boot
Product: Fedora
Version: 11
Component: udev
Hardware: x86_64
OS: Linux
Status: CLOSED DUPLICATE
Severity: medium
Priority: low
Reporter: Sami Knuutinen <sami.knuutinen>
Assignee: Harald Hoyer <harald>
QA Contact: Fedora Extras Quality Assurance <extras-qa>
CC: harald, ian_milligan
Last Closed: 2009-06-12 12:44:37 UTC

Attachments:
dmesg showing all partitions being detected by the kernel

Description Sami Knuutinen 2009-06-11 11:39:27 UTC
Description of problem:

After an F10->F11 upgrade, udev does not create the /dev/sda1 or /dev/sda2 entries on boot; only /dev/sda and /dev/sda3 are created. fdisk shows the partition table correctly:
$ sudo fdisk -l /dev/sda

Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x7ad74cf9

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14         263     2008125   82  Linux swap / Solaris
/dev/sda3             264       19457   154175805   83  Linux

After a boot, /dev/sda1 and /dev/sda2 do not exist:
$ sudo ls -al /dev/sda*
brw-rw---- 1 root disk 8, 0 11.6. 09:56 /dev/sda
brw-rw---- 1 root disk 8, 3 11.6. 09:56 /dev/sda3

If I open gparted (which I guess makes udev rescan the disk) and then close it without doing anything, all of the sda* entries appear:
$ sudo ls -al /dev/sda*
brw-rw---- 1 root disk 8, 0 11.6. 09:56 /dev/sda
brw-rw---- 1 root disk 8, 1 11.6. 14:31 /dev/sda1
brw-rw---- 1 root disk 8, 2 11.6. 14:31 /dev/sda2
brw-rw---- 1 root disk 8, 3 11.6. 09:56 /dev/sda3
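(Editorial note: opening gparted most likely forces a re-read of the partition table. The same effect can usually be achieved directly from the command line; the following is a sketch, assuming the partprobe and udevadm utilities are installed and the commands are run as root.)

```shell
# Ask the kernel to re-read the partition table of /dev/sda
# (presumably what gparted does under the hood):
partprobe /dev/sda

# Replay "change" uevents for all block devices so udev
# recreates any missing /dev/sda* nodes:
udevadm trigger --subsystem-match=block --action=change
```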


Version-Release number of selected component (if applicable):
Fedora 11
Kernel 2.6.29.4-167
udev 141-3


How reproducible:
always

Steps to Reproduce:
1. the disk has partitions sda1, sda2, and sda3
2. boot the system
3. after the boot, only sda3 is available
  
Actual results:
/dev/sda1 and /dev/sda2 are not available

Expected results:
all of the /dev/sda* entries for the disk's partitions must be available after a boot, without additional tricks

Additional info:

Comment 1 Ian Milligan 2009-06-11 23:34:51 UTC
I also experienced this issue after upgrading from Fedora 10 to 11. I have four partitions: sda1 (/boot), sda2 (swap), sda3 (/), and sdb1; sda1, sda3, and sdb1 are all ext3. After the upgrade the system would not boot because it could not find sda1 and sdb1 in order to fsck them. After disabling fsck for those partitions the system booted, and I discovered there were no /dev/sda1, /dev/sda2, or /dev/sdb1 device nodes, only /dev/sda, /dev/sda3, and /dev/sdb. As can be seen in the attached dmesg, all of these partitions were recognized by the kernel. They are also listed by fdisk -l.

If I try to manually create and mount a device node with 'mknod /dev/sda1 b 8 1' followed by 'mount /dev/sda1 /boot', I receive the message 'mount: /dev/sda1 is not a valid block device'.

Output of ls -al /dev/sd*:
brw-rw----. 1 root disk 8,  0 2009-06-11 15:42 /dev/sda
brw-rw----. 1 root disk 8,  3 2009-06-11 15:43 /dev/sda3
brw-rw----. 1 root disk 8, 16 2009-06-11 15:42 /dev/sdb

Output of fdisk -l /dev/sda:
Disk /dev/sda: 36.4 GB, 36420075008 bytes
255 heads, 63 sectors/track, 4427 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0002b4e0

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14         274     2096482+  82  Linux swap / Solaris
/dev/sda3             275        4427    33358972+  83  Linux

Output of fdisk -l /dev/sdb:
Disk /dev/sdb: 146.8 GB, 146815733760 bytes
255 heads, 63 sectors/track, 17849 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0002d7d2

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1       17849   143372061   83  Linux

Output of /lib/udev/vol_id /dev/sda:
ID_FS_USAGE=raid
ID_FS_TYPE=adaptec_raid_member
ID_FS_VERSION=8
ID_FS_UUID=
ID_FS_UUID_ENC=
ID_FS_LABEL=
ID_FS_LABEL_ENC=

Output of /lib/udev/vol_id /dev/sdb:
ID_FS_USAGE=raid
ID_FS_TYPE=adaptec_raid_member
ID_FS_VERSION=8
ID_FS_UUID=
ID_FS_UUID_ENC=
ID_FS_LABEL=
ID_FS_LABEL_ENC=
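(Editorial note: /lib/udev/vol_id was removed from later udev releases. On modern systems an equivalent low-level probe, shown here as a sketch, is blkid in superblock-probing mode, which likewise reports stale RAID-member signatures on the whole-disk device.)

```shell
# Probe the whole-disk device's superblocks directly;
# stale RAID metadata shows up as a *_raid_member TYPE
# (e.g. "adaptec_raid_member") instead of a filesystem:
blkid -p /dev/sda
```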

Output of blkid:
/dev/sda3: LABEL="/" UUID="b1c34f61-4121-487a-bf7c-b67bc0f807d4" TYPE="ext3" SEC_TYPE="ext2"

Output of uname -r:
2.6.29.4-167.fc11.i686.PAE

Output of yum list udev:
udev.i586 141-3.fc11 installed

Comment 2 Ian Milligan 2009-06-11 23:45:14 UTC
Created attachment 347498 [details]
dmesg showing all partitions being detected by the kernel

Comment 3 Harald Hoyer 2009-06-12 09:48:29 UTC
Directly after a reboot (i.e., while /dev/sda1 and /dev/sda2 are still missing), can you run:
# ls /sys/block/sda


see also bug #504961

Comment 4 Sami Knuutinen 2009-06-12 12:33:12 UTC
This is a duplicate of bug 504961. The problem was caused by the disk once having been part of a RAID set on a Promise SATA controller; when the disk was re-used, the RAID metadata was not removed.

The workaround is to add nodmraid to the kernel command line when booting.

The permanent fix is to remove the RAID metadata, either in the Promise BIOS or with dmraid -r -E.
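(Editorial note: the dmraid invocation above can be spelled out as follows; this is a sketch assuming the affected disk is /dev/sda and the dmraid package is installed. Erasing metadata is destructive to any RAID set the disk belongs to, so only run it on a disk known not to be an active RAID member.)

```shell
# List the RAID metadata dmraid finds on the system's disks,
# to confirm which disk carries the stale signature:
dmraid -r

# Erase the stale metadata from the affected disk
# (-r selects the raw metadata, -E erases it; dmraid
# prompts for confirmation before writing):
dmraid -r -E /dev/sda
```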

Comment 5 Harald Hoyer 2009-06-12 12:44:37 UTC

*** This bug has been marked as a duplicate of bug 504961 ***