Description of problem:
dmraid array is inactive since installation of lvm2-2.02.39-7.fc10.i386.
Entries in /dev/mapper for the sil_aibhacaicccd set are no longer present,
and the RAID group is inactive.

kernel-PAE-2.6.27.7-134.fc10.i686 will not boot with
root=/dev/VolGroup00/LogVol00. There are errors complaining about duplicate
UUID entries, and instead of using the RAID set it boots with a root of
/dev/sdb2. That is how I found the problem. There is no
kernel-PAE-2.6.27.7-134.fc10.i686 on /dev/sdb2, so the system falls back to
kernel-PAE-2.6.27.5-117.fc10.i686.

kernel-PAE-2.6.27.5-117.fc10.i686 will boot with
root=/dev/VolGroup00/LogVol00. It also complains about the set 253,0 not
being defined. It then boots from /dev/sda2.

Version-Release number of selected component (if applicable):
dmraid-1.0.0.rc15-2.fc10.i386
lvm2-2.02.39-7.fc10.i386

Additional info:

dmraid -r
/dev/sdb: sil, "sil_aibhacaicccd", mirror, ok, 781420720 sectors, data@ 0
/dev/sda: sil, "sil_aibhacaicccd", mirror, ok, 781420720 sectors, data@ 0

dmraid -s
*** Set
name   : sil_aibhacaicccd
size   : 781420720
stride : 0
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0

dmraid -dsactive
DEBUG: _find_set: searching sil_aibhacaicccd
DEBUG: _find_set: not found sil_aibhacaicccd
DEBUG: _find_set: searching sil_aibhacaicccd
DEBUG: _find_set: not found sil_aibhacaicccd
DEBUG: _find_set: searching sil_aibhacaicccd
DEBUG: _find_set: found sil_aibhacaicccd
DEBUG: _find_set: searching sil_aibhacaicccd
DEBUG: _find_set: found sil_aibhacaicccd
DEBUG: checking sil device "/dev/sda"
DEBUG: checking sil device "/dev/sdb"
DEBUG: set status of set "sil_aibhacaicccd" to 16
DEBUG: freeing devices of RAID set "sil_aibhacaicccd"
DEBUG: freeing device "sil_aibhacaicccd", path "/dev/sda"
DEBUG: freeing device "sil_aibhacaicccd", path "/dev/sdb"

dmraid -dsinactive
DEBUG: _find_set: searching sil_aibhacaicccd
DEBUG: _find_set: not found sil_aibhacaicccd
DEBUG: _find_set: searching sil_aibhacaicccd
DEBUG: _find_set: not found sil_aibhacaicccd
DEBUG: _find_set: searching sil_aibhacaicccd
DEBUG: _find_set: found sil_aibhacaicccd
DEBUG: _find_set: searching sil_aibhacaicccd
DEBUG: _find_set: found sil_aibhacaicccd
DEBUG: checking sil device "/dev/sda"
DEBUG: checking sil device "/dev/sdb"
DEBUG: set status of set "sil_aibhacaicccd" to 16
*** Set
name   : sil_aibhacaicccd
size   : 781420720
stride : 0
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0
DEBUG: freeing devices of RAID set "sil_aibhacaicccd"
DEBUG: freeing device "sil_aibhacaicccd", path "/dev/sda"
DEBUG: freeing device "sil_aibhacaicccd", path "/dev/sdb"

l /dev/dm-*
brw-rw---- 1 root disk 253, 0 2008-12-13 04:44 /dev/dm-0
brw-rw---- 1 root disk 253, 1 2008-12-13 04:44 /dev/dm-1
brw-rw---- 1 root disk 253, 2 2008-12-13 04:44 /dev/dm-2
brw-rw---- 1 root disk 253, 3 2008-12-13 04:45 /dev/dm-3

l /dev/mapper
total 0
drwxr-xr-x  2 root root  140    2008-12-13 04:45 ./
drwxr-xr-x 18 root root 4480    2008-12-13 04:46 ../
crw-rw----  1 root root  10, 63 2008-12-13 04:44 control
brw-rw----  1 root disk 253,  3 2008-12-13 04:45 VGforMyth-video
brw-rw----  1 root disk 253,  0 2008-12-13 04:45 VolGroup00-LogVol00
brw-rw----  1 root disk 253,  2 2008-12-13 04:44 VolGroup00-LogVol01
brw-rw----  1 root disk 253,  1 2008-12-13 04:45 VolGroup00-LogVol02

l /dev/VolGroup00/
total 0
drwx------  2 root root  100 2008-12-13 04:44 ./
drwxr-xr-x 18 root root 4480 2008-12-13 04:46 ../
lrwxrwxrwx  1 root root   31 2008-12-13 04:44 LogVol00 -> /dev/mapper/VolGroup00-LogVol00
lrwxrwxrwx  1 root root   31 2008-12-13 04:44 LogVol01 -> /dev/mapper/VolGroup00-LogVol01
lrwxrwxrwx  1 root root   31 2008-12-13 04:44 LogVol02 -> /dev/mapper/VolGroup00-LogVol02

With kernel-PAE-2.6.27.5-117.fc10.i686 booted root=/dev/VolGroup00/LogVol00:

l /boot
total 17430
drwxr-xr-x  5 root root    1024 2008-12-10 23:36 ./
drwxr-xr-x 24 root root    4096 2008-12-13 04:45 ../
-rw-r--r--  1 root root   90977 2008-11-18 12:17 config-2.6.27.5-117.fc10.i686.PAE
-rw-r--r--  1 root root   91017 2008-12-01 22:40 config-2.6.27.7-134.fc10.i686.PAE
drwxr-xr-x  3 root root    1024 2008-11-29 13:42 efi/
drwxr-xr-x  2 root root    1024 2008-12-11 01:23 grub/
-rw-------  1 root root 3846631 2008-11-29 13:44 initrd-2.6.27.5-117.fc10.i686.PAE.img
-rw-r--r--  1 root root 2247043 2008-11-29 21:18 initrd-2.6.27.5-117.fc10.i686.PAEkdump.img
-rw-------  1 root root 3891124 2008-12-10 23:36 initrd-2.6.27.7-134.fc10.i686.PAE.img
drwx------  2 root root   12288 2008-11-29 13:30 lost+found/
-rw-r--r--  1 root root  116496 2008-11-17 22:57 memtest86+-2.10
-rw-r--r--  1 root root 1106709 2008-11-18 12:17 System.map-2.6.27.5-117.fc10.i686.PAE
-rw-r--r--  1 root root 1106865 2008-12-01 22:40 System.map-2.6.27.7-134.fc10.i686.PAE
-rwxr-xr-x  1 root root 2615056 2008-11-18 12:17 vmlinuz-2.6.27.5-117.fc10.i686.PAE*
-rwxr-xr-x  1 root root 2614992 2008-12-01 22:40 vmlinuz-2.6.27.7-134.fc10.i686.PAE*
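For reference, whether the set can still be assembled by hand is easy to check from a booted system. A minimal sketch, run as root (sil_aibhacaicccd is the set name from this report; substitute whatever "dmraid -s" prints on your system):

dmraid -s          # the mirror set should be listed with status "ok"
dmraid -ay         # activate all discovered RAID sets
ls -l /dev/mapper  # the sil_aibhacaicccd node should now be present
dmsetup table      # confirm the mirror mapping is loaded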
This looks like either a bogus initrd (please attach yours for analysis) or a known nash flaw preventing the RAID set from being activated.
Created attachment 326982 [details] initrd-2.6.27.5-117.fc10.i686.PAE.img
Created attachment 326983 [details] initrd-2.6.27.5-117.fc10.i686.PAEkdump.img
Created attachment 326984 [details] initrd-2.6.27.7-134.fc10.i686.PAE.img
I added three files: two initrd files, one for the older kernel that boots from /dev/sda and one for the newer kernel which tries to boot from /dev/sdb and fails, plus the kdump initrd for the older kernel, which does boot.
The dmraid call is missing completely from the -134 initrd.img...

initrd-2.6.27.5-117.fc10.i686.PAE.img:

modprobe -q sata_sil
echo Waiting for driver initialization.
stabilized --hash --interval 250 /proc/scsi/scsi
echo Making device-mapper control node
mkdmnod
mkblkdevs
rmparts sdc
rmparts sdb
dm create sil_aibhacaicccd 0 781420720 mirror core 2 131072 nosync 2 8:16 0 8:32 0 1 handle_errors
dm partadd sil_aibhacaicccd
echo Scanning logical volumes
lvm vgscan --ignorelockingfailure
echo Activating logical volumes
lvm vgchange -ay --ignorelockingfailure VolGroup00

initrd-2.6.27.7-134.fc10.i686.PAE.img:

modprobe -q sata_sil
echo Waiting for driver initialization.
stabilized --hash --interval 250 /proc/scsi/scsi
echo Making device-mapper control node
mkdmnod
mkblkdevs
echo Scanning logical volumes
lvm vgscan --ignorelockingfailure
echo Activating logical volumes
lvm vgchange -ay --ignorelockingfailure VolGroup00
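For anyone wanting to compare their own images, the init scripts above can be extracted from the initrd files for inspection. A minimal sketch, assuming the Fedora 10 gzipped-cpio initrd format:

mkdir /tmp/initrd-134 && cd /tmp/initrd-134
zcat /boot/initrd-2.6.27.7-134.fc10.i686.PAE.img | cpio -idmv
grep -E 'dmraid|dm create' init   # "init" is the nash script quoted above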
Milan, the init file you listed still has the old-style "dm create" calls in instead of "dmraid -ay ...", but it should work. John's 1st dump (comment #2) has "dm create ..." in and *should* work, whereas the 2nd one (comment #3) has neither dm nor dmraid calls in and will fail. The 3rd one (comment #4) has the same issue as the 2nd initrd. mkinitrd seems to fail to create a proper initrd. Changing component to mkinitrd.
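For comparison, the two activation styles look like this in the nash init script (the dm create lines are copied from the -117 initrd above; the dmraid line is only a sketch of the newer style, and the exact arguments mkinitrd emits may differ):

# Old style: nash is handed an explicit device-mapper mirror table.
dm create sil_aibhacaicccd 0 781420720 mirror core 2 131072 nosync 2 8:16 0 8:32 0 1 handle_errors
dm partadd sil_aibhacaicccd

# Newer style: dmraid reads the on-disk metadata and builds the table itself.
dmraid -ay sil_aibhacaicccd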
What versions of mkinitrd were used for creating the two initrds?
The version that is installed is mkinitrd-6.0.71-2.fc10.i386
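One way to check whether the installed mkinitrd still produces a broken image is to rebuild the -134 initrd by hand with verbose output and then re-extract it as sketched above. A hedged example, run as root (back up the existing image first):

cp /boot/initrd-2.6.27.7-134.fc10.i686.PAE.img /boot/initrd-134.bak
mkinitrd -f -v /boot/initrd-2.6.27.7-134.fc10.i686.PAE.img 2.6.27.7-134.fc10.i686.PAE
# the verbose output should show whether the sil_aibhacaicccd set is detected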
Anything going on with this? I am really in a quandary. I moved from Fedora 8 (stable, reliable, tried and true) to Fedora 9, which was, to be kind, premature in my opinion, and then to Fedora 10 in the hope that it had been debugged a bit better than F9 before being released. I now have real doubts. I know Fedora is free and the phrase "you get what you pay for" is often true, but Fedora had always been a good distribution in my opinion until 9, and I am getting a bad feeling about 10. Two new major versions of desktop environments at the same time, GNOME and KDE, and neither is really ready. Yes, I know this is a development environment. I used to promote Fedora to my friends; I don't do that now. Should I go back to Fedora 8? I know it is end of life, but it works.
Thanks to all for their hard work, but I gave up on dmraid. I now have a spare 400 G drive unplugged and un-powered.

It seems dmraid and LVM should be combined as one project rather than being separate; then they probably would not seem to be at cross purposes. I was trying to have a RAID backup of my entire boot drive, including the grub and boot areas, and I could never get that to work no matter what. I tried at least 7 fresh installations with various combinations (RAID and LVM, RAID only, etc.) and finally just gave up on RAID. I guess I'll have to buy an Adaptec controller and use hardware RAID. Anyway, I can't do any testing on this; I had to get a system that was usable. LVM has been stable for a while now. dmraid... I hope it gets to where dmraid and LVM work together.

One other point: as is often the case, documentation for dmraid is sparse. The man page is about all there is, and it does not really cut it for someone just getting started. It seems to be a reasonable reference, but an up-to-date, detailed HowTo covering the interactions with LVM is needed for Fedora. Repeating myself: I think the LVM and dmraid projects should combine to work synergistically.

Hope all had a Merry Christmas and a Happy New Year, and that the new year brings joy to you all.
(In reply to comment #11)
> Thanks to all for their hard work, but I gave up on dmraid. Now have a spare
> 400 G drive unplugged and un-powered.

John, I'm currently working on fixing various dmraid-related issues in mkinitrd. I assume from the above comment that you are not interested in testing those fixes (since you've removed the disk and all); if you are, take a look at bug 476818, which I'm using as a tracker for all F-10 dmraid issues.

*** This bug has been marked as a duplicate of bug 476818 ***