Bug 476366

Summary: dmraid and LVM not working together; mapper raid definitions lost
Product: Fedora
Reporter: John Griffiths <fedora.jrg01>
Component: mkinitrd
Assignee: Peter Jones <pjones>
Status: CLOSED DUPLICATE
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: urgent
Priority: low
Version: 10
CC: agk, bmr, dcantrell, dwysocha, hdegoede, heinzm, katzj, lvm-team, mbroz, pjones, prockai, wtogami
Hardware: All
OS: Linux
Doc Type: Bug Fix
Last Closed: 2009-02-03 10:56:40 UTC
Attachments:
initrd-2.6.27.5-117.fc10.i686.PAE.img
initrd-2.6.27.5-117.fc10.i686.PAEkdump.img
initrd-2.6.27.7-134.fc10.i686.PAE.img

Description John Griffiths 2008-12-13 17:18:02 UTC
Description of problem:
The dmraid array has been inactive since the installation of lvm2-2.02.39-7.fc10.i386. The entries in /dev/mapper for the sil_aibhacaicccd set are no longer present, and the RAID group is inactive.

kernel-PAE-2.6.27.7-134.fc10.i686 will not boot with root=/dev/VolGroup00/LogVol00. There are errors complaining about duplicate UUID entries, and instead of using the raid set, it boots with a root of /dev/sdb2. That is how I found the problem. There is no kernel-PAE-2.6.27.7-134.fc10.i686 on /dev/sdb2, so the system falls back to kernel-PAE-2.6.27.5-117.fc10.i686.

kernel-PAE-2.6.27.5-117.fc10.i686 will boot with root=/dev/VolGroup00/LogVol00. It also complains about the set 253,0 not being defined. It then boots from /dev/sda2.

Version-Release number of selected component (if applicable):
dmraid-1.0.0.rc15-2.fc10.i386
lvm2-2.02.39-7.fc10.i386

Additional info: 

dmraid -r
/dev/sdb: sil, "sil_aibhacaicccd", mirror, ok, 781420720 sectors, data@ 0
/dev/sda: sil, "sil_aibhacaicccd", mirror, ok, 781420720 sectors, data@ 0

dmraid -s
*** Set
name   : sil_aibhacaicccd
size   : 781420720
stride : 0
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0

dmraid -dsactive
DEBUG: _find_set: searching sil_aibhacaicccd
DEBUG: _find_set: not found sil_aibhacaicccd
DEBUG: _find_set: searching sil_aibhacaicccd
DEBUG: _find_set: not found sil_aibhacaicccd
DEBUG: _find_set: searching sil_aibhacaicccd
DEBUG: _find_set: found sil_aibhacaicccd
DEBUG: _find_set: searching sil_aibhacaicccd
DEBUG: _find_set: found sil_aibhacaicccd
DEBUG: checking sil device "/dev/sda"
DEBUG: checking sil device "/dev/sdb"
DEBUG: set status of set "sil_aibhacaicccd" to 16
DEBUG: freeing devices of RAID set "sil_aibhacaicccd"
DEBUG: freeing device "sil_aibhacaicccd", path "/dev/sda"
DEBUG: freeing device "sil_aibhacaicccd", path "/dev/sdb"

dmraid -dsinactive
DEBUG: _find_set: searching sil_aibhacaicccd
DEBUG: _find_set: not found sil_aibhacaicccd
DEBUG: _find_set: searching sil_aibhacaicccd
DEBUG: _find_set: not found sil_aibhacaicccd
DEBUG: _find_set: searching sil_aibhacaicccd
DEBUG: _find_set: found sil_aibhacaicccd
DEBUG: _find_set: searching sil_aibhacaicccd
DEBUG: _find_set: found sil_aibhacaicccd
DEBUG: checking sil device "/dev/sda"
DEBUG: checking sil device "/dev/sdb"
DEBUG: set status of set "sil_aibhacaicccd" to 16
*** Set
name   : sil_aibhacaicccd
size   : 781420720
stride : 0
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0
DEBUG: freeing devices of RAID set "sil_aibhacaicccd"
DEBUG: freeing device "sil_aibhacaicccd", path "/dev/sda"
DEBUG: freeing device "sil_aibhacaicccd", path "/dev/sdb"

l /dev/dm-*
brw-rw---- 1 root disk 253, 0 2008-12-13 04:44 /dev/dm-0
brw-rw---- 1 root disk 253, 1 2008-12-13 04:44 /dev/dm-1
brw-rw---- 1 root disk 253, 2 2008-12-13 04:44 /dev/dm-2
brw-rw---- 1 root disk 253, 3 2008-12-13 04:45 /dev/dm-3

l /dev/mapper
total 0
drwxr-xr-x  2 root root     140 2008-12-13 04:45 ./
drwxr-xr-x 18 root root    4480 2008-12-13 04:46 ../
crw-rw----  1 root root  10, 63 2008-12-13 04:44 control
brw-rw----  1 root disk 253,  3 2008-12-13 04:45 VGforMyth-video
brw-rw----  1 root disk 253,  0 2008-12-13 04:45 VolGroup00-LogVol00
brw-rw----  1 root disk 253,  2 2008-12-13 04:44 VolGroup00-LogVol01
brw-rw----  1 root disk 253,  1 2008-12-13 04:45 VolGroup00-LogVol02

l /dev/VolGroup00/
total 0
drwx------  2 root root  100 2008-12-13 04:44 ./
drwxr-xr-x 18 root root 4480 2008-12-13 04:46 ../
lrwxrwxrwx  1 root root   31 2008-12-13 04:44 LogVol00 -> /dev/mapper/VolGroup00-LogVol00
lrwxrwxrwx  1 root root   31 2008-12-13 04:44 LogVol01 -> /dev/mapper/VolGroup00-LogVol01
lrwxrwxrwx  1 root root   31 2008-12-13 04:44 LogVol02 -> /dev/mapper/VolGroup00-LogVol02


With kernel-PAE-2.6.27.5-117.fc10.i686 booted root=/dev/VolGroup00/LogVol00
l /boot
total 17430
drwxr-xr-x  5 root root    1024 2008-12-10 23:36 ./
drwxr-xr-x 24 root root    4096 2008-12-13 04:45 ../
-rw-r--r--  1 root root   90977 2008-11-18 12:17 config-2.6.27.5-117.fc10.i686.PAE
-rw-r--r--  1 root root   91017 2008-12-01 22:40 config-2.6.27.7-134.fc10.i686.PAE
drwxr-xr-x  3 root root    1024 2008-11-29 13:42 efi/
drwxr-xr-x  2 root root    1024 2008-12-11 01:23 grub/
-rw-------  1 root root 3846631 2008-11-29 13:44 initrd-2.6.27.5-117.fc10.i686.PAE.img
-rw-r--r--  1 root root 2247043 2008-11-29 21:18 initrd-2.6.27.5-117.fc10.i686.PAEkdump.img
-rw-------  1 root root 3891124 2008-12-10 23:36 initrd-2.6.27.7-134.fc10.i686.PAE.img
drwx------  2 root root   12288 2008-11-29 13:30 lost+found/
-rw-r--r--  1 root root  116496 2008-11-17 22:57 memtest86+-2.10
-rw-r--r--  1 root root 1106709 2008-11-18 12:17 System.map-2.6.27.5-117.fc10.i686.PAE
-rw-r--r--  1 root root 1106865 2008-12-01 22:40 System.map-2.6.27.7-134.fc10.i686.PAE
-rwxr-xr-x  1 root root 2615056 2008-11-18 12:17 vmlinuz-2.6.27.5-117.fc10.i686.PAE*
-rwxr-xr-x  1 root root 2614992 2008-12-01 22:40 vmlinuz-2.6.27.7-134.fc10.i686.PAE*

Comment 1 Heinz Mauelshagen 2008-12-15 15:12:56 UTC
This looks like either a bogus initrd (please attach yours for analysis) or a known nash flaw preventing the RAID set from being activated.

Comment 2 John Griffiths 2008-12-15 15:46:39 UTC
Created attachment 326982 [details]
initrd-2.6.27.5-117.fc10.i686.PAE.img

Comment 3 John Griffiths 2008-12-15 15:48:06 UTC
Created attachment 326983 [details]
initrd-2.6.27.5-117.fc10.i686.PAEkdump.img

Comment 4 John Griffiths 2008-12-15 15:49:49 UTC
Created attachment 326984 [details]
initrd-2.6.27.7-134.fc10.i686.PAE.img

Comment 5 John Griffiths 2008-12-15 16:04:22 UTC
I attached three files: two initrd images, one for the older kernel (which boots from /dev/sda) and one for the newer kernel (which tries to boot from /dev/sdb and fails), plus a kdump initrd for the older kernel, which does boot.

Comment 6 Milan Broz 2008-12-15 16:09:26 UTC
The dmraid call is missing completely from the -134 initrd.img...

initrd-2.6.27.5-117.fc10.i686.PAE.img:
modprobe -q sata_sil
echo Waiting for driver initialization.
stabilized --hash --interval 250 /proc/scsi/scsi
echo Making device-mapper control node
mkdmnod
mkblkdevs
rmparts sdc
rmparts sdb
dm create sil_aibhacaicccd 0 781420720 mirror core 2 131072 nosync 2 8:16 0 8:32 0 1 handle_errors
dm partadd sil_aibhacaicccd
echo Scanning logical volumes
lvm vgscan --ignorelockingfailure
echo Activating logical volumes
lvm vgchange -ay --ignorelockingfailure  VolGroup00

initrd-2.6.27.7-134.fc10.i686.PAE.img:
modprobe -q sata_sil
echo Waiting for driver initialization.
stabilized --hash --interval 250 /proc/scsi/scsi
echo Making device-mapper control node
mkdmnod
mkblkdevs
echo Scanning logical volumes
lvm vgscan --ignorelockingfailure
echo Activating logical volumes
lvm vgchange -ay --ignorelockingfailure  VolGroup00
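
The difference between the two listings can be checked on any initrd: Fedora 10 initrds are gzip-compressed cpio archives, and the activation commands live in the embedded init script. Below is a minimal sketch of such a check; the here-doc stands in for an extracted init script (a real image would first be unpacked with something like `zcat initrd.img | cpio -idmv`), and the `/tmp/init-134` path is illustrative.

```shell
# Stand-in for an init script extracted from the -134 initrd (hypothetical
# path; a real init script would come from unpacking the image itself).
cat > /tmp/init-134 <<'EOF'
mkdmnod
mkblkdevs
lvm vgscan --ignorelockingfailure
lvm vgchange -ay --ignorelockingfailure VolGroup00
EOF

# Both the old-style "dm create" and the new-style "dmraid -ay" count as a
# RAID-set activation step; the -134 init above has neither.
if grep -qE '^(dm create|dmraid -ay)' /tmp/init-134; then
    echo "raid activation present"
else
    echo "raid activation MISSING"
fi
```

Run against the -117 init script (which contains the `dm create` line), the same check would report the activation step as present.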

Comment 7 Heinz Mauelshagen 2008-12-15 16:39:25 UTC
Milan,

the init file you listed still has the old-style "dm create" calls instead of "dmraid -ay ...", but it should work.

John's 1st dump (comment #2) has "dm create ..." and *should* work, whereas
  the 2nd one (comment #3) has neither dm nor dmraid and will fail.

The 3rd one (comment #4): same issue as with the 2nd initrd.

mkinitrd seems to be failing to create a proper initrd.

Changing component to mkinitrd.
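
For reference, a new-style init of the kind Heinz mentions would replace the `dm create ...`/`dm partadd ...` pair with a single dmraid invocation. A sketch of what that fragment might look like, in the same nash syntax as the listings above (illustrative only, not taken from a fixed initrd):

```
echo Activating dmraid sets
dmraid -ay sil_aibhacaicccd
dm partadd sil_aibhacaicccd
```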

Comment 8 Peter Jones 2008-12-15 16:49:57 UTC
What versions of mkinitrd were used for creating the two initrds?

Comment 9 John Griffiths 2008-12-15 17:29:22 UTC
The installed version is mkinitrd-6.0.71-2.fc10.i386.

Comment 10 John Griffiths 2008-12-20 21:24:05 UTC
Anything going on with this?

I am really in a quandary. I moved from Fedora 8 (stable, reliable, tried and true) to Fedora 9, which was, to be kind, premature in my opinion, and then to Fedora 10 in the hope that it had been debugged a bit better than F9 before being released. I now have real doubts.

I know Fedora is free and the phrase, "you get what you pay for" often is true, but Fedora has always been a good distribution in my opinion until 9 and I am getting a bad feeling about 10. Two new versions of display managers at the same time, Gnome and KDE, and neither is really ready. Yes, I know this is a development environment. I used to promote Fedora to my friends. I don't do that now.

Should I go back to Fedora 8? I know it is end of life, but it works.

Comment 11 John Griffiths 2009-01-02 17:07:53 UTC
Thanks to all for their hard work, but I gave up on dmraid. I now have a spare 400 GB drive unplugged and unpowered.

It seems dmraid and LVM should be combined into a single project rather than kept separate. Then they probably would not seem to be at cross purposes.

I was trying to have a raid backup of my entire boot drive, including the grub and boot areas. I could never get that to work no matter what. I tried at least 7 fresh installations with various combinations: raid and lvm, raid only, etc. I finally just gave up on raid. I guess I'll have to buy an Adaptec controller and use hardware raid.

Anyway, I can't do any testing on this. I had to get a system that was usable.

LVM has been stable for a while now. As for dmraid ... I hope it gets to where dmraid and LVM work together.

One other point. As is often the case, documentation for dmraid is sparse. The man page is about all there is, and it does not really cut it for someone just getting started. It seems a reasonable reference, but Fedora needs an up-to-date, detailed HowTo that covers interactions with LVM.

Repeating myself: I think the LVM and dmraid projects should combine to work synergistically.

Hope all had a Merry Christmas and a Happy New Year and that the new year brings joy to you all.

Comment 12 Hans de Goede 2009-02-03 10:56:40 UTC
(In reply to comment #11)
> Thanks to all for their hard work, but I gave up on dmraid. Now have a spare
> 400 G drive unplugged and un-powered.
> 

John,

I'm currently working on fixing various dmraid-related issues in mkinitrd. I assume from the above comment that you are not interested in testing those fixes (since you've removed the disk and all); if you are, take a look at bug 476818, which I'm using as a tracker for all F-10 dmraid issues.

*** This bug has been marked as a duplicate of bug 476818 ***