Description of problem:
Kernel 3.7.7 doesn't recognize LVM partitions on dmraid.

Version-Release number of selected component (if applicable):
3.7.7

How reproducible:
Every reboot.

Steps to Reproduce:
1. Reboot with the updated kernel.

Actual results:
dmraid and LVM volumes are not mounted.

Expected results:
dmraid and LVM volumes are mounted.

Additional info:
ESB built-in Intel SATA RAID Technology II using ddf1_xxxx.
What was the last kernel that worked for you?
...are there any error/warning messages logged during the boot? Also see /run/log/messages or journalctl...
In general, DDF Raid sets should be activated via mdadm in F18.
The last kernel that works for me is 3.6.6. As soon as I updated to 3.6.7, LVM on dmraid failed at boot; more precisely, it booted, but from the /dev/sda partition table, not from ddf1_xxx. I spent three weeks trying to find out what changed in the kernel and even asked the dmraid developers; they said nothing changed. The only error I can see in the logs is that the LVM volumes try to mount but the system says "device lookup error". I suspect this message is due to the fact that the LVM partitions are already mounted from /dev/sda. Heinz, do you mean I must disable dmraid in the kernel? I suspect the behaviour of systemd, since at boot services like udev, systemd, dracut, etc. want to mount and unmount partitions multiple times. There is something illogical in all these procedures. Mounting LVM on fakeraid should be simple; why all this, with nothing working in the end? Thanks
I think the simplest thing is for me to give you access to my server. Being a Linux expert, this is the first time in my experience that I'm facing such an unsolvable problem. It will be easier for you to understand what's happening there.
Please attach any error logs you can collect about this issue (/var/log/messages, kernel output, lvm -vvvv).
Created attachment 705485 [details] boot 3.6.6
Created attachment 705487 [details] lvm pvscan 3.6.6
Created attachment 705488 [details] message 3.6.6
Created attachment 705492 [details] boot 3.8.1
Created attachment 705493 [details] lvm pvscan 3.8.1
Created attachment 705494 [details] message 3.8.1
I just updated F18 to kernel 3.8.1, and now the behavior is different: at boot it fails into dracut, saying that none of the LVM volumes exist. I had to run dmraid -ay and mount -a, then exit dracut, to continue the boot.
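For anyone landing in the same dracut emergency shell, the manual workaround described above amounts to the following sequence (a sketch only; the lvm vgscan/vgchange steps are an assumption about what makes the volumes visible after activation, not something taken from this report):

```shell
# From the dracut emergency shell: activate the firmware RAID sets
# that the initramfs failed to bring up, then retry the mounts.
dmraid -ay            # activate all dmraid (fakeraid) sets
lvm vgscan            # rescan for LVM volume groups on the new devices
lvm vgchange -ay      # activate all logical volumes
mount -a              # mount everything listed in /etc/fstab
exit                  # leave the emergency shell and resume boot
```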
Have you tried using mdraid instead of dmraid? It seems your DDF array is mapped via /dev/mapper/ddf_
How do I control that at boot? Shouldn't dracut/systemd be doing that? Tell me how to replace dmraid with mdraid in systemd/udevd. I attached lshw output from a working server with a SIL card and from a faulty server with the built-in Intel 681ESB/682ESB SATA RAID.
Created attachment 705529 [details] server with SIL SATA card working
Created attachment 705530 [details] server with builtin intel 681ESB/682ESB sata raid faulty
maybe some kernel parameters would help ?
Interesting article: http://kevinmccaughey.org/?p=182 — I'm trying it now.
It doesn't seem to work for me, as I use LVM partitions. I just updated another server to F18 with an Adaptec 1420SA SATA II card, and this time it fails into dracut with any kernel; I have to run dmraid -ay every time.
LVM partitions are on top of the md/dmraid device, so it should not matter: you simply activate them after you activate the lower-level RAID device. What does your mdadm configuration look like? Any mdadm-related errors? You must have mdraid running and recreate your initramfs, so that dracut knows about mdraid.
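A rough sketch of the switch described here — assemble the arrays with mdadm, record them, and rebuild the initramfs so dracut uses mdraid. Paths are the Fedora defaults; whether mdadm can take over this particular DDF set is exactly what is in question, so treat this as an untested outline:

```shell
# Assemble any arrays mdadm can recognize (including DDF containers)
mdadm --assemble --scan

# Record the discovered arrays so they are assembled at boot
mdadm --examine --scan >> /etc/mdadm.conf

# Rebuild the initramfs for the running kernel so dracut
# picks up mdadm.conf and includes the mdraid module
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)
```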
mdadm has no arrays configured, as I don't use software RAID but RAID1 from the Intel ICH, Adaptec, and sil24 controllers. As a programmer/researcher, I have absolutely no idea how to remove dmraid and use mdadm for so-called fakeraid. I only expected to update the kernels on all my nodes as usual, but this time everything failed. It would be very useful if you could provide a tutorial on how to set up mdadm instead of dmraid and configure it for fakeraid RAID1.
The one thing that is absolutely sure now is that I can't reformat my disks again.
(In reply to comment #18)
> maybe some kernel parameters would help ?

Have you tried the "nodmraid" kernel option yet?
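For reference, besides editing the kernel line at the grub prompt for a one-off test, the option can be made persistent by appending it to GRUB_CMDLINE_LINUX in /etc/default/grub and regenerating grub.cfg. The sed below is a sketch (back up the file first; it does not check whether the option is already present):

```shell
# Append "nodmraid" to the default kernel command line
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 nodmraid"/' /etc/default/grub

# Regenerate the grub2 configuration (BIOS path; EFI systems differ)
grub2-mkconfig -o /boot/grub2/grub.cfg
```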
I guess it's the same case as Bug 916231. DDF is not yet supported by dracut.
> Have you tried the "nodmraid" kernel option yet

Yes, but the LVM partitions don't exist after that.

> DDF is not yet supported by dracut

Then how do you explain that I was able to mount it up to kernel 3.6.6? What is the best solution at this time? I have 10 servers with DDF. Thanks
By the way, I can't access Bug 916231.
Maybe this patch is relevant? https://bugzilla.redhat.com/show_bug.cgi?id=862085
I just installed F18 on another server that uses an ASR dmraid array, and now at boot dracut fails with:

ERROR dos partition address past end of raid device

However, I'm able to mount all LVM partitions if I boot from the live CD.
ASR_ is not recognized at boot, even if I use nodmraid. What I don't understand is that DDF and ASR are recognized if I boot with an F18 live CD, so why not when booting from the hard disk?
It seems that even with nodmraid, systemd starts the fakeraid service. Moreover, it starts it after the lvm and udev services, which is not logical. I think the problem is coming from systemd. Do you think it can be resolved? I now have 5 servers stuck with this. I can give you full access to my servers if that makes things easier for you. Thanks
FYI, there is absolutely no problem with a standard Fedora 17 install.
Please tell me the temporary solution that will allow me to continue working on my servers and reboot without pain. Thanks
Does it work, if you remove "rd.dm.uuid=ddf1_4c5349202020202080862682000000004711471100001450" from the kernel command line?
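To make that removal persistent rather than retyping the kernel line at the grub prompt each boot, something like the sed below could strip the argument from /etc/default/grub before regenerating grub.cfg (a sketch; verify the resulting line before rebooting):

```shell
# Drop the rd.dm.uuid=ddf1_... argument from the default kernel
# command line, then regenerate grub.cfg.
sed -i 's/ *rd\.dm\.uuid=ddf1_[0-9a-f]*//' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
```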
It works with kernel 3.6.6, but not with more recent kernels up to 3.8.2.
I found an interesting article, http://forums.gentoo.org/viewtopic-t-888520.html, where mdadm replaces dmraid. BUT, if I follow the instructions (domdadm nodmraid on the kernel command line of an F18-CFXE live CD boot), mdadm --detail-platform doesn't detect any hardware RAID (neither ddf1 nor asr).
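On the dracut side, there are also command-line switches (documented in dracut.cmdline(7)) for steering assembly one way or the other; a sketch of the two directions, untested on this DDF hardware:

```
# Prefer mdraid: disable dmraid assembly in the initramfs
rd.dm=0 rd.auto=1

# Prefer dmraid: disable mdraid assembly instead
rd.md=0
```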
Good news: ASR_xxx firmware RAID works again with kernel 3.8.3. DDF_xxx firmware RAID still fails into dracut, with a need to run dmraid -ay manually and mount -a, then exit, to mount everything.
The built-in Intel ICH9 RAID host using DDF1 also seems to work with kernel 3.8.3. Now I can see mdadm recognizing the DDF container as /dev/md127 and exposing the fakeraid array as /dev/md126. As for ASR, it works again, but only with dmraid.
Concerning the Adaptec PCI card (like the 1420SA) with kernel 3.8.3 using DDF: it still fails into dracut, and I needed to run dmraid -ay, mount -a, exit to boot correctly.
Problem with kernel 3.8.3 and the Intel built-in hostraid: odd behavior of mdadm, which wants to rebuild the RAID1 array (/dev/md126). But why, if the Intel chipset does it automatically? The result is that when I reboot, the firmware RAID array is destroyed, so every time I have to go into the Intel BIOS and recreate the array. Thanks
Update: kernel 3.8.3-203.

[root@node142 ~]# mdadm --detail-platform
mdadm: imsm capabilities not found for controller: /sys/devices/pci0000:00/0000:00:1f.2 (type SATA)
       I/O Controller : /sys/devices/pci0000:00/0000:00:1f.2 (SATA)

However, /dev/md126, /dev/md126p1, and /dev/md127 are created:

[root@node142 ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sda[1] sdb[0]
      487304192 blocks super external:/md127/0 [2/2] [UU]
      [==>..................]  resync = 10.1% (49514112/487304192) finish=99.6min speed=73234K/sec

md127 : inactive sda[1](S) sdb[0](S)
      2164784 blocks super external:ddf

unused devices: <none>

But when I reboot, the RAID array has disappeared from the BIOS, so it's impossible to boot into grub2.
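As a side note, the resync progress shown above can be watched without wading through the full /proc/mdstat output; the awk one-liner below is a small sketch that just scrapes the "resync = N%" field:

```shell
# Print the resync percentage for each array currently resyncing
awk -F'resync = ' '/resync =/ { split($2, a, " "); print a[1] }' /proc/mdstat
```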
[root@node142 ~]# grub2-install /dev/md/ddf0
/usr/sbin/grub2-bios-setup: error: disk `mduuid/8c33f0c1dabf54690214f6fd6f5ae451' not found.

How do I correct this?
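A common workaround for this class of grub2-install error on RAID1 (not verified on this DDF setup, and the device names are examples) is to install the bootloader onto each member disk individually rather than onto the md device, since the BIOS reads a raw disk at boot anyway:

```shell
# Install grub2 on both RAID1 members; because the mirror keeps
# /boot identical on both disks, either one can then boot.
grub2-install /dev/sda
grub2-install /dev/sdb
```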
I also tried compiling my own kernel 3.8.3, and at boot it fails into dracut.
Hi, I just updated my servers to kernel 3.9.4, and now the ASR_xxx dmraid arrays work with LVM partitions, as do the SIL_xxx ones. But DDF1_xxx still doesn't work when the boot and root partitions are on LVM. However, DDF1_xxx does work when the BIOS RAID holding the boot and/or root partition is not DDF1_xxx and (for example) only the home partition is on DDF1_xxx.
This message is a reminder that Fedora 18 is nearing its end of life. Approximately 4 (four) weeks from now, Fedora will stop maintaining and issuing updates for Fedora 18. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as WONTFIX if it remains open with a Fedora 'version' of '18'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version prior to Fedora 18's end of life.

Thank you for reporting this issue, and we are sorry that we may not be able to fix it before Fedora 18 reaches end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version prior to Fedora 18's end of life.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
Fedora 18 changed to end-of-life (EOL) status on 2014-01-14. Fedora 18 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result, we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release.

If you experience problems, please add a comment to this bug. Thank you for reporting this bug, and we are sorry it could not be fixed.