Bug 911982 - kernel 3.8.3 lvm and dmraid
Summary: kernel 3.8.3 lvm and dmraid
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: dmraid
Version: 18
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: LVM and device-mapper development team
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-02-17 03:56 UTC by Franck C.
Modified: 2014-02-05 19:13 UTC (History)
11 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2014-02-05 19:13:49 UTC
Type: Bug
Embargoed:


Attachments
boot 3.6.6 (8.30 KB, text/plain)
2013-03-05 14:38 UTC, Franck C.
lvm pvscan 3.6.6 (19.67 KB, text/plain)
2013-03-05 14:39 UTC, Franck C.
message 3.6.6 (161.88 KB, text/plain)
2013-03-05 14:39 UTC, Franck C.
boot 3.8.1 (8.30 KB, text/plain)
2013-03-05 14:53 UTC, Franck C.
lvm pvscan 3.8.1 (19.83 KB, text/plain)
2013-03-05 14:54 UTC, Franck C.
message 3.8.1 (297.62 KB, text/plain)
2013-03-05 14:55 UTC, Franck C.
server with SIL SATA card working (17.88 KB, text/plain)
2013-03-05 15:52 UTC, Franck C.
server with builtin intel 681ESB/682ESB sata raid faulty (23.64 KB, text/plain)
2013-03-05 15:53 UTC, Franck C.

Description Franck C. 2013-02-17 03:56:08 UTC
Description of problem:
kernel 3.7.7 doesn't recognize LVM partitions on dmraid

Version-Release number of selected component (if applicable):
3.7.7

How reproducible:
reboot

Steps to Reproduce:
1. reboot with updated kernel
2.
3.
  
Actual results:
dmraid and LVM volumes are not mounted

Expected results:
dmraid and LVM volumes are mounted as before

Additional info:

Intel ESB built-in SATA RAID Technology II, using ddf1_xxxx metadata

Comment 1 Bill Nottingham 2013-02-20 11:02:22 UTC
What was the last kernel that worked for you?

Comment 2 Peter Rajnoha 2013-02-20 12:00:04 UTC
...are there any error/warning messages logged during the boot? Also see /run/log/messages or journalctl...

Comment 3 Heinz Mauelshagen 2013-02-20 13:38:49 UTC
In general, DDF Raid sets should be activated via mdadm in F18.

Comment 4 Franck C. 2013-02-20 14:13:40 UTC
The last kernel that works for me is 3.6.6.
As soon as I updated to 3.6.7, LVM on dmraid failed at boot; more precisely, it booted from the /dev/sda partition table, not from ddf1_xxx. I spent three weeks trying to find out what changed in the kernel, and even the dmraid developers I asked said nothing had changed.

The only errors I can see in the logs are that the LVM volumes try to mount but the system says
"device lookup error". I suspect this message is due to the LVM partitions already being mounted from /dev/sda.

Heinz, do you mean I must disable dmraid in the kernel?

I suspect systemd's behaviour, since at boot multiple services (udev, systemd, dracut, etc.) want to mount and unmount partitions.

There is something illogical in all these procedures. Mounting LVM on fakeraid should be simple; why all this complexity, with nothing working in the end?

thanks

Comment 5 Franck C. 2013-02-20 14:17:50 UTC
I think the simplest option is for me to give you access to my server.
As a Linux expert, this is the first time in my experience that I am facing such an unsolvable problem. It will be much easier for you to understand what is happening there.

Comment 6 Zdenek Kabelac 2013-03-05 09:36:00 UTC
Please attach error logs you could collect about this issue.
(/var/log/messages, kernel output, lvm -vvvv)

Comment 7 Franck C. 2013-03-05 14:38:27 UTC
Created attachment 705485 [details]
boot 3.6.6

Comment 8 Franck C. 2013-03-05 14:39:14 UTC
Created attachment 705487 [details]
lvm pvscan 3.6.6

Comment 9 Franck C. 2013-03-05 14:39:52 UTC
Created attachment 705488 [details]
message 3.6.6

Comment 10 Franck C. 2013-03-05 14:53:52 UTC
Created attachment 705492 [details]
boot 3.8.1

Comment 11 Franck C. 2013-03-05 14:54:21 UTC
Created attachment 705493 [details]
lvm pvscan 3.8.1

Comment 12 Franck C. 2013-03-05 14:55:09 UTC
Created attachment 705494 [details]
message 3.8.1

Comment 13 Franck C. 2013-03-05 14:57:24 UTC
I just updated F18. With the 3.8.1 kernel the behavior is now different:
at boot it fails into dracut, saying that none of the LVM volumes exist.
I had to run dmraid -ay and mount -a, then exit dracut, to continue booting.

Comment 14 Zdenek Kabelac 2013-03-05 15:34:45 UTC
Have you tried using mdraid instead of dmraid?
It seems your DDF array is mapped via /dev/mapper/ddf_

Comment 15 Franck C. 2013-03-05 15:52:13 UTC
How do I control that at boot? Shouldn't dracut and systemd be doing that?
Tell me how to replace dmraid with mdraid in systemd/udevd.

I attached the lshw output of a working server with a SIL card and of a faulty server with the built-in Intel 681ESB/682ESB SATA RAID.

Comment 16 Franck C. 2013-03-05 15:52:47 UTC
Created attachment 705529 [details]
server with SIL SATA card working

Comment 17 Franck C. 2013-03-05 15:53:32 UTC
Created attachment 705530 [details]
server with builtin intel 681ESB/682ESB sata raid faulty

Comment 18 Franck C. 2013-03-06 17:33:28 UTC
maybe some kernel parameters would help ?

Comment 19 Franck C. 2013-03-06 21:58:45 UTC
Interesting article:
http://kevinmccaughey.org/?p=182
I'm trying it now.

Comment 20 Franck C. 2013-03-06 23:37:23 UTC
It doesn't seem to work for me, as I use LVM partitions.
I just updated another server to F18 with an Adaptec 1420SA SATA II card,
and this time it fails into dracut with every kernel, and I have to run dmraid -ay each time.

Comment 21 Zdenek Kabelac 2013-03-07 10:40:08 UTC
The LVM partitions are on top of the md/dmraid device, so it should not matter:
you simply activate them after you have activated the lower-level RAID device.

What does your mdadm configuration look like?

Any mdadm-related errors?

You must have mdraid running and then recreate your initramfs, so that dracut knows about mdraid.
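(For reference, a hedged sketch of what that could look like on F18. The mdadm and dracut commands are real tools with these options, but whether this sequence works for a given DDF set is an assumption, not something confirmed in this report.)

```shell
# Sketch only -- assumes the BIOS RAID set is one that mdadm's ddf support can assemble.
mdadm --assemble --scan                      # try to bring up the DDF container and member array
mdadm --examine --scan >> /etc/mdadm.conf    # record the arrays so they can assemble at boot
dracut --force                               # rebuild the initramfs so its mdraid module sees them
```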

Comment 22 Franck C. 2013-03-09 13:20:13 UTC
mdadm has no arrays configured, since I don't use software RAID but RAID1 from the Intel ICH, Adaptec and sil24 controllers.

As a programmer and researcher, I have absolutely no idea how to remove dmraid and use mdadm for so-called fakeraid.

I only expected to update the kernels on all my nodes as usual, but this time everything failed.

It would be very useful if you could provide a tutorial on how to set up mdadm instead of dmraid and configure it for fakeraid RAID1.

Comment 23 Franck C. 2013-03-09 13:21:16 UTC
The one thing that is absolutely certain now is that I can't reformat my disks again.

Comment 24 Heinz Mauelshagen 2013-03-11 11:27:36 UTC
(In reply to comment #18)
> maybe some kernel parameters would help ?

Have you tried the "nodmraid" kernel option yet?
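(For anyone trying this: the option goes on the kernel command line. A sketch assuming the stock F18 grub2 layout; the paths and the existing command-line contents are assumptions, not taken from this report.)

```shell
# Append "nodmraid" to the kernel command line in /etc/default/grub (sketch).
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 nodmraid"/' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg   # regenerate grub.cfg so the change persists
```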

Comment 25 Zdenek Kabelac 2013-03-11 11:32:30 UTC
I guess it's the same case as Bug 916231.
DDF is not yet supported by dracut.

Comment 26 Franck C. 2013-03-11 14:30:50 UTC
> Have you tried the "nodmraid" kernel option yet
Yes, but the LVM partitions don't exist after that.

> DDF is not yet supported by dracut
Then how do you explain that I was able to mount it up to kernel 3.6.6?

What is the best solution at this point? I have 10 servers with DDF.

Thanks

Comment 27 Franck C. 2013-03-11 14:33:14 UTC
BTW, I can't access Bug 916231.

Comment 28 Franck C. 2013-03-11 14:42:30 UTC
Maybe this patch would be relevant?
https://bugzilla.redhat.com/show_bug.cgi?id=862085

Comment 29 Franck C. 2013-03-11 23:45:24 UTC
I just installed F18 on another server that uses ASR dmraid,
and now at boot dracut fails with:
ERROR dos partition address past end of raid device

However, I am able to mount all LVM partitions if I boot from the live CD.

Comment 30 Franck C. 2013-03-12 03:39:08 UTC
ASR_ is not recognized at boot, even if I use nodmraid.
The thing I don't understand is that DDF and ASR are recognized if I boot from an F18 live CD, so why not when booting from the hard disk?

Comment 31 Franck C. 2013-03-12 14:13:14 UTC
It seems that even with nodmraid, systemd starts the fakeraid service. Moreover, it starts
it after the lvm and udev services, which is not logical.
I think the problem comes from systemd.
Do you think it can be resolved? I now have 5 servers stuck with this.
I can give you full access to my servers if that makes things easier for you.

Thanks

Comment 32 Franck C. 2013-03-12 14:42:37 UTC
FYI, there is absolutely no problem with a standard Fedora 17 install.

Comment 33 Franck C. 2013-03-15 03:59:35 UTC
Please tell me the temporary solution that will let me continue working on my servers and reboot without pain. Thanks.

Comment 36 Harald Hoyer 2013-03-15 14:07:31 UTC
Does it work, if you remove "rd.dm.uuid=ddf1_4c5349202020202080862682000000004711471100001450" from the kernel command line?
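(A hedged sketch of making that removal persistent, assuming the argument lives in /etc/default/grub on a stock F18 install; the sed pattern is mine, not from this report, and the change can also be tested once by editing the line at the grub boot prompt.)

```shell
# Strip the rd.dm.uuid=ddf1_* argument from the kernel command line (sketch).
sed -i 's/ rd\.dm\.uuid=ddf1_[^ ]*//' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg   # regenerate grub.cfg with the trimmed command line
```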

Comment 37 Franck C. 2013-03-15 16:07:20 UTC
It works with kernel 3.6.6, but not with more recent kernels, up to 3.8.2.

Comment 38 Franck C. 2013-03-15 21:40:26 UTC
I found an interesting article,
http://forums.gentoo.org/viewtopic-t-888520.html
where mdadm can replace dmraid. BUT:
if I follow the instructions (domdadm nodmraid on the kernel command line at F18-CFXE live CD boot),
mdadm --detail-platform doesn't detect any hardware RAID (neither ddf1 nor asr).

Comment 39 Franck C. 2013-03-18 02:29:46 UTC
Good news.
ASR_xxx firmware RAID works again with kernel 3.8.3.

DDF_xxx firmware RAID still fails into dracut, and I need to
run dmraid -ay and mount -a manually, then exit, to mount everything.

Comment 40 Franck C. 2013-03-18 05:40:20 UTC
The built-in Intel ICH9 host RAID using DDF1 also seems to work with kernel 3.8.3.
Now I can see mdadm recognizing the DDF container as /dev/md127 and exposing the fakeraid
array as /dev/md126.

As for ASR, it works again, but only with dmraid.

Comment 41 Franck C. 2013-03-18 05:41:33 UTC
Concerning Adaptec PCI cards (like the 1420SA) with kernel 3.8.3 using DDF: it still fails into dracut, and I needed to run dmraid -ay, mount -a and exit to boot correctly.

Comment 42 Franck C. 2013-03-18 16:56:22 UTC
Problem with kernel 3.8.3 and the Intel built-in host RAID:

Odd behavior of mdadm, which wants to rebuild the RAID1 array (/dev/md126).
But why, if the Intel chipset does it automatically?

The result is that when I reboot, the firmware RAID array is destroyed,
so every time I have to go into the Intel BIOS and recreate the array.

thanks

Comment 43 Franck C. 2013-03-20 03:06:04 UTC
Update with kernel 3.8.3-203:

[root@node142 ~]# mdadm --detail-platform
mdadm: imsm capabilities not found for controller: /sys/devices/pci0000:00/0000:00:1f.2 (type SATA)
 I/O Controller : /sys/devices/pci0000:00/0000:00:1f.2 (SATA)

However, /dev/md126, /dev/md126p1 and /dev/md127 are created:

[root@node142 ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sda[1] sdb[0]
      487304192 blocks super external:/md127/0 [2/2] [UU]
      [==>..................]  resync = 10.1% (49514112/487304192) finish=99.6min speed=73234K/sec

md127 : inactive sda[1](S) sdb[0](S)
      2164784 blocks super external:ddf

unused devices: <none>


But when I reboot, the RAID array has disappeared from the BIOS, so it is impossible to boot
into grub2.

Comment 44 Franck C. 2013-03-20 03:08:45 UTC
[root@node142 ~]# grub2-install /dev/md/ddf0
/usr/sbin/grub2-bios-setup: error: disk `mduuid/8c33f0c1dabf54690214f6fd6f5ae451' not found.

How do I correct this?

Comment 45 Franck C. 2013-03-20 14:57:13 UTC
I also tried compiling my own kernel 3.8.3, and at boot it fails into dracut.

Comment 46 Franck C. 2013-06-07 08:04:35 UTC
Hi,

I just updated my servers to kernel 3.9.4, and now the ASR_xxx dmraid sets work with LVM partitions, as do the SIL_xxx sets.
But DDF1_xxx still doesn't work when the boot and root partitions are on LVM.
However, DDF1_xxx works when the BIOS RAID used for the boot and/or root partition is not DDF1_xxx and (for example) only the home partition is on DDF1_xxx.

Comment 47 Fedora End Of Life 2013-12-21 11:28:15 UTC
This message is a reminder that Fedora 18 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 18. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '18'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 18's end of life.

Thank you for reporting this issue and we are sorry that we may not be
able to fix it before Fedora 18 is end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version prior to Fedora 18's end of life.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 48 Fedora End Of Life 2014-02-05 19:13:49 UTC
Fedora 18 changed to end-of-life (EOL) status on 2014-01-14. Fedora 18 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

