Red Hat Bugzilla – Bug 470540
SATA RAID1 not detected during installation (ESB2,LSI firmware)
Last modified: 2009-06-20 04:04:17 EDT
Description of problem:
If you have a SATA RAID1 on Intel hardware with ESB2 and LSI firmware,
the RAID disk is not detected:
Error: device /dev/mapper/ddf1_..... not found.
Hardware used in this test: FSC rx200-s4
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Configure RAID1 in the SATA RAID controller (ESB2 with LSI firmware)
2. Start installation
After entering the root password, the error "device /dev/mapper/ddf1_..... not found" appears.
No problem with RAID0 on the same system.
No problem with a Fedora 9 installation on the same system and configuration.
The Fedora 10 installation does not work: instead of the RAID1, the two
disks sda and sdb are seen.
The statement that RAID0 has no problem is not really true:
after the first boot the system comes up with /dev/sda instead of the mapped device.
This might have a similar cause to incident 460996.
Please append what "dmraid -b ; dmraid -r ; dmraid -s ; dmraid -tay" shows in the failure cases, thanks.
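If it helps, the four diagnostics can be captured in one file for attaching to the bug; a small hedged helper (assumes dmraid is installed and you are root on the failing system, e.g. in tty2 of the installer; the output path is just an example):

```shell
# Run the requested dmraid diagnostics and collect all output in one file
# so it can be attached to the bug. Requires dmraid and root on the
# affected system; the report path is arbitrary.
out=/tmp/dmraid-report.txt
: > "$out"                                 # truncate/create the report file
for opt in -b -r -s -tay; do
    echo "== dmraid $opt ==" >> "$out"
    dmraid "$opt" >> "$out" 2>&1 || true   # keep going even if one call fails
done
cat "$out"
```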
Created attachment 323456 [details]
requested output from dmraid
The attachment to comment #3 shows dmraid detecting the mirror.
Please analyze for any anaconda/python-pyblock flaw.
1. Did you execute these commands (from comment #3) in a running system, or in the tty2 of the anaconda installer?
2. Have you tested with snap #3?
I have an isw RAID1 on my machine that was not detected before snap3. Once I made sure that I was installing from snap3, I was able to see the mirror. I don't think there is actually anything wrong with anaconda/pyblock, as the changes that went into dmraid (AFAICS) did not leak into the API.
What I suggest is that you test with snap3 and, if you still get the same behavior, run the commands that Heinz described in tty2 and post your findings :). Also run `dmraid -ay`; that's the test that I used, but I assume that whatever Heinz posted is good enough :)
FYI: tty2 is the second virtual terminal (Alt-F2). There should be a shell there.
Thank you Joel -
You could do me a favor: could you try dmraid + GPT
and tell me your experience?
Have a nice weekend,
I created a gpt partition table on the raid set. Rebooted and tried an install. Anaconda correctly detected the dmraid and gave me the option to use it for installation.
As already suggested by Joel, please retest this with Snapshot 3; we believe it is fixed there.
Winfrid, did Snapshot 3 fix the RAID detection problem for you? Your Comment #10 in 470543 implies that it does, but I don't want to close this out until I hear from you.
You are right that part 1 of the installation works with RHEL 5.3 -
but what does it help when afterwards firstboot sees 2 separate disks instead of a RAID1? I think we can only close this incident when 460996 is also fixed.
(In reply to comment #10)
> Hello Denise,
> You are right, that part1 of the installation works with rhel 5.3 -
> but what does it help, when afterwards the firstboot sees 2 separate disks
> instead of a RAID1 ?? I think we only can close this incident when also 460996
> is fixed.
We've managed to track down the cause of the system seeing 2 separate disks after installation to mkinitrd (nash). We've done a new build of mkinitrd / nash: 184.108.40.206-41, which we believe fixes this (it does on our test systems).
The new nash-220.127.116.11-41 will be in RHEL 5.3 Snapshot 5, which should become available for testing next Monday.
Please test this with snapshot5 when available and let us know how it goes. Thanks for your patience.
Winfrid, you should not still be seeing this problem as of RHEL 5.3 Snapshot 5. We believe the change described in Comment 11 fixed it, at least it did for the other 4 reports of this problem verified by partners, and in our own testing.
I do not believe the fix has made it upstream into Fedora yet, so your BZ 460996 is still open. We are focused on 5.3 at the moment, and will get to Fedora once 5.3 is stabilized.
To reduce BZ confusion I am closing this as a dup of 471689. If you experience the same problem again, please feel free to reopen the BZ. If you encounter a different problem, I'd appreciate it if you could open a new BZ to make it clear that it is a new problem.
*** This bug has been marked as a duplicate of bug 471689 ***
I have not had time to do a new installation, but I have updated a system.
At least with this kernel I do not see any RAID.
Booting the system with 92.0.13 (the current kernel of U2), I see
/dev/mapper/pdc_......., but the strange thing is that the mounted partitions
show the device /dev/sda.
Is there any way to get my RAID configuration repaired,
or should I forget this environment?
Winfrid, I'm afraid that once the system has been booted with a broken initrd, the raid set is busted; you will need to re-install. Please re-open if a misinstall (after recreating the raid set in the BIOS) fails.
*** This bug has been marked as a duplicate of bug 471689 ***
Winfrid, typing seems difficult today; instead of "misinstall" I of course meant re-install.
(In reply to comment #16)
> Winfrid, I'm afraid that once the system has been booted with a broken initrd
> once, the raid set is busted, you will need to re-install, please re-open
> if a misinstall (after recreating the raid set in the BIOS) fails.
> *** This bug has been marked as a duplicate of 471689 ***
My reinstallation on a RAID0 was successful. There is just one cosmetic problem,
which is already solved in Fedora 10 [Bug 470628]: during boot I get the following message:
Boot message: Could not detect stabilization, waiting 10 seconds.
Just a remark: I had a similar problem on Fedora 8 (seeing /dev/sda instead of the mapped device); there the problem went away with a new kernel.
(In reply to comment #18)
> Hello Hans,
> My reinstallation on a RAID0 was successful, there is just one cosmetic problem,
> which in fedora 10 [Bug 470628] is already solved, I get during boot the
> following message:
> Boot message: Could not detect stabilization, waiting 10 seconds.
> Just a remark, I had a similar problem on fedora 8 ( seeing /dev/sda instead of
> the mapped device ), here the problem was gone with a new kernel
Unfortunately this 10 second boot delay is caused by a bugfix for systems where the ramdisk would not wait long enough for disk scanning to complete.
The problem is that in some cases, where the system has finished scanning before we start waiting, we never detect stabilization of the found disks (because they were all already found) and thus wait the full 10 seconds to be sure.
Due to kernel differences we cannot use the Fedora fix in RHEL-5. We've prepared a release note to explain the cause of the 10 second delay.
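The wait logic described above can be sketched roughly. This is a hedged reconstruction, not nash's actual source: "stabilization" here means the disk set changed and then went quiet, so if all disks were found before the loop starts, no change is ever observed and the loop always runs to the limit. The disk list is simulated with a temporary directory so the effect is visible without real hardware:

```shell
# Hedged sketch (not nash's real code) of the stabilization wait.
disks=$(mktemp -d)                # stand-in for /sys/block
touch "$disks/sda" "$disks/sdb"   # disks already present: scan finished early

max_wait=3                        # nash waits 10 seconds; shortened for the demo
waited=0
seen_change=0
prev=$(ls "$disks" | sort)
while [ "$waited" -lt "$max_wait" ]; do
    sleep 1
    waited=$((waited + 1))
    cur=$(ls "$disks" | sort)
    if [ "$cur" != "$prev" ]; then
        seen_change=1             # scanning is still adding disks
        prev="$cur"
    elif [ "$seen_change" -eq 1 ]; then
        msg="stabilization detected after ${waited}s"
        break
    fi
done
if [ "$seen_change" -eq 0 ]; then
    # no change was ever seen, so we cannot tell that scanning is done
    msg="could not detect stabilization, waited ${max_wait}s"
fi
echo "$msg"
rm -rf "$disks"
```

Since the simulated disk set never changes, the loop runs the full `max_wait`, mirroring the 10-second delay users see at boot.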
Just for your info:
I succeeded in getting my dmraid back on my updated RHEL 5 systems.
The good thing was that I still had a kernel where my dmraid basically worked:
after booting the system with vmlinuz-2.6.18-8.1.15.el5 and replacing
sda with mapper/pdc_bjfeeibeebp in
/etc/mtab and /etc/blkid/blkid.tab, my dmraid config looked good.
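The hand edit described here can be sketched as a small script. A hedged sketch, assuming the mapper name pdc_bjfeeibeebp from this report (check yours with `dmraid -s`), demonstrated on a throwaway copy rather than the live files:

```shell
# Replace bare /dev/sda references with the dmraid-mapped device in
# mtab-style files. The mapper name below is the one from this report;
# substitute your own. Always keep a backup (-i.bak) and test on a copy.
fix_devrefs() {
    sed -i.bak 's|/dev/sda|/dev/mapper/pdc_bjfeeibeebp|g' "$1"
}

# Demo on a throwaway copy of an mtab-like line, not on the live file.
f=$(mktemp)
printf '/dev/sda1 / ext3 rw 0 0\n' > "$f"
fix_devrefs "$f"
cat "$f"
```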
After this I installed the kernel from RHEL 5.3 Snap 6 and updated
the system with rpm -Fvh .
All updates worked fine, except for a warning from dmraid:
[root@rx220a Server]# rpm --force -Uvh dmraid-1.0.0.rc13-19.el5.x86_64.rpm
warning: dmraid-1.0.0.rc13-19.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 897da07a
Preparing... ########################################### [100%]
1:dmraid ########################################### [100%]
error: %post(dmraid-1.0.0.rc13-19.el5.x86_64) scriptlet failed, exit status 1
Is this something I should care about?
PS: I wish you a wonderful Xmas time.
Tomorrow is my last working day in 2008; I will be back on Jan. 7, 2009.