Description of problem:
Please see also #471689 - the same problem existed with the rhel5.3 snapshot. On an installed rhel5 system upgraded to rhel5.4 beta, the current kernels 2.6.18-155.el5 and 2.6.18-156.el5 fail to activate dmraid.

Version-Release number of selected component (if applicable):

How reproducible:
always

Steps to Reproduce:
1. Install rhel 5.4 on a fake raid (dmraid) or upgrade to 5.4 beta

Actual results:
Installation fails - no disks found, or the system comes up with /dev/sda... instead of /dev/mapper/...

Expected results:
Working RAID configuration

Additional info:
dmraid output from rhel5.4 beta (upgraded system):

dmraid -b
/dev/sda: 312581808 total, "4MT0G97A"
/dev/sdb: 312581808 total, "4MT0GC4Q"

[root@rx220a ~]# dmraid -s
*** Set
name   : pdc_bjfeeibeeb
size   : 312368896
stride : 256
type   : stripe
status : ok
subsets: 0
devs   : 1
spares : 0
*** Set
name   : pdc_cbbgheicjb
size   : 312368896
stride : 128
type   : stripe
status : ok
subsets: 0
devs   : 1
spares : 0

dmraid -s -c -c -c
pdc_bjfeeibeeb:312368896:256:stripe:ok:0:1:0
/dev/sda:pdc:pdc_bjfeeibeeb:stripe:ok:312368928:0
pdc_cbbgheicjb:312368896:128:stripe:ok:0:1:0
/dev/sdb:pdc:pdc_cbbgheicjb:stripe:ok:312368928:0

[root@rx220a ~]# dmraid -r
/dev/sda: pdc, "pdc_bjfeeibeeb", stripe, ok, 312368928 sectors, data@ 0
/dev/sdb: pdc, "pdc_cbbgheicjb", stripe, ok, 312368928 sectors, data@ 0

[root@rx220a ~]# dmraid -tay
pdc_bjfeeibeeb: 0 312368896 striped /dev/sda 0
pdc_cbbgheicjb: 0 312368896 striped /dev/sdb 0

[root@rx220a ~]# ls /dev/mapper
control

[root@rx220a ~]# dmraid -ay
RAID set "pdc_bjfeeibeeb" was not activated
RAID set "pdc_cbbgheicjb" was not activated
Same problem exists on new released kernel-2.6.18-128.1.16.el5
Same problem exists on new released kernel-2.6.18-157.el5
Same problem exists on new released kernel-2.6.18-128.2.1.el5

uname -a; dmraid -ay -v -v -v
Linux rx220a 2.6.18-128.2.1.el5 #1 SMP Wed Jul 8 11:54:47 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
WARN: locking /var/lock/dmraid/.lock
NOTICE: skipping removable device /dev/hda
NOTICE: /dev/sda: asr     discovering
NOTICE: /dev/sda: ddf1    discovering
NOTICE: /dev/sda: hpt37x  discovering
NOTICE: /dev/sda: hpt45x  discovering
NOTICE: /dev/sda: isw     discovering
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi     discovering
NOTICE: /dev/sda: nvidia  discovering
NOTICE: /dev/sda: pdc     discovering
NOTICE: /dev/sda: pdc metadata discovered
NOTICE: /dev/sda: sil     discovering
NOTICE: /dev/sda: via     discovering
NOTICE: /dev/sdb: asr     discovering
NOTICE: /dev/sdb: ddf1    discovering
NOTICE: /dev/sdb: hpt37x  discovering
NOTICE: /dev/sdb: hpt45x  discovering
NOTICE: /dev/sdb: isw     discovering
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi     discovering
NOTICE: /dev/sdb: nvidia  discovering
NOTICE: /dev/sdb: pdc     discovering
NOTICE: /dev/sdb: pdc metadata discovered
NOTICE: /dev/sdb: sil     discovering
NOTICE: /dev/sdb: via     discovering
NOTICE: added /dev/sda to RAID set "pdc_bjfeeibeeb"
NOTICE: added /dev/sdb to RAID set "pdc_cbbgheicjb"
RAID set "pdc_bjfeeibeeb" was not activated
RAID set "pdc_cbbgheicjb" was not activated
WARN: unlocking /var/lock/dmraid/.lock
You can also add rhel6-alpha-1 to the linux distributions in trouble: I verified the problem on a Fujitsu Primergy rx200 s4. In the case of rhel6 the installation on a RAID0 (1 disk) works, but the first boot fails: fsck for "/" is performed, but the device was /dev/sda5 instead of /dev/mapper/ddf1_.......p5, and fsck for /boot fails because the device was not found. dmraid commands have empty output. I suppose the reason is similar to #505562.
Same problem exists on new released kernel-2.6.18-159.el5
Same problem exists on new released kernel-2.6.18-160.el5
Just a remark - once a partition has been mounted without an activated dmraid, it will mount to /dev/sda# even with an activated dmraid, unless you fix /etc/blkid/blkid.tab and /etc/mtab.
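The blkid.tab fix mentioned above can be scripted. A minimal, hedged sketch - the pdc_bjfeeibeeb set name is taken from the dmraid output in this report, and the demo deliberately runs on a throwaway copy rather than the real /etc/blkid/blkid.tab (back up the file first if you try it for real):

```shell
# Rewrite stale /dev/sdaN references to their /dev/mapper equivalents.
# Demo file stands in for /etc/blkid/blkid.tab.
f=/tmp/blkid.tab.demo
printf '<device DEVNO="0xfd02" TYPE="ext3">/dev/sda2</device>' > "$f"

# \1 carries the partition number into the dm partition suffix "p<N>";
# -i.bak keeps a backup copy next to the edited file.
sed -i.bak 's|/dev/sda\([0-9][0-9]*\)|/dev/mapper/pdc_bjfeeibeebp\1|g' "$f"

cat "$f"   # → <device DEVNO="0xfd02" TYPE="ext3">/dev/mapper/pdc_bjfeeibeebp2</device>
```

For real use you would point `f` at /etc/blkid/blkid.tab and apply the same substitution to /etc/mtab (or simply remove /etc/mtab and reboot, as discussed later in this report).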
Has dmraid ever worked on this hardware with a RHEL5 release? Also, please clarify: it seems you're having problems installing? And you're also having problems on a system upgraded to 5.4 beta? Is that to say that RHEL5 U3 (before the upgrade) worked properly for you? If so, which kernel? And which version of the dmraid package? Comment #3 seems to indicate that a RHEL5 U3 Z-stream kernel also doesn't work.
Hi Mike,

To answer your questions: yes, dmraid works on this hardware (Promise FastTrak S150 TX4) up to kernel-2.6.18-128.1.14.el5. Newer kernels of U3 show the same problems as the kernels of U4. The installation problem is a real installation problem, as you should be able to work with an activated dmraid, which to my knowledge was not possible with any of the U4 kernels until Snap5. Please see also #505562 (dmraid with ichX / ESB2 and LSI firmware), which does not work on any RedHat or Novell linux distribution.
Winfried, the fact that there are 2 different RAID sets (pdc_bjfeeibeeb and pdc_cbbgheicjb) of type striped on 2 devices strikes me. RAID sets of type striped have to have 2 devices each, hence 4 devices total for the 2 RAID sets displayed, while only 2 (sda+sdb) are being discovered (see comment #3). Wouldn't you expect just *one* striped RAID set with the 2 devices in it to be activated and installed to? Please check what the Promise BIOS displays for the RAID configuration. WRT your comment #9: has the RAID config changed after you tested with kernel-2.6.18-128.1.14.el5?
Hello Heinz,

It all has to do with the Promise FastTrak S150 TX4: to boot a system with this controller, the drive has to be defined in a RAID. Even with 1 disk, that disk must be a RAID0 with just 1 disk. So you can forget about /dev/sdb. To demonstrate that everything is correct here, the output from my second rx220, running fedora 10:

uname -a
Linux rx220b 2.6.27.25-170.2.72.fc10.x86_64 #1 SMP Sun Jun 21 18:39:34 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@rx220b ~]# dmraid -b
/dev/sda: 156301488 total, "3MR01M1G"
[root@rx220b ~]# dmraid -r
/dev/sda: pdc, "pdc_cjgdhdfafh", stripe, ok, 156118928 sectors, data@ 0
[root@rx220b ~]# dmraid -s -c -c -c
pdc_cjgdhdfafh:156118912:128:stripe:ok:0:1:0
/dev/sda:pdc:pdc_cjgdhdfafh:stripe:ok:156118928:0
[root@rx220b ~]# ls /dev/mapper
control           pdc_cjgdhdfafhp2  pdc_cjgdhdfafhp5  pdc_cjgdhdfafhp8
pdc_cjgdhdfafh    pdc_cjgdhdfafhp3  pdc_cjgdhdfafhp6
pdc_cjgdhdfafhp1  pdc_cjgdhdfafhp4  pdc_cjgdhdfafhp7
[root@rx220b ~]# more /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: ST380013AS       Rev: 3.00
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: HL-DT-ST Model: DVD-ROM GDR8082N Rev: 0B11
  Type:   CD-ROM                           ANSI SCSI revision: 05

I think you would call this type of RAID0 with 1 disk "linear", but in the case of Promise it is called RAID0. The configuration for the tests with
kernel-2.6.18-128.1.14.el5
kernel-2.6.18-128.2.1.el5
kernel-2.6.18-160.el5
is identical. My system is on rhel5.4 snap5, I just boot different kernels.

Best regards,
Winfrid
Hi Winfried, I understand. Please attach the "dmraid -rD" metadata dump here in a bzip2'ed tarball for further analysis. Using the 5.4 version for this is fine. Thanks, Heinz
Created attachment 356025 [details]
requested tbz for dmraid config

[root@rx220a ~]# dmraid -rD
ERROR: ddf1: both header signatures bad on /dev/sdb
/dev/sda: pdc, "pdc_bjfeeibeeb", stripe, ok, 312368928 sectors, data@ 0
/dev/sdb: pdc, "pdc_djgfieghjb", stripe, ok, 312368928 sectors, data@ 0

uname -a
Linux rx220a 2.6.18-128.1.14.el5 #1 SMP Mon Jun 1 15:52:58 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
Winfrid, the .dat files are empty. You either need to try the older dmraid version or dd the metadata off the devices. Thanks, Heinz

(In reply to comment #13)
> Created an attachment (id=356025) [details]
> requested tbz for dmraid config
>
> [root@rx220a ~]# dmraid -rD
> ERROR: ddf1: both header signatures bad on /dev/sdb
> /dev/sda: pdc, "pdc_bjfeeibeeb", stripe, ok, 312368928 sectors, data@ 0
> /dev/sdb: pdc, "pdc_djgfieghjb", stripe, ok, 312368928 sectors, data@ 0
>
> uname -a
> Linux rx220a 2.6.18-128.1.14.el5 #1 SMP Mon Jun 1 15:52:58 EDT 2009 x86_64
> x86_64 x86_64 GNU/Linux
Hello Heinz,

I just tried rhel 4.8 with dmraid-1.0.0.rc14-9_RHEL4_U7.el4, and this one also reports:

-rw------- 1 root root  0 Aug  4 08:55 sda_pdc.dat
-rw------- 1 root root  2 Aug  4 08:55 sda_pdc.offset
-rw------- 1 root root 10 Aug  4 08:55 sda_pdc.size
-rw------- 1 root root  0 Aug  4 08:55 sdb_pdc.dat
-rw------- 1 root root  2 Aug  4 08:55 sdb_pdc.offset
-rw------- 1 root root 10 Aug  4 08:55 sdb_pdc.size

And I got the same for my backup rx220 with fedora 10 + dmraid-1.0.0.rc15-2.fc10.x86_64:

ls -l dmraid.pdc
total 8
-rw------- 1 root root  0 2009-08-04 09:01 sda.dat
-rw------- 1 root root  2 2009-08-04 09:01 sda.offset
-rw------- 1 root root 10 2009-08-04 09:01 sda.size

And SLES10 SP3 (Beta3) with dmraid-0.99_1.0.0rc13-0.8:

-rw------- 1 root root        0 Aug  4 09:05 sda_pdc.dat
-rw------- 1 root root        2 Aug  4 09:05 sda_pdc.offset
-rw------- 1 root root       10 Aug  4 09:05 sda_pdc.size
-rw------- 1 root root 41926656 Feb 19 14:16 sdb_ddf1.dat
-rw------- 1 root root       13 Feb 19 14:16 sdb_ddf1.offset
-rw------- 1 root root       10 Feb 19 14:16 sdb_ddf1.size
-rw------- 1 root root        0 Aug  4 09:05 sdb_pdc.dat
-rw------- 1 root root        2 Aug  4 09:05 sdb_pdc.offset
-rw------- 1 root root       10 Aug  4 09:05 sdb_pdc.size

But sdb_ddf1.* we can forget - it was not completely erased from a previous installation on a system with ESB2. Just a question: can I get rid of the ddf1 raid info without destroying the rest of the information on the disk?

And how can I provide non-empty dat files with dd? (Also, I am wondering if linear disks need any information in the dat file?)

Winfrid
(In reply to comment #15)
> Hello Heinz,
>
> I just tried rhel 4.8 with dmraid-1.0.0.rc14-9_RHEL4_U7.el4
> and also this one reports:
>
> -rw------- 1 root root  0 Aug  4 08:55 sda_pdc.dat
> -rw------- 1 root root  2 Aug  4 08:55 sda_pdc.offset
> -rw------- 1 root root 10 Aug  4 08:55 sda_pdc.size
> -rw------- 1 root root  0 Aug  4 08:55 sdb_pdc.dat
> -rw------- 1 root root  2 Aug  4 08:55 sdb_pdc.offset
> -rw------- 1 root root 10 Aug  4 08:55 sdb_pdc.size
>
> And I got the same for my backup rx220 with fedora 10 +
> dmraid-1.0.0.rc15-2.fc10.x86_64

Hrm, that's odd (read: a bug).

> ls -l dmraid.pdc
> total 8
> -rw------- 1 root root  0 2009-08-04 09:01 sda.dat
> -rw------- 1 root root  2 2009-08-04 09:01 sda.offset
> -rw------- 1 root root 10 2009-08-04 09:01 sda.size
>
> And SLES10 SP3 (Beta3) with dmraid-0.99_1.0.0rc13-0.8:
> -rw------- 1 root root        0 Aug  4 09:05 sda_pdc.dat
> -rw------- 1 root root        2 Aug  4 09:05 sda_pdc.offset
> -rw------- 1 root root       10 Aug  4 09:05 sda_pdc.size
> -rw------- 1 root root 41926656 Feb 19 14:16 sdb_ddf1.dat
> -rw------- 1 root root       13 Feb 19 14:16 sdb_ddf1.offset
> -rw------- 1 root root       10 Feb 19 14:16 sdb_ddf1.size
> -rw------- 1 root root        0 Aug  4 09:05 sdb_pdc.dat
> -rw------- 1 root root        2 Aug  4 09:05 sdb_pdc.offset
> -rw------- 1 root root       10 Aug  4 09:05 sdb_pdc.size
>
> but sdb_ddf1.* we can forget - it was not completely erased from a
> previous installation on a system with ESB2, just a question can I get
> rid of the ddf1 raid info without destroying the rest of information on the
> disk.
>
> And how can I provide non-empty dat files with dd?
> (Also I am wondering if linear disks need any information in the dat file?)

Yes, there's always metadata for any type of mapping. Maybe we got a strange offset which isn't supported properly. You need to search for the string "Promise Technology" starting at sector offset (i.e. 512-byte units) 2048 off the end of each device (sda+sdb).
Copy the absolute offset from the beginning of the device you found for that signature to the .offset files, and 2 KB from that offset to the respective .dat file.

Thanks,
Heinz
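Heinz's search-and-dump procedure can be sketched in shell. The function name below is made up, and the demo runs against a small scratch image with a planted signature instead of the real /dev/sda and /dev/sdb; on the real hardware you would call it as e.g. `dump_pdc_meta /dev/sda /tmp/sda_pdc`:

```shell
# Locate the "Promise Technology" signature within the last 2048 sectors
# of a device/image and dump 2 KiB of metadata starting at it.
dump_pdc_meta() {
    dev=$1; out=$2
    total=$(( $(stat -c %s "$dev") / 512 ))   # size in 512-byte sectors
    start=$(( (total - 2048) * 512 ))         # byte offset where scanning begins
    # grep -abo prints byte offsets relative to the scanned tail
    rel=$(tail -c +$((start + 1)) "$dev" | grep -abo 'Promise Technology' \
          | head -n1 | cut -d: -f1)
    [ -n "$rel" ] || return 1                 # signature not found
    abs=$(( start + rel ))                    # absolute byte offset of signature
    echo "$abs" > "$out.offset"
    dd if="$dev" of="$out.dat" bs=1 skip="$abs" count=2048 2>/dev/null
}

# demo: 1 MiB scratch image with the signature planted 4 KiB before the end
img=$(mktemp)
truncate -s 1M "$img"
printf 'Promise Technology, Inc.' \
    | dd of="$img" bs=1 seek=$((1048576 - 4096)) conv=notrunc 2>/dev/null
dump_pdc_meta "$img" /tmp/sda_pdc
cat /tmp/sda_pdc.offset   # → 1044480
```

On a block device, `stat -c %s` would have to be replaced by `blockdev --getsize64`; everything else is the same.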
Created attachment 356169 [details]
last 32k of /dev/sda

blockdev --getsz /dev/sda
312581808

Hope this helps
Created attachment 356170 [details]
last 32k of /dev/sdb

blockdev --getsz /dev/sdb
312581808
blockdev --getsz /dev/mapper/pdc_djgfieghjb
312368896
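For reference, a tail dump like the two attachments above can be produced with dd. The sketch below uses a small scratch image instead of the real /dev/sda; on the real device you would take the sector count from `blockdev --getsz` as shown in the comments above:

```shell
# Capture the last 32 KiB (64 x 512-byte sectors) of a block device/image.
# sectors=$(blockdev --getsz /dev/sda)   # 312581808 on the reporter's box
img=$(mktemp)
truncate -s 1M "$img"                      # stand-in for /dev/sda
sectors=$(( $(stat -c %s "$img") / 512 ))  # file equivalent of blockdev --getsz
dd if="$img" of=/tmp/tail-32k.bin bs=512 skip=$((sectors - 64)) count=64 2>/dev/null
stat -c %s /tmp/tail-32k.bin               # → 32768
```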
Created attachment 356199 [details]
Test binary for pdc single-disk RAID0 type RAID sets

Winfrid, this x86_64 static binary has detection for pdc single-disk RAID0 sets and changes their mapping to linear. Please test with "./dmraid_bz509962.static -ay" and see if it creates a linear mapping for your sda+sdb with correct access to the RAID sets' data.
rhel 5.4 snap5 with kernel-2.6.18-128.1.14.el5:

/tmp/./dmraid_bz509962.static -ay
ERROR: ddf1: both header signatures bad on /dev/sdb
RAID set "pdc_bjfeeibeeb" already active
RAID set "pdc_djgfieghjb" was activated
RAID set "pdc_bjfeeibeebp1" already active
RAID set "pdc_bjfeeibeebp2" already active
RAID set "pdc_bjfeeibeebp3" already active
RAID set "pdc_bjfeeibeebp5" already active
RAID set "pdc_bjfeeibeebp6" already active
RAID set "pdc_bjfeeibeebp7" already active
RAID set "pdc_bjfeeibeebp8" already active
RAID set "pdc_bjfeeibeebp9" already active
RAID set "pdc_bjfeeibeebp10" already active
RAID set "pdc_bjfeeibeebp11" already active
RAID set "pdc_djgfieghjbp1" was activated
RAID set "pdc_djgfieghjbp2" was activated
RAID set "pdc_djgfieghjbp5" was activated
RAID set "pdc_djgfieghjbp6" was activated
RAID set "pdc_djgfieghjbp7" was activated
RAID set "pdc_djgfieghjbp8" was activated

Same with the latest kernel:

uname -a
Linux rx220a 2.6.18-160.el5 #1 SMP Mon Jul 27 17:28:29 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
/tmp/dmraid_bz509962.static -ay
ERROR: ddf1: both header signatures bad on /dev/sdb
RAID set "pdc_bjfeeibeeb" was not activated
RAID set "pdc_djgfieghjb" was activated
RAID set "pdc_djgfieghjbp1" was activated
RAID set "pdc_djgfieghjbp2" was activated
RAID set "pdc_djgfieghjbp5" was activated
RAID set "pdc_djgfieghjbp6" was activated
RAID set "pdc_djgfieghjbp7" was activated
RAID set "pdc_djgfieghjbp8" was activated

I suppose that the RAID on /dev/sda is not activated because "/" (/dev/sda2) is already mounted.
dmraid -rD still does not dump correctly (*.dat are empty):

/tmp/dmraid_bz509962.static -rD
ERROR: ddf1: both header signatures bad on /dev/sdb
/dev/sda: pdc, "pdc_bjfeeibeeb", linear, ok, 312581745 sectors, data@ 0
/dev/sdb: pdc, "pdc_djgfieghjb", linear, ok, 312581745 sectors, data@ 0

ls -l dmraid_bz509962.static.pdc
total 16
-rw------- 1 root root  0 Aug  5 09:23 sda.dat
-rw------- 1 root root  2 Aug  5 09:23 sda.offset
-rw------- 1 root root 10 Aug  5 09:23 sda.size
-rw------- 1 root root  0 Aug  5 09:23 sdb.dat
-rw------- 1 root root  2 Aug  5 09:23 sdb.offset
-rw------- 1 root root 10 Aug  5 09:23 sdb.size

Does your new "dmraid" module also cover RAID on ESB2 with LSI firmware? I am asking because on my rx200s4 my SATA RAID cannot be activated (#505562).
Good, this fix seems to have tackled the mapping issue for pdc. You didn't mention whether data access is ok on the pdc_djgfieghjb partitions? Can you try running the static binary I sent on an inactive sda (i.e. no partitions mounted) to make sure it activates it ok?

The lsi format is a different issue: can you provide the metadata for that as well, attached to a new bz?
Sorry, #22 is the same as #20 - I refreshed my IE, so the old data were transmitted again.

In the case of the old kernel (2.6.18-128.1.14.el5), almost nothing has changed; I can access /dev/mapper/pdc_... on both disks. Only after booting this kernel (2.6.18-128.1.14.el5) after e.g. 2.6.18-160.el5, where my partitions were incorrectly mounted to /dev/sda#, do I have to take some precautions so that the partitions will be mounted on /dev/mapper/pdc_bjfeeibeebp# and not on /dev/sda#.

I have to correct /etc/blkid/blkid.tab, e.g.:
<device DEVNO="0xfd0a" TIME="1245676907" UUID="65a66dd7-43a6-4038-ad00-0d0b5dcbed6c" SEC_TYPE="ext2" TYPE="ext3">/dev/mapper/pdc_bjfeeibeebp11</device>
<device DEVNO="0xfd09" TIME="1245676907" UUID="8a81d510-3796-45e3-80bf-7f08148803bf" SEC_TYPE="ext2" TYPE="ext3">/dev/mapper/pdc_bjfeeibeebp10</device>
<device DEVNO="0xfd08" TIME="1245676907" UUID="cf79aaec-d9f0-4070-84cf-928aa65e2e70" SEC_TYPE="ext2" TYPE="ext3">/dev/mapper/pdc_bjfeeibeebp9</device>
<device DEVNO="0xfd06" TIME="1246889161" UUID="2c299cca-08e8-4939-9a4b-f711a0164c8d" SEC_TYPE="ext2" TYPE="ext3">/dev/mapper/pdc_bjfeeibeebp7</device>
<device DEVNO="0xfd05" TIME="1245676907" UUID="117d3d20-8645-49ab-a199-a076f83b2bc3" SEC_TYPE="ext2" TYPE="ext3" LABEL="/12">/dev/mapper/pdc_bjfeeibeebp6</device>
<device DEVNO="0xfd04" TIME="1245676907" UUID="4b0fb8c5-e253-41b9-bd22-b66387996666" SEC_TYPE="ext2" TYPE="ext3">/dev/mapper/pdc_bjfeeibeebp5</device>
<device DEVNO="0xfd07" TIME="1245676907" UUID="29245156-832a-4314-8732-b35edcb83f36" SEC_TYPE="ext2" TYPE="ext3" LABEL="fedora8">/dev/mapper/pdc_bjfeeibeebp8</device>
<device DEVNO="0xfd01" TIME="1246889567" LABEL="/boot" UUID="bd7535c7-3996-4237-951b-ff290a29d4a9" SEC_TYPE="ext2" TYPE="ext3">/dev/mapper/pdc_bjfeeibeebp1</device>
<device DEVNO="0xfd03" TIME="1246889551" PRI="40" TYPE="swap" LABEL="SWAP-pdc_bjfeei">/dev/mapper/pdc_bjfeeibeebp3</device>
<device DEVNO="0xfd02" TIME="1246889551" PRI="40" LABEL="/1" UUID="5412a7a8-13f8-470d-9aca-111bc2913bcf" SEC_TYPE="ext2" TYPE="ext3">/dev/mapper/pdc_bjfeeibeebp2</device>

and /etc/mtab, e.g.:
/dev/mapper/pdc_bjfeeibeebp2 / ext3 rw 0 0
proc /proc proc rw 0 0
sysfs /sys sysfs rw 0 0
devpts /dev/pts devpts rw,gid=5,mode=620 0 0
/dev/mapper/pdc_bjfeeibeebp1 /boot-loader ext3 rw 0 0
tmpfs /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0
/dev/mapper/pdc_djgfieghjbp1 /mnt2 ext3 rw 0 0

Prior to using the "new" dmraid module, it was enough to change /etc/blkid/blkid.tab and remove /etc/mtab. If I now remove /etc/mtab, my root partition is mounted to /dev/sda2 - /boot-loader will be mounted on /dev/mapper/pdc_bjfeeibeebp1. In the case of kernel 2.6.18-128.1.16.el5 or higher, only the RAID on /dev/sdb is activated, and there is no problem accessing the partitions on /dev/sdb via /dev/mapper/pdc_djgfieghjbp#.
Created attachment 356304 [details]
Static dmraid test binary with pdc zero .dat file issue

Winfrid, I have to discuss the mtab/blkid.tab issues with colleagues. For the time being, I've attached another static binary for you to test which fixes the zero .dat file issue (and the .offset file one as well) when dumping metadata with "-rD". Please try.
I just tested with kernel 2.6.18-128.1.14.el5, and it looks like you have fixed the dump problem - the size of *.dat is now 2 KB.

RAID set "pdc_bjfeeibeebp2" already active
RAID set "pdc_bjfeeibeebp3" already active
RAID set "pdc_bjfeeibeebp5" already active
RAID set "pdc_bjfeeibeebp6" already active
RAID set "pdc_bjfeeibeebp7" already active
RAID set "pdc_bjfeeibeebp8" already active
RAID set "pdc_bjfeeibeebp9" already active
RAID set "pdc_bjfeeibeebp10" already active
RAID set "pdc_bjfeeibeebp11" already active
RAID set "pdc_djgfieghjbp1" was activated
RAID set "pdc_djgfieghjbp2" was activated
RAID set "pdc_djgfieghjbp5" was activated
RAID set "pdc_djgfieghjbp6" was activated
RAID set "pdc_djgfieghjbp7" was activated
RAID set "pdc_djgfieghjbp8" was activated
[root@rx220a tmp]# ./dmraid_bz509962a.static -rD
ERROR: ddf1: both header signatures bad on /dev/sdb
/dev/sda: pdc, "pdc_bjfeeibeeb", linear, ok, 312581745 sectors, data@ 0
/dev/sdb: pdc, "pdc_djgfieghjb", linear, ok, 312581745 sectors, data@ 0
[root@rx220a tmp]# ls -la dmraid_bz509962a.static.pdc/
total 36
drwxr-xr-x 2 root root 4096 Aug  5 14:21 .
drwxr-xr-x 8 root root 4096 Aug  5 14:21 ..
-rw------- 1 root root 2048 Aug  5 14:21 sda.dat
-rw------- 1 root root   13 Aug  5 14:21 sda.offset
-rw------- 1 root root   10 Aug  5 14:21 sda.size
-rw------- 1 root root 2048 Aug  5 14:21 sdb.dat
-rw------- 1 root root   13 Aug  5 14:21 sdb.offset
-rw------- 1 root root   10 Aug  5 14:21 sdb.size
[root@rx220a tmp]# cat dmraid_bz509962a.static.pdc/sda.dat
Promise Technology, Inc.!~q !t ÖÕÔÓÒÑÐÏÎÍÌËÊÉÈÇÆÅÄÃÂÁÀ¿¾½¼»º¹¸·¶µ´³²±°¯®¬«ª©¨§¦¥¤£¢¡ ~}|{zyxwvutsrqponmlkjihgfedcba`_^]\[ZYXWVUTSRQPONMLKJIHGFEDCBA@?>=<;:9876543210/.-,+*)('&%$#"!
And now the same with kernel 2.6.18-160.el5:

[root@rx220a ~]# /tmp/dmraid_bz509962a.static -ay
ERROR: ddf1: both header signatures bad on /dev/sdb
RAID set "pdc_bjfeeibeeb" was not activated
RAID set "pdc_djgfieghjb" was activated
RAID set "pdc_djgfieghjbp1" was activated
RAID set "pdc_djgfieghjbp2" was activated
RAID set "pdc_djgfieghjbp5" was activated
RAID set "pdc_djgfieghjbp6" was activated
RAID set "pdc_djgfieghjbp7" was activated
RAID set "pdc_djgfieghjbp8" was activated
[root@rx220a tmp]# /tmp/dmraid_bz509962a.static -rD
ERROR: ddf1: both header signatures bad on /dev/sdb
/dev/sda: pdc, "pdc_bjfeeibeeb", linear, ok, 312581745 sectors, data@ 0
/dev/sdb: pdc, "pdc_djgfieghjb", linear, ok, 312581745 sectors, data@ 0
[root@rx220a tmp]# ls -la dmraid_bz509962a.static.pdc
total 36
drwxr-xr-x 2 root root 4096 Aug  5 14:32 .
drwxr-xr-x 8 root root 4096 Aug  5 14:32 ..
-rw------- 1 root root 2048 Aug  5 14:32 sda.dat
-rw------- 1 root root   13 Aug  5 14:32 sda.offset
-rw------- 1 root root   10 Aug  5 14:32 sda.size
-rw------- 1 root root 2048 Aug  5 14:32 sdb.dat
-rw------- 1 root root   13 Aug  5 14:32 sdb.offset
-rw------- 1 root root   10 Aug  5 14:32 sdb.size
[root@rx220a tmp]# tar cjvf dmraid_bz509962a.static.pdc-2.6.18-160.el5.tbz dmraid_bz509962a.static.pdc
dmraid_bz509962a.static.pdc/
dmraid_bz509962a.static.pdc/sda.dat
dmraid_bz509962a.static.pdc/sdb.dat
dmraid_bz509962a.static.pdc/sda.offset
dmraid_bz509962a.static.pdc/sdb.size
dmraid_bz509962a.static.pdc/sda.size
dmraid_bz509962a.static.pdc/sdb.offset

Please have also a look at #505562, which I will update for ddf1 on rhel6.0-alpha.
Created attachment 356307 [details] dmraid DUMP (kernel 2.6.18-128.1.14.el5 )
Created attachment 356308 [details] dmraid DUMP for kernel 2.6.18-160.el5
Created attachment 358415 [details]
rhel 5.4 rc2 fails to install on dmraid - /tmp/mapper/pdc_.... not found

Installation fails before partitions can be assigned:
error opening /tmp/mapper/pdc_.......: No such device or address

The attachment shows output from several dmraid commands; after entering dmraid -ay, ls does not report any dmraid devices. After rhel 5.4 rc2 I tried rhel 6 alpha, which could be installed without any problems!
Problem still exists on the following kernels:
2.6.18-128.7.1.el5
2.6.18-164.el5

There is also another problem - after switching back from kernel 2.6.18-128.7.1.el5 to 2.6.18-128.1.14.el5, the system tried to run a filesystem check on "/" (root), which did not work because the resource was busy - the check was done on /dev/sda2 instead of /dev/mapper/pdc_..........p2.
In the meantime I have installed fedora 12 successfully on the same HW. There was only 1 major problem (I had to activate dmraid manually during the installation) - but the *.dat files are still empty:

[root@rx220a dmraid.pdc]# ls -l
total 16
-rw-------. 1 root root  0 2009-08-28 11:29 sda.dat
-rw-------. 1 root root  2 2009-08-28 11:29 sda.offset
-rw-------. 1 root root 10 2009-08-28 11:29 sda.size
-rw-------. 1 root root  0 2009-08-28 11:29 sdb.dat
-rw-------. 1 root root  2 2009-08-28 11:29 sdb.offset
-rw-------. 1 root root 10 2009-08-28 11:29 sdb.size
Hi,

I think I know now why kernel-2.6.18-128.1.14 is okay and all later kernels are not; please have a look at the output of lsinitrd. E.g. the initrd of kernel-2.6.18-164 does not contain:

> dmraid -ay -i -p "pdc_bjfeeibeeb"
> kpartx -a -p p "/dev/mapper/pdc_bjfeeibeeb"
> resume /dev/mapper/pdc_bjfeeibeebp3

Winfrid

PS: I do not know how to check the initrd of the rhel5.4 install kernel - I tried to copy isolinux/initrd.img to disk and run lsinitrd, but this gives me:

[root@rx220a ~]# lsinitrd /tmp/initrd.img
/tmp/initrd.img:
========================================================================
drwxrwxr-x  11 root root       0 Aug 19 09:12 .
drwxrwxr-x   2 root root       0 Aug 19 09:12 modules
-rw-r--r--   1 root root  128769 Aug 19 09:12 modules/modules.alias
-rw-r--r--   1 root root 5819673 Aug 19 09:12 modules/modules.cgz
-rw-r--r--   1 root root   21449 Aug 19 09:12 modules/modules.dep
-rw-r--r--   1 root root   72214 Aug 19 09:12 modules/pci.ids
-rw-r--r--   1 root root    6574 Aug 19 09:12 modules/module-info
-rw-rw-r--   1 root root     123 Aug 19 09:12 .profile
drwxrwxr-x   6 root root       0 Aug 19 09:12 var
drwxrwxr-x   3 root root       0 Aug 19 09:12 var/lock
drwxrwxr-x   2 root root       0 Aug 19 09:12 var/lock/rpm
drwxrwxr-x   2 root root       0 Aug 19 09:12 var/run
drwxrwxr-x   2 root root       0 Aug 19 09:12 var/state
drwxrwxr-x   2 root root       0 Aug 19 09:12 var/lib
lrwxrwxrwx   1 root root       9 Aug 19 09:12 var/lib/xkb -> ../../tmp
drwxrwxr-x   2 root root       0 Aug 19 09:12 sys
drwxrwxr-x   2 root root       0 Aug 19 09:12 selinux
-rw-rw-r--   1 root root      90 Aug 19 09:12 .buildstamp
drwxrwxr-x   2 root root       0 Aug 19 09:12 sbin
lrwxrwxrwx   1 root root      11 Aug 19 09:12 sbin/sh -> /usr/bin/sh
lrwxrwxrwx   1 root root       6 Aug 19 09:12 sbin/insmod -> loader
lrwxrwxrwx   1 root root       6 Aug 19 09:12 sbin/halt -> ./init
lrwxrwxrwx   1 root root       6 Aug 19 09:12 sbin/rmmod -> loader
lrwxrwxrwx   1 root root       6 Aug 19 09:12 sbin/modprobe -> loader
-rwxr-xr-x   1 root root  559744 Aug 19 09:12 sbin/init
-rwxr-xr-x   1 root root 2377976 Aug 19 09:12 sbin/loader
lrwxrwxrwx   1 root root       6 Aug 19 09:12 sbin/reboot -> ./init
lrwxrwxrwx   1 root root       6 Aug 19 09:12 sbin/poweroff -> ./init
lrwxrwxrwx   1 root root       4 Aug 19 09:12 bin -> sbin
drwxrwxr-x   2 root root       0 Aug 19 09:12 dev
drwxrwxr-x   3 root root       0 Aug 19 09:12 etc
-rw-rw-r--   1 root root      31 Aug 19 09:12 etc/passwd
-rw-r--r--   1 root root    3116 Aug 19 09:12 etc/lang-table
-rw-rw-r--   1 root root       7 Aug 19 09:12 etc/arch
drwxrwxr-x   9 root root       0 Aug 19 09:12 etc/terminfo
drwxrwxr-x   2 root root       0 Aug 19 09:12 etc/terminfo/d
drwxrwxr-x   2 root root       0 Aug 19 09:12 etc/terminfo/s
drwxrwxr-x   2 root root       0 Aug 19 09:12 etc/terminfo/a
-rw-r--r--   1 root root    1481 Aug 19 09:12 etc/terminfo/a/ansi
drwxrwxr-x   2 root root       0 Aug 19 09:12 etc/terminfo/v
-rw-r--r--   1 root root    1188 Aug 19 09:12 etc/terminfo/v/vt102
-rw-r--r--   1 root root    1194 Aug 19 09:12 etc/terminfo/v/vt100
-rw-r--r--   1 root root    1059 Aug 19 09:12 etc/terminfo/v/vt100-nav
drwxrwxr-x   2 root root       0 Aug 19 09:12 etc/terminfo/l
-rw-r--r--   1 root root    1714 Aug 19 09:12 etc/terminfo/l/linux
drwxrwxr-x   2 root root       0 Aug 19 09:12 etc/terminfo/b
drwxrwxr-x   2 root root       0 Aug 19 09:12 etc/terminfo/x
lrwxrwxrwx   1 root root      12 Aug 19 09:12 etc/mtab -> /proc/mounts
-rw-r--r--   1 root root   12741 Aug 19 09:12 etc/keymaps.gz
-rw-r--r--   1 root root  219164 Aug 19 09:12 etc/loader.tr
-rw-r--r--   1 root root    5473 Aug 19 09:12 etc/screenfont.gz
lrwxrwxrwx   1 root root      10 Aug 19 09:12 init -> /sbin/init
drwxrwxr-x   2 root root       0 Aug 19 09:12 tmp
drwxrwxr-x   2 root root       0 Aug 19 09:12 proc
========================================================================
init
cpio: premature end of file
========================================================================
Created attachment 359770 [details]
output of lsinitrd from kernel 2.6.18-128.1.14.el5
Created attachment 359771 [details]
output of lsinitrd from kernel 2.6.18-164.el5
Winfrid,

Can you try the following please:
1) Boot into a working kernel, so that / is mounted as /dev/mapper/...
2) Fix up blkid and make sure all the other filesystems are also correctly mounted
3) cp the dmraid attached to comment #24 to /sbin/dmraid.static
4) Regenerate the initrd for the latest kernel like this:
   mkinitrd -f /boot/initrd-2.6.18-164.EL5.img 2.6.18-164.EL5

And then reboot into kernel-2.6.18-164.EL5. Hopefully this will fix things.
Heinz,

I suppose I should find with lsinitrd the command
kpartx -a -p p "/dev/mapper/pdc_bjfeeibeeb"
and also
dmraid -ay -i -p "pdc_bjfeeibeeb"

Both commands are still missing in the new initrd. So I suppose a reboot with the newly built initrd is useless.

PS: the lsinitrd command is not on rhel-5 - do you have a version for rhel-5?

Winfrid
(In reply to comment #37)
> Heinz,
>
> I suppose I should find with lsinitrd the command
> kpartx -a -p p "/dev/mapper/pdc_bjfeeibeeb"
> and also
> dmraid -ay -i -p "pdc_bjfeeibeeb"
>
> Both commands are still missing in the new initrd.
>
> So I suppose a reboot with the new build initrd is useless.
>
> PS: lsinitrd command is not on rhel-5 - do you have a version for rhel-5 ?
>
> Winfrid

Hi Winfrid,

I assume this was in response to my (Hans') last comment. What is the output of:

/sbin/dmraid.static -ay -i -p -t

when run on the booted system?
Winfried, lsinitrd is not in RHEL5 AFAIK, but it's a simple script essentially doing:

zcat $initrd_file | cpio --extract --verbose --quiet --list
zcat $initrd_file | cpio --extract --verbose --quiet --to-stdout init
Hi Hans & Heinz,

First, thanks for the lsinitrd workaround - but it looks like it is not needed: I can use /sbin/lsinitrd from fedora 12. I assumed that mkinitrd is the problem, so I am now using mkinitrd-5.1.19.6-44 for this test. Also I had a closer look at /sbin/mkinitrd and something is strange. Running from the command line:

[root@rx220a boot]# /sbin/dmraid.static -ay -i -p -t 2> /dev/null | egrep -iv "^no "
pdc_bjfeeibeeb: 0 312581745 linear /dev/sda 0
pdc_djgfieghjb: 0 312581745 linear /dev/sdb 0

After this I modified mkinitrd, inserting two echo statements:

if [ "$withdmraid" == "1" ]; then
    echo Scanning and configuring dmraid supported devices
    emit "echo Scanning and configuring dmraid supported devices"
    for x in $(/sbin/dmraid.static -ay -i -p -t 2>/dev/null | \
            egrep -iv "^no " | awk -F ':' '{ print $1 }') ; do
        dmname=$(resolve_dm_name $x)
        echo $dmname
        [ -z "$dmname" ] && continue
        emit "dmraid -ay -i -p \"$dmname\""
        emit "kpartx -a -p p \"/dev/mapper/$dmname\""
    done
fi

But I get just the output:
Scanning and configuring dmraid supported devices
Winfrid,

Can you please also put:
echo "unresolved dmname $s"
above the:
dmname=$(resolve_dm_name $x)
line and then run mkinitrd again? I think resolve_dm_name is not working properly here.

Thanks,
Hans
/sbin/mkinitrd -f /boot/initrd-2.6.18-164.el5.img 2.6.18-164.el5
Scanning and configuring dmraid supported devices
unresolved dmname
unresolved dmname
Oops, sorry, that should of course have been:

echo "unresolved dmname $x"

above the:
dmname=$(resolve_dm_name $x)

Notice the $x instead of the $s in my previous comment. So sorry,

Hans
Hi Hans,

I suppose
echo "unresolved dmname $s"
is a typo - it should be
echo "unresolved dmname $x"

[root@rx220a boot]# /sbin/mkinitrd -f /boot/initrd-2.6.18-164.el5.img 2.6.18-164.el5
Scanning and configuring dmraid supported devices
unresolved dmname pdc_bjfeeibeeb
unresolved dmname pdc_djgfieghjb
Winfrid,

Yes, it was a typo, sorry. So it is indeed resolve_dm_name which is to blame. Can you create a test.sh file with the following in it:

---begin---
#!/bin/sh

. /etc/rc.d/init.d/functions

resolve_dm_name pdc_bjfeeibeeb
---end---

And then do:
bash -x test.sh &> log

and attach the resulting log file here, thanks. Also, you are doing this on a system where /dev/mapper/pdc_bjfeeibeeb and /dev/mapper/pdc_djgfieghjb exist, right? IOW, it was booted with a kernel with a working initrd?
Created attachment 359809 [details]
requested output

Hi Hans,

Finding a proper other system where /dev/mapper/pdc_... is available may not be so easy - fedora 12 does not like your script ("command not found").
Winfrid, so all your attempted mkinitrd runs so far have been on a system which was not using /dev/mapper/pdc* but using /dev/sda# directly instead? mkinitrd failing there is no surprise. I thought you said the system did boot properly when using kernel-2.6.18-128.1.14.el5?
Created attachment 359816 [details]
test.sh run on fedora 11

Hi Hans,

To straighten everything out: the following linux distributions work with dmraid installed and Promise SATA RAID:
- any fedora since I can remember (at least from fedora 8; okay, there were some kernels which did not work) - there is only a small exception with fedora 12: it does not start with /dev/mapper/pdc_..., root is assigned to /dev/dm-#
- openSUSE, at least from 10.2
- SLES 11

Now to rhel-5: I think I started beta testing of 5.4 shortly after kernel-2.6.18-128.1.14.el5, so my system has used rpms from the 5.4 beta since that time - which might explain why my system still uses device-mapper with old kernels, but all newer kernels from 5.3 (2.6.18-128.1.16 and higher) plus all kernels from rhel-5.4 do not activate dmraid.
Winfrid, I know that you are doing your best to try and help debug this. But as well intended as it is, you're not helping by running all these different tests and providing all this info without answering the questions asked and running the tests asked for. I have a hunch what the problem is, but without the necessary info I cannot be sure. So first a few questions, some of which have been asked before but not answered, and some which have been answered but which I would like confirmed to be sure.

1) This bug is about, and all tests so far have been run on, a system with a Promise BIOS RAID controller, with 2 striped raid sets each consisting of a single disk (so there is no RAID going on, but the Promise controller cannot boot from a disk without it being part of a RAID set). Correct?

2) In comment 9, you say that the system does boot properly using kernel-2.6.18-128.1.14.el5. When you boot it with this kernel, what is the output of:

ls -l /dev/mapper/pdc_bjfeeibeeb*

3) If the output of 2) is not "no such file or directory", can you please run:

mkinitrd -f /boot/initrd-2.6.18-164.el5.img 2.6.18-164.el5

And then do:

lsinitrd /boot/initrd-2.6.18-164.el5.img > log

and attach the output here?
1) Correct - but if you feel better I could repeat the test also with a RAID1.

2) [root@rx220a ~]# uname -a
Linux rx220a 2.6.18-128.1.14.el5 #1 SMP Mon Jun 1 15:52:58 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@rx220a ~]# ls /dev/mapper/
control pdc_bjfeeibeebp10 pdc_bjfeeibeebp5 pdc_bjfeeibeebp8 pdc_bjfeeibeeb pdc_bjfeeibeebp2 pdc_bjfeeibeebp6 pdc_bjfeeibeebp9 pdc_bjfeeibeebp1 pdc_bjfeeibeebp3 pdc_bjfeeibeebp7

Note: The second RAID disk is currently not online, but this has to do with comment #38 and with the fact that my system is 5.4 - only the kernel is old.
Created attachment 359980 [details] requested output from lsinitrd ( kernel-2.6.18-164.el5 ) [root@rx220a ~]# mkinitrd -f /boot/initrd-2.6.18-164.el5.img 2.6.18-164.el5 [root@rx220a ~]# /mnt3/sbin/lsinitrd /boot/initrd-2.6.18-164.el5.img > /tmp/log init
(In reply to comment #52) > Created an attachment (id=359980) [details] > requested output from lsinitrd ( kernel-2.6.18-164.el5 ) > > [root@rx220a ~]# mkinitrd -f /boot/initrd-2.6.18-164.el5.img 2.6.18-164.el5 > [root@rx220a ~]# /mnt3/sbin/lsinitrd /boot/initrd-2.6.18-164.el5.img > /tmp/log > init

1) I assume this was done running under 2.6.18-128.1.14.el5, correct?

2) Can you please try again (while running under 2.6.18-128.1.14.el5) to run the script I gave in comment #46? And if it fails, can you please run it with just bash instead of bash -x and report how it fails? Thanks, Hans
Created attachment 359981 [details] requested output from kernel-2.6.18-128.1.14.el5

mkinitrd -f /boot/initrd-2.6.18-128.1.14rerun.el5.img 2.6.18-128.1.14
/mnt3/sbin/lsinitrd /boot/initrd-2.6.18-128.1.14rerun.el5.img > /tmp/log-2.6.18-128.1.14rerun

It looks like the current mkinitrd has a problem, doesn't it?
(In reply to comment #54) > Created an attachment (id=359981) [details] > requested output from kernel-2.6.18-128.1.14.el5 > > mkinitrd -f /boot/initrd-2.6.18-128.1.14rerun.el5.img 2.6.18-128.1.14 > /mnt3/sbin/lsinitrd /boot/initrd-2.6.18-128.1.14rerun.el5.img > > /tmp/log-2.6.18-128.1.14rerun > > It looks like, that the current mkinitrd has a problem or ?

Yes, but this is caused by resolve_dm_name failing, which is outside of mkinitrd. Can you please boot into kernel 2.6.18-128.1.14.el5, if not running that already, and then run the script I gave in comment #46? And if it fails, can you please run it with just "bash test.sh" instead of "bash -x test.sh" and report how it fails?
Created attachment 359989 [details] requested output from test.sh
(In reply to comment #19) > Created an attachment (id=356199) [details] > Test binary for pdc single disk RAID0 type RAID sets > > Winfrid, > > this x86_64 static binary has a detection for pdc single disks RAID0 sets in > and changes the mapping to linear. > Heinz, I just got some anaconda logs of a failed RHEL-5.4 install on this system from Winfrid by mail. Can you please do a scratch build of dmraid for RHEL-5.4 with the "pdc single disks RAID0 sets in and changes the mapping to linear" fix included? Then I can pick up libdmraid.so from that and do an updates.img for Winfrid from that. I expect / hope that once he can do a fresh, non-bugged install of 5.4 from scratch using an updates.img, the mkinitrd issues will then go away by themselves. Thanks, Hans Note to self: Winfrid is using x86_64
(In reply to comment #57) > (In reply to comment #19) > > Created an attachment (id=356199) [details] [details] > > Test binaray for pdc single disk RAID0 type RAID sets > > > > Winfrid, > > > > this x86_64 static binary has a detection for pdc single disks RAID0 sets in > > and changes the mapping to linear. > > > > Heinz, > > I just got some anaconda logs of a failed RHEL-5.4 install on this system from > Winfrid by mail, can you please do a scratch build of dmraid for RHEL-5.4 with > the "pdc single disks RAID0 sets in and changes the mapping to linear" fix > included, then I can pickup libdmraid.so from that and do an updates.img for > Winfrid from that. Hans, packages with that fix built. Please fetch from scratch/heinzm/task_1979277 Heinz > I expect / hope that once he can do a fresh, non bugged > install of 5.4 from scratch using an updates.img, that the mkinitrd issues will > then go away by themselves. > > Thanks, > > Hans > > > Note to self: Winfrid is using x86_64
Winfrid,

It turns out that we have exactly the same machine as the one you are experiencing this issue with in one of our labs. If you had reported this bug through your TAM, he would have been able to reproduce it there, so next time please follow that path. I'm getting this machine set up with a single disk raid set config and 5.3 installed on it, and then I'll debug things further there.

In the meantime I've already created an updates.img for 5.4, so if you want to you can give that a try. Download:

http://people.atrpms.net/~hdegoede/updates509962-x86_64.img

and dd it to a floppy, then add "updates" to the cmdline when starting the 5.4 installer, and point it to fd0 when asked for your updates disk. After this it should hopefully get past the traceback you mailed me.

Note: if the install completes successfully, do not reboot! Instead go to tty2 and do:

1) copy this file:
http://people.atrpms.net/~hdegoede/dmraid-1.0.0.rc13-53_bz509962.el5.x86_64.rpm
to /mnt/sysimage
2) chroot /mnt/sysimage
3) mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

If the kernel in the installer is a different version than the one in /mnt/sysimage/boot (see the existing initrd), replace $(uname -r) with the kernel version in /mnt/sysimage.
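The last caveat - substituting the installed kernel version for $(uname -r) when the installer kernel differs from the target's - can be sketched as follows. The temp directory and the initrd filename here are stand-ins so the snippet is self-contained; on the real system the path would be /mnt/sysimage/boot and the version whatever initrd already sits there.

```shell
# Hypothetical stand-in for /mnt/sysimage so the sketch is runnable;
# the initrd name is taken from the kernel version discussed above.
sysimage=$(mktemp -d)
mkdir -p "$sysimage/boot"
touch "$sysimage/boot/initrd-2.6.18-164.el5.img"

# Derive the kernel version from the existing initrd name instead of
# trusting $(uname -r), which would report the installer's kernel
kver=$(basename "$sysimage"/boot/initrd-*.img)
kver=${kver#initrd-}
kver=${kver%.img}
echo "$kver"
# On the real system, inside the chroot, one would then run:
#   mkinitrd -f /boot/initrd-$kver.img $kver
```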
Sorry Hans, I followed the "official" path via issue-tracker and it did not work at all - the previous TAM (Anders Karlson) asked me for an rx220 which I could not provide, and Gary Smith, our new TAM, was already involved in the dmraid problems. After more than one year in which nothing had changed I simply gave up, because at that time the problem was fixed in fedora 11. After this experience I decided not to use the issue-tracker any more. Nevertheless, our TAM was on CC the whole time. Have a nice evening, Winfrid
Created attachment 360676 [details] Proposed activation fix
(In reply to comment #61) > Created an attachment (id=360676) [details] > Proposed activation fix

Tested positive on the test system mentioned in comment#59, which booted ok with all pdc ATARAID partitions mapped correctly and mounted/used as PV:

[root@dhcp-151 boot]# pvs
  PV                          VG         Fmt  Attr PSize  PFree
  /dev/mapper/pdc_eficdaadcp2 VolGroup00 lvm2 a-   74.31G     0
[root@dhcp-151 boot]# df
Filesystem                      1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  73481392 2333872  67354624   4% /
/dev/mapper/pdc_eficdaadcp1        101086   26961     68906  29% /boot
tmpfs                              447888       0    447888   0% /dev/shm
/dev/hda                          3467786 3467786         0 100% /media/RHEL_5.3 x86_64 DVD

Because this configuration (RAID0 with 1 disk) is exotic, 5.5 inclusion seems appropriate.
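For context, the "changes the mapping to linear" approach from comment #19 amounts to emitting a linear device-mapper table for a one-member "stripe" set instead of a striped one. A hedged sketch of the two table lines follows; the sector count is taken from the dmraid -tay output earlier in this bug, while the striped-table parameter layout (stripe count and chunk size fields) and the 256-sector chunk value are assumptions for illustration, not the exact tables dmraid emitted.

```
# before (one-member stripe; "1 256" = stripe count and chunk size, assumed):
0 312368896 striped 1 256 /dev/sda 0
# after (linear mapping, no striping parameters needed):
0 312368896 linear /dev/sda 0
```

Both map the same 312368896 sectors of /dev/sda; the linear form simply avoids the degenerate single-stripe case that failed to activate.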
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux maintenance release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux Update release for currently deployed products. This request is not yet committed for inclusion in an Update release.
Fix in repository. Will build once bz#513402 is ack'ed.
With the help of Gary Smith, who has physical access to hardware with this specific (Promise FastTrak S150 TX4) controller, it has been verified that:
- the installation does not correctly recognize an ataraid stripe made with only one disk, and installation is not possible
- in the latest rhel5.5rc (kernel .194.el5) the ataraid volume is correctly recognized by anaconda and installation/boot go as expected
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you. http://rhn.redhat.com/errata/RHSA-2010-0178.html
This issue is really fixed by the fix for bug 513402 (see comment#68) in the dmraid package. All comments related to the kernel erratum RHSA-2010-0178, such as comment#72 and comment#74, point to the wrong erratum. Fixing the bugzilla state. *** This bug has been marked as a duplicate of bug 513402 ***