Description of problem:
I have a Fujitsu Primergy RX200S4 with
00:1f.2 RAID bus controller: Intel Corporation 631xESB/632xESB SATA RAID Controller (rev 09)
Due to a hardware shortage I just put in 1 SATA disk with RAID0.

Version-Release number of selected component (if applicable):

How reproducible:
Always, on systems with ESB2 or ICHxR SATA RAID (ddf1).

Steps to Reproduce:
1. Have at least 1 disk, RAID0 formatted + GPT (did not test whether the problem exists without GPT)
2. Install Fedora

Actual results:
After first boot the system comes up without RAID.

Expected results:
System is started with RAID.

Additional info:

dmraid -b /dev/sda:
156301488 total, "5JVEY7CL"

dmraid -r /dev/sda:
ddf1, ".ddf1_disks", GROUP, ok, 154296320 sectors, data@ 0

[root@rx200s4 ~]# dmraid -s -c -c -c
ddf1_4c5349202020202080862682000000003763983d00000a28:154296320:128:stripe:ok:0:1:0
/dev/sda:ddf1:ddf1_4c5349202020202080862682000000003763983d00000a28:stripe:ok:154296320:0

[root@rx200s4 ~]# ls /dev/mapper/
control

Content of /boot/grub/grub.conf:

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You do not have a /boot partition. This means that
# all kernel and initrd paths are relative to /, eg.
# root (hd0,4)
# kernel /boot/vmlinuz-version ro root=/dev/mapper/ddf1_4c5349202020202080862682000000003763983c00000a28p5
# initrd /boot/initrd-version.img
#boot=/dev/ddf1_4c5349202020202080862682000000003763983c00000a28
default=0
timeout=0
splashimage=(hd0,4)/boot/grub/splash.xpm.gz
hiddenmenu
title Fedora (2.6.29.4-167.fc11.x86_64)
    root (hd0,4)
    kernel /boot/vmlinuz-2.6.29.4-167.fc11.x86_64 ro root=UUID=e26a9832-5481-457d-bf65-233e71693db5 rhgb quiet
    initrd /boot/initrd-2.6.29.4-167.fc11.x86_64.img
RAID0 needs at least two disks to stripe on. You can have a linear mapping on just one disk. Please provide "dmraid -s" output for the full status, and "dmraid -tay" output so we have the mapping table for completeness.
dmraid -s
*** Group superset .ddf1_disks
--> Subset
name   : ddf1_4c5349202020202080862682000000003763b66800000a28
size   : 154296320
stride : 128
type   : stripe
status : ok
subsets: 0
devs   : 1
spares : 0

[root@rx200s4 ~]# dmraid -tay
ddf1_4c5349202020202080862682000000003763b66800000a28: 0 154296320 linear /dev/sda 0
BTW - there are also other problems with my configuration, e.g. I am not able to mount any other partition on the disk:

[root@rx200s4 ~]# mount /dev/sda2 /mnt1
mount: you must specify the filesystem type
[root@rx200s4 ~]# mount -t ext4 /dev/sda2 /mnt1
mount: special device /dev/sda2 does not exist
(In reply to comment #3)
> BTW - there are also other problems with my configuration, e.g.
> I am not able to mount any other partition on the disk :
>
> [root@rx200s4 ~]# mount /dev/sda2 /mnt1
> mount: you must specify the filesystem type
> [root@rx200s4 ~]# mount -t ext4 /dev/sda2 /mnt1
> mount: special device /dev/sda2 does not exist

The mapping table in comment #2 shows it's mapping the whole device /dev/sda at offset 0, not a partition. So if the whole device is being used with DDF format and the mapped device is then partitioned, only those partition mappings can be mounted, not partitions on the underlying device.

BTW: "dmraid -ay" doesn't work on that setup?
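To illustrate the point above: once dmraid maps the whole disk, partitions have to be reached through the dm device, not through /dev/sdaN. A hedged sketch (the set name is abbreviated here; the real one is the long ddf1_... name from this report):

```shell
# illustrative only - names abbreviated; run as root on the affected box
dmraid -ay                               # activate the set -> /dev/mapper/ddf1_...
kpartx -a /dev/mapper/ddf1_...           # map its partitions as ddf1_...p1, p2, ...
mount /dev/mapper/ddf1_...p2 /mnt1       # instead of mounting /dev/sda2
```

This is why "mount /dev/sda2" fails once the RAID set is active: the kernel no longer exposes partitions on the component device.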
dmraid -ay -v -v -v
WARN: locking /var/lock/dmraid/.lock
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: ddf1 metadata discovered
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
NOTICE: added /dev/sda to RAID set ".ddf1_disks"
RAID set "ddf1_4c534920202020208086268200000000376a2f2100000a28" was not activated
INFO: Activating GROUP raid set ".ddf1_disks"
WARN: unlocking /var/lock/dmraid/.lock
Just a remark - once a partition has been mounted without an activated dmraid, it will keep being mounted as /dev/sda# even with dmraid activated, unless you fix /etc/blkid/blkid.tab and /etc/mtab.
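A hedged sketch of that workaround (file paths are the ones named in the comment; blkid.tab is a cache that blkid regenerates on its next run, so removing it is enough to force a re-probe against the dm devices):

```shell
# illustrative only - forces blkid to forget the stale /dev/sdaN entries
rm /etc/blkid/blkid.tab
# /etc/mtab entries for the already-mounted /dev/sdaN partitions would still
# need to be corrected (or the partitions remounted from /dev/mapper/...)
```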
In the meantime I switched my rx200s4 to rhel-6-alpha1 and I see the same problems as on Fedora, but maybe I now have additional information which might help to solve the problem.

rhel-6 installs on a linear RAID (don't blame me about this - I just have a shortage of SATA disks) without problems. But first boot fails - in my test I saw that fsck fails because UUID=bc5bba77-a371-4832-bc63-32301a9a036c (/boot) and UUID=73cfc698-6c53-471e-a81e-f8c181e6965c (/) are not found. So I disabled fsck in /etc/fstab and now the system comes up, so I can complete the installation. The only remaining problem is that dmraid is not activated: root (/) is mounted on /dev/sda5 and /boot is mounted on /dev/sda1.

Now I did two additional tests:
I booted the current ubuntu 9.10 (dvd from Aug. 4) and the live install had no problems accessing the partitions /dev/mapper/ddf1_............p1/5
Next I tried the same with rhel6.0 in rescue mode; here too dmraid had activated the disk.

For more details please see the attachments.
Created attachment 356310 [details] dmraid command log under ubuntu 9.10 beta (64bit) - live dvd
Created attachment 356311 [details] dmraid DUMP under ubuntu 9.10
Created attachment 356312 [details] dmraid commands under rhel 6.0 (64bit)
Created attachment 356313 [details] dmraid DUMP created under RHEL6.0
[root@rx200s4 ~]# dmraid -ay
RAID set "ddf1_4c53492020202020808626820000000037aae4a000000a28" was not activated

[root@rx200s4 ~]# ./dmraid_bz509962a.static -ay -v -v -v
WARN: locking /var/lock/dmraid/.lock
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: ddf1 metadata discovered
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
NOTICE: added /dev/sda to RAID set ".ddf1_disks"
RAID set "ddf1_4c53492020202020808626820000000037aae4a000000a28" was not activated
INFO: Activating GROUP raid set ".ddf1_disks"
WARN: unlocking /var/lock/dmraid/.lock
So this is again a BIOS RAID setup with a stripe set using only one disk, this seems like the same issue as bug 509962 to me.
Hello Hans,

Basically you are correct - but there is at least one difference: in the case of bug 509962 (Promise FastTrak S150 TX4), there are kernels (and distributions) around which work, but in the case of ESB2/ICH with LSI firmware, I have so far found no working Linux distribution.

Has it already been checked whether the device mapping is correct?

Winfrid
Just a remark for Gary -
this is a general problem which occurs on any
redhat enterprise linux ( 5 and 6 ) +
fedora ( 9, 10, 11, 12 )
and any other Linux Distribution I tried ( mainly from Novell )
(In reply to comment #15)
> Just a remark for Gary -
> this is a general problem which occurs on any
> redhat enterprise linux ( 5 and 6 ) +
> fedora ( 9, 10, 11, 12 )
> and any other Linux Distribution I tried ( mainly from Novell )

I have to agree for RHEL5.3. This is because we introduced the -Z|--rm_partitions option to dmraid in 5.4, which removes any partition mappings for the component devices. dmraid (and dmsetup) fail to activate a mapping in case any such partition mappings are active. "dmraid -Z ddf1_..." is mandatory to activate such a RAID set.

Winfrid, have you tried with RHEL5.4 yet?
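The activation order described above can be sketched as follows (a hedged illustration based on this comment, not a verified procedure; the set name is abbreviated and stands for the long ddf1_... name elsewhere in this report):

```shell
# 1. remove the stale partition mappings on the component devices,
#    which otherwise block activation (RHEL5.4+ dmraid)
dmraid -Z ddf1_...
# 2. activate the RAID set itself
dmraid -ay
# 3. recreate partition mappings on top of the activated set
kpartx -a /dev/mapper/ddf1_...
```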
Updated the test box to 5.4, but still no DDF mapping, even though "dmraid --rm_partitions" is run in the initrd. We need Hans to look at the installer logs.
(In reply to comment #16)
> (In reply to comment #15)
> > Just a remark for Gary -
> > this is a general problem which occurs on any
> > redhat enterprise linux ( 5 and 6 ) +
> > fedora ( 9, 10, 11, 12 )
> > and any other Linux Distribution I tried ( mainly from Novell )
> I have to agree for RHEL5.3.
> This is because we introduced the -Z|--rm_partitions to dmraid in 5.4, which
> removes any partition mappings for the component devices. dmraid (and dmsetup)
> fail to activate a mapping in case any such partition mappings are active.
> dmraid -Z ddf1_... is mandatory to activate such RAID set.
> Winfrid,
> have you tried with RHEL5.4 yet ?

Yes, but the configuration has changed a little bit: I tested with RAID0 (2 disks) and RAID1 (2 disks) and no GPT.

The installation (part 1) is OK for both configurations.

But on first boot I get the following error message:
ddf1 wrong # of devices ddf1_.............. [1/2] [sda]
and the same message for sdb.
In the case of RAID0 the kernel crashes.
In the case of RAID1 the boot continues and the system comes up with /boot on /dev/sda1 and / on /dev/sda2.

Just a small question - is there any reason to handle RAID0 with one disk differently from RAID0 with 2 or more disks?
(In reply to comment #19)
> But on first boot I get the following error message
> ddf1 wrong # of devices ddf1_.............. [1/2] [sda] and the same message
> for sdb.
> In case of RAID0 the kernel crashes.

Do you have the vmcore of that crash to analyze?

> In case of RAID1 the boot continues and the system comes up with /boot on
> /dev/sda1 and / on /dev/sda2

That's the issue with component device partitions being active, hence preventing pre-RHEL5.4 dmraid from deactivating those to allow for partition mappings being created via kpartx after dmraid has activated the basic mirror mapping underneath.

> Just a small question - is there any reason to handle a RAID0 with one disk
> different to RAID0 with 2 or more disks ?

There's actually just the fact that for 1 disk, the device-mapper linear target needs to be used, whereas for > 1 disk the striped target needs to be used. That's why I came up with a solution in the generic libdmraid layer rather than in the metadata format handler as initially tried (see the attachment to comment #62 of bug 509962 for the fix).
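The linear-vs-striped distinction shows up directly in the raw device-mapper tables. A sketch (the one-disk line is the actual "dmraid -tay" output from comment #2; the two-disk line is hypothetical, with the length and device names assumed for illustration):

```shell
# one disk: linear target
# <start> <length> linear <device> <offset>
0 154296320 linear /dev/sda 0

# two disks: striped target with stripe count 2 and chunk size 128 sectors
# <start> <length> striped <#stripes> <chunk> <dev1> <off1> <dev2> <off2>
0 308592640 striped 2 128 /dev/sda 0 /dev/sdb 0
```

A striped target with only one stripe is not valid, which is why the one-disk case has to fall back to the linear target.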
No, I have no vmcore.

Just a question - would it be helpful to install Fedora 11/12 or rhel-6? If I should use a boot.iso from rawhide, I need some advice, because the system is behind a proxy and it looks like I have an unsolved problem with apache - but it should work with nfs, using
method=nfs:my-nfs-server.mydomain.net:/os
with everything you see e.g. on ftp://ftp.tu-chemnitz.de/pub/fedora/linux/development/x86_64/os copied to /os
Winfrid, we now have a machine set up where we can reproduce this, and I have remote access to it. I'm currently working on some other things, but later this afternoon, or otherwise tomorrow, I'll investigate this further, so for now we do not need you to do any further testing. Thanks!
Hello Hans,

Just some remarks - I am not sure what Red Hat's policy concerning linear RAID is. Personally I think the only platform where linear RAID is needed is the Promise FastTrak S150, but there may be other platforms as well. Anyway, linear RAID shows similar problems on ESB2/ICH (LSI firmware) as we see with the Promise FastTrak S150.

There is one item that should still be tested on ESB2/ICH: 2 RAID sets on rhel-5.

Winfrid
Thanks to remote access to a machine with the hardware, provided by GSS, we have finally been able to fix this - thanks, Gary!

This is fixed in dmraid-1.0.0.rc16-4.fc12:
http://koji.fedoraproject.org/koji/buildinfo?buildID=136909

Rel-eng: nominating this for F12Blocker, as without this fix installation on LSI BIOS RAID sets will not be possible.
Closing: the fixed dmraid has been tagged into final F12.

--
Fedora Bugzappers volunteer triage team
https://fedoraproject.org/wiki/BugZappers