Description of problem:
I installed Fedora 11 after backing up the data on the system (full wipe install). The system was previously running Fedora 10 with RAID1 across the two disks, both 36GB SCSI. During the install I rejiggered the partitions, and anaconda wrote new RAID tags/flags to the disks. The installation proceeded with no problems: /boot on /dev/dm0 (ext3), swap on /dev/dm1, and / on LVM on /dev/dm2.

When I run dmraid -r, it says 'no raid disks'. However:

# dmesg | grep raid
raid1: raid set md2 active with 2 out of 2 mirrors
raid1: raid set md1 active with 2 out of 2 mirrors
raid1: raid set md0 active with 2 out of 2 mirrors

but

# dmesg | grep partition
md2: unknown partition table
md1: unknown partition table
md0: unknown partition table

I would like to restore the backed-up data to this system so it can be returned to full service, but first I need to know that the RAID is working.

Version-Release number of selected component (if applicable):
# dmraid -V
dmraid version: 1.0.0.rc15 (2008-09-17) debug
dmraid library version: 1.0.0.rc15 (2008.09.17)
device-mapper version: 4.14.0

How reproducible:
Haven't seen anything different.

Steps to Reproduce:
1. Wipe-install Fedora 11, repartitioning the disks as RAID
2. Boot up Fedora 11
3. Check whether the RAID is actually working

Actual results:
As reported above.

Expected results:
A reliable, confidence-enhancing report on the condition of the RAIDed drives.

Additional info:
Continuing research:

The best way to determine whether the RAID is operating is to disable each disk in turn and see if the system can still boot and run correctly.

I have another, slightly different system that was also wipe-installed with Fedora 11; the dmraid and dmesg output above is identical on it. (I chose this system because it is closer, by my right foot rather than on the other side of the desk, and because its SCA SCSI disks can be pulled straight out of their sockets, which makes them easier to disable than on the other, much older system.)

When I powered down, pulled out the lowest disk, and rebooted, I got:

Hard Disk Error

No grub, no nothing. Then I powered down, pushed the lowest disk back in, pulled out the highest disk, and rebooted. This time I got:

grub>

[This seems to indicate that anaconda did NOT write grub to the MBR of BOTH raided disks. I will check on this later.]

After fiddling around with grub and finally entering:

grub> root (hd0,0)
grub> kernel /vmlinuz-2.6.29.5-191.fc11.i686.PAE raid=noautodetect ro root=/dev/mapper/rootvg-root rhgb
grub> initrd /initrd-2.6.29.5-191.fc11.i686.PAE.img
grub> boot

it did boot up and looked fine. (The rootvg-root name carried over from the partitions left unchanged from Fedora 10; otherwise it was a wipe install.)

# dmesg | grep raid
Kernel command line: raid=noautodetect ro root=/dev/mapper/rootvg-root rhgb
md: raid1 personality registered for level 1
raid1: raid set md2 active with 1 out of 2 mirrors
raid1: raid set md1 active with 1 out of 2 mirrors
raid1: raid set md0 active with 1 out of 2 mirrors

It looks like the RAID is doing what it should do. I have a couple more tests to make.
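If anaconda really did write grub to only one MBR, grub legacy (the F11-era bootloader) can be installed onto the second disk from the grub shell. A sketch only; the device name /dev/sdb and the partition layout below are assumptions about this system, not confirmed:

```
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
```

The device line temporarily maps the second disk as (hd0) so that the stage1 written to its MBR refers to itself correctly when that disk later boots as the first (surviving) disk.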
Hmm. Bringing up the system with both disks operating, it boots normally. But:

# dmesg | grep raid
md: raid1 personality registered for level 1
raid1: raid set md2 active with 1 out of 2 mirrors
raid1: raid set md1 active with 2 out of 2 mirrors
raid1: raid set md0 active with 1 out of 2 mirrors

The md1 partition is the swap area; it seems to be treated differently. RAID is not operating on two critical partitions. It would be nice if there were some sort of notification about this somewhere, without having to check explicitly. Maybe there is a log-parser tool out there somewhere.

It is also curious that the rebuild of the 'failed' drive did not start automatically, yes?

Still continuing to research.
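In the meantime, a degraded array can be spotted directly in /proc/mdstat, where a healthy two-disk raid1 shows "[2/2] [UU]" and a degraded one shows "[2/1] [U_]" (one underscore per missing mirror). A minimal sketch of a check that could be run from cron; it is not an official tool, just a grep over the kernel's mdstat format:

```shell
#!/bin/sh
# check_md: warn if any md array in an mdstat-format file is degraded.
# Defaults to the live /proc/mdstat; a file argument is accepted for testing.
check_md() {
    mdstat="${1:-/proc/mdstat}"
    # An underscore inside the member-status brackets means a missing mirror.
    if grep -qE '\[U*_[U_]*\]' "$mdstat"; then
        echo "WARNING: degraded md array(s) in $mdstat:"
        grep -E '\[U*_[U_]*\]' "$mdstat"
        return 1
    fi
    echo "OK: all md arrays have their full complement of members"
}
```

Calling `check_md` with no argument inspects the running system; a nonzero return makes it easy to wire into cron or a monitoring script.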
OK, the reason dmraid -r gives 'no raid disks' is that dmraid is for hardware RAID systems, and these two systems don't have hardware RAID. (I have another system, not yet upgraded, which does have hardware (motherboard) RAID, and I have been working with that in the recent past.)

----

For software RAID, the pertinent utility is mdadm, which currently gives:

# mdadm --misc --detail /dev/md0
...
...
    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8        1        1      active sync   /dev/sda1

However, there are still the other issues:

no boot when a particular disk is removed from the array
no automatic notification that the RAID is running degraded

Perhaps new bugs need to be opened for these.
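On the notification issue: mdadm itself has a monitor mode that can report degraded or failed arrays. A sketch, assuming local mail delivery to root is working; on Fedora the mdmonitor service runs mdadm --monitor against /etc/mdadm.conf:

```
# /etc/mdadm.conf (fragment)
# Mail this address on events such as DegradedArray or Fail
MAILADDR root
# ARRAY lines can be generated with: mdadm --detail --scan
```

With this in place, `mdadm --monitor --scan` (or the mdmonitor init service) watches all configured arrays and sends mail when one degrades, which would have flagged the md2/md0 situation above without an explicit check.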
FYI: dmraid is for software RAID too, *but* it supports the various ATARAID formats and DDF, not MD-type software RAID. See "dmraid -l" for the supported formats.