I did a quick check for device-mapper bugs and did not find anything related to this issue; sorry if it is a duplicate. To reproduce, just execute `dmraid -ay` on a system that has RAID1. On my system it fails with the following errors:

<snip>
device-mapper: table: 253:0: mirror: Wrong number of mirror arguments
device-mapper: ioctl: error adding target to table.
</snip>

This Python script reproduces the same errors:

<snip>
#!/usr/bin/python
import block

# Discover the RAID sets pyblock/dmraid can see and activate the first one.
rs = block.getRaidSets([])
rs0 = rs[0]
rs0.activate(mknod=True)
</snip>

This breaks installation to RAID1 devices, which is why I set all the flags to '?'.
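For anyone else debugging this: a quick way to see the exact mapping table dmraid hands to the kernel (i.e. the mirror argument list the kernel is rejecting) is a test-mode activation. The commands below are a minimal sketch; the tables they print will vary with the array:

<snip>
# Print the device-mapper tables dmraid would load, without activating anything:
dmraid -ay -t

# List the targets (and target versions) the running kernel actually provides,
# for comparison with what dmraid generates:
dmsetup targets
</snip>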
Joel, you need kernel 123.el5 and dmraid-1.0.0.rc13-16.el5.
Everyone: I'm terribly sorry about this. This *does* have to do with the fact that the brno mirror is out of sync. I have run tests with the current kernel and dmraid, and everything works fine. Again, sorry for the noise.
I disagree. What if someone installs the update, then has some kernel problem and wants to reboot using the previous (working) 5.2 kernel without downgrading all their packages? This is ABI breakage that needs mitigating - a blocker. LVM2 should not suffer from this because, in conjunction with libdevmapper, it detects which version of the kernel module it is talking to and supplies the appropriate arguments. dmraid should do the same.
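To illustrate the detection described above (a sketch of the general approach, not dmraid's actual code): the kernel advertises the version of each loaded target through the device-mapper ioctl interface, and userspace can key its argument format off that. The version number shown is illustrative:

<snip>
# Ask the running kernel which mirror target version it provides;
# userspace can emit the extended argument format only when the
# advertised version supports it:
dmsetup targets | grep mirror
# e.g.  mirror           v1.0.3
</snip>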
But is this actually a kernel issue requiring a kernel update as requested above, or just a userspace regression introduced after 5.2 and fixed in 1.0.0.rc13-15?
Alasdair, with respect to your comment #7: dmraid-1.0.0.rc13-16.el5 works on both current and older RHEL5 kernels; there is no KABI breakage. In dmraid-1.0.0.rc13-14.el5 I mistakenly introduced a regression because of the differing KABIs in upstream and RHEL with respect to mirror error-handling activation; this has been fixed in the dmraid-1.0.0.rc13-16.el5 recommended in comment #5. The 123.el5 kernel provides the extended RAID0/1 status output and needs testing to avoid any related regressions with dmraid, lvm2, etc. I think that addresses your question in comment #9 as well.
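For the record, the mismatch looks roughly like this (devices, sizes and region size below are illustrative; the feature name is the block_on_error mentioned later in this bug): a mirror table may carry a trailing feature-argument block, and a kernel whose mirror target does not parse that block rejects the load with exactly the "Wrong number of mirror arguments" error reported above:

<snip>
# Mirror table without feature arguments, accepted by older mirror targets:
0 976768 mirror core 2 1024 nosync 2 /dev/sda1 0 /dev/sdb1 0

# The same table with a trailing feature-argument block ("1 block_on_error");
# a mirror target that does not understand feature arguments sees an argument
# count it cannot account for and fails the table load:
0 976768 mirror core 2 1024 nosync 2 /dev/sda1 0 /dev/sdb1 0 1 block_on_error
</snip>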
Closing this, based on comment 10 (this problem was introduced early in 5.3 and is now fixed in 5.3; dmraid remains compatible with earlier RHEL 5 kernels) and comment 12 (earlier RHEL 5 kernels have block_on_error, so dmraid remains compatible). Feel free to follow up with additional questions and reopen if necessary.