Description of problem:
A striped logical volume that mounts correctly at boot stops being available after multipath is installed and configured. Removing multipath restores the mount.

Version-Release number of selected component (if applicable):
device-mapper ioctl 4.11.0-ioctl (2006-09-14)
multipath: version 1.0.5
multipath round-robin: version 1.0.0

How reproducible:
Always

Steps to Reproduce:
1. Create a striped logical volume.
2. Format it with a GFS file system.
3. Add an entry in /etc/fstab.
4. Reboot; the mount is present.
5. Install and configure multipath.
6. Reboot; the mount is no longer available.

Actual results:
The mount fails at reboot and the logs show:

device-mapper: ioctl: 4.11.0-ioctl (2006-09-14) initialised: dm-devel
device-mapper: multipath: version 1.0.5 loaded
device-mapper: multipath round-robin: version 1.0.0 loaded
device-mapper: table: 253:13: striped: Couldn't parse stripe destination
device-mapper: ioctl: error adding target to table

Expected results:
The logical volume is built and mounted even when multipath is active.

Additional info:
If we remove multipath, the mount point is there. When we reinstate multipath, the logical volume still exists and lvdisplay shows it as error-free, but the link in /dev/mapper is not created correctly.
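Not part of the original report, but a common cause of this symptom: once multipath claims the underlying paths, LVM can still scan the raw /dev/sd* path devices and build the stripe table against them, which device-mapper then rejects. A hedged sketch of an /etc/lvm/lvm.conf device filter that restricts scanning to the multipath maps — the device-name patterns here are assumptions for illustration, not taken from this system:

```
# /etc/lvm/lvm.conf (illustrative fragment; adjust patterns to your devices)
devices {
    # Accept multipath maps and the internal boot disk, reject all other
    # raw paths so LVM assembles stripes on the multipath devices.
    filter = [ "a|/dev/mapper/mpath.*|", "a|/dev/sda|", "r|.*|" ]
}
```

After changing the filter, the initrd may also need to be regenerated so that early boot activates volumes with the same filter in effect.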
I'm seeing the same behaviour with 2.6.27.5-117.fc10.i686. I uninstalled multipath and saw no change. My LVs are partitioned and carry JFS filesystems (not GFS).

dmesg shows messages like:

device-mapper: table: 253:2: striped: Couldn't parse stripe destination
device-mapper: ioctl: error adding target to table

vgchange shows:

[root@asparagus ~]# vgchange -a y
  device-mapper: reload ioctl failed: No such device or address
  device-mapper: reload ioctl failed: No such device or address
  device-mapper: reload ioctl failed: No such device or address
  device-mapper: reload ioctl failed: No such device or address
  device-mapper: reload ioctl failed: No such device or address
  5 logical volume(s) in volume group "250GBx2" now active
  device-mapper: reload ioctl failed: No such device or address
  device-mapper: reload ioctl failed: No such device or address
  device-mapper: reload ioctl failed: No such device or address
  3 logical volume(s) in volume group "750GBx2" now active
  2 logical volume(s) in volume group "VolGroup00" now active

vgscan finds them OK:

[root@asparagus ~]# vgscan
  Reading all physical volumes. This may take a while...
  Found volume group "250GBx2" using metadata type lvm2
  Found volume group "750GBx2" using metadata type lvm2
  Found volume group "VolGroup00" using metadata type lvm2

pvscan finds them OK:

[root@asparagus ~]# pvscan
  PV /dev/sdf1   VG 250GBx2      lvm2 [232.88 GB / 0 free]
  PV /dev/sdg1   VG 250GBx2      lvm2 [232.88 GB / 0 free]
  PV /dev/sde1   VG 750GBx2      lvm2 [698.63 GB / 0 free]
  PV /dev/sdd1   VG 750GBx2      lvm2 [698.63 GB / 0 free]
  PV /dev/sda2   VG VolGroup00   lvm2 [37.97 GB / 32.00 MB free]
  Total: 5 [1.86 TB] / in use: 5 [1.86 TB] / in no VG: 0 [0 ]

lvscan shows OK as well:

[root@asparagus ~]# lvscan
  ACTIVE   '/dev/250GBx2/250GBx2_Vol00' [100.00 GB] inherit
  ACTIVE   '/dev/250GBx2/250GBx2_Vol01' [100.00 GB] inherit
  ACTIVE   '/dev/250GBx2/250GBx2_Vol02' [100.00 GB] inherit
  ACTIVE   '/dev/250GBx2/250GBx2_Vol03' [100.00 GB] inherit
  ACTIVE   '/dev/250GBx2/250GBx2_Vol04' [65.77 GB] inherit
  ACTIVE   '/dev/750GBx2/750GBx2_Vol00' [500.00 GB] inherit
  ACTIVE   '/dev/750GBx2/750GBx2_Vol01' [500.00 GB] inherit
  ACTIVE   '/dev/750GBx2/750GBx2_Vol02' [397.27 GB] inherit
  ACTIVE   '/dev/VolGroup00/LogVol00' [36.00 GB] inherit
  ACTIVE   '/dev/VolGroup00/LogVol01' [1.94 GB] inherit

Kernel version is:
Linux asparagus 2.6.27.5-117.fc10.i686 #1 SMP Tue Nov 18 12:19:59 EST 2008 i686 athlon i386 GNU/Linux
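A side note, not from the original comment: the "253:2" in the dmesg error is the major:minor of the device-mapper node whose table load failed, which tells you exactly which LV to inspect. A minimal sketch that extracts the minor from such a log line; the root-only follow-up commands are left as comments since they only make sense on the affected system:

```shell
# Extract the dm minor number from the kernel error line.
err="device-mapper: table: 253:2: striped: Couldn't parse stripe destination"
minor=$(echo "$err" | sed -n 's/.*table: [0-9]*:\([0-9]*\):.*/\1/p')
echo "failing dm minor: $minor"

# On the affected box (requires root), resolve the minor to a dm name
# and dump the striped table to see which destination failed to parse:
#   dmsetup info -c --noheadings -o name -j 253 -m "$minor"
#   dmsetup table | grep striped
```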
Still a problem? Are the device-mapper, LVM, and multipath packages up-to-date? Could there be a version mismatch?
Please also update your kernel (the default update setting is configured not to update the kernel, IIRC); there was an unexpected incompatibility between very old RHEL5 kernels and new userspace. (Regarding comment #1: 2.6.27.5-117.fc10.i686 is a Fedora kernel, and running a Fedora kernel with RHEL5 is not a supported configuration at all.)