Description of problem:

[root@hayes-01 bin]# pvscan
  PV /dev/etherd/e1.1p9    VG raid_sanity   lvm2 [908.23 GiB / 908.23 GiB free]
  PV /dev/etherd/e1.1p8    VG raid_sanity   lvm2 [908.23 GiB / 908.23 GiB free]
  PV /dev/etherd/e1.1p7    VG raid_sanity   lvm2 [908.23 GiB / 908.23 GiB free]
  PV /dev/etherd/e1.1p6    VG raid_sanity   lvm2 [908.23 GiB / 908.23 GiB free]
  PV /dev/etherd/e1.1p5    VG raid_sanity   lvm2 [908.23 GiB / 908.23 GiB free]
  PV /dev/etherd/e1.1p4    VG raid_sanity   lvm2 [908.23 GiB / 908.23 GiB free]
  PV /dev/etherd/e1.1p3    VG raid_sanity   lvm2 [908.23 GiB / 908.23 GiB free]
  PV /dev/etherd/e1.1p2    VG raid_sanity   lvm2 [908.23 GiB / 908.23 GiB free]
  PV /dev/etherd/e1.1p10   VG raid_sanity   lvm2 [908.23 GiB / 908.23 GiB free]
  PV /dev/etherd/e1.1p1    VG raid_sanity   lvm2 [908.23 GiB / 908.23 GiB free]

[root@hayes-01 bin]# lvcreate --test --type raid1 -m 1 -n testraid -L 500M raid_sanity
  Test mode: Metadata will NOT be updated and volumes will not be (de)activated.
  Failed to activate raid_sanity/testraid_rmeta_0 for clearing
[root@hayes-01 bin]# echo $?
5

[root@hayes-01 bin]# lvcreate --test --type raid5 -i 2 -n testraid -L 500M raid_sanity
  Test mode: Metadata will NOT be updated and volumes will not be (de)activated.
  Using default stripesize 64.00 KiB
  Rounding size (125 extents) up to stripe boundary size (126 extents)
  Failed to activate raid_sanity/testraid_rmeta_0 for clearing
[root@hayes-01 bin]# echo $?
5

Version-Release number of selected component (if applicable):
2.6.32-278.el6.x86_64

lvm2-2.02.95-10.el6                        BUILT: Fri May 18 03:26:00 CDT 2012
lvm2-libs-2.02.95-10.el6                   BUILT: Fri May 18 03:26:00 CDT 2012
lvm2-cluster-2.02.95-10.el6                BUILT: Fri May 18 03:26:00 CDT 2012
udev-147-2.41.el6                          BUILT: Thu Mar 1 13:01:08 CST 2012
device-mapper-1.02.74-10.el6               BUILT: Fri May 18 03:26:00 CDT 2012
device-mapper-libs-1.02.74-10.el6          BUILT: Fri May 18 03:26:00 CDT 2012
device-mapper-event-1.02.74-10.el6         BUILT: Fri May 18 03:26:00 CDT 2012
device-mapper-event-libs-1.02.74-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
cmirror-2.02.95-10.el6                     BUILT: Fri May 18 03:26:00 CDT 2012
This request was not resolved in time for the current release. Red Hat invites you to ask your support representative to propose this request, if still desired, for consideration in the next release of Red Hat Enterprise Linux.
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development. This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.
Does this work for the 'mirror' segment type?
Yes, it does.

[root@taft-01 ~]# lvcreate --test --type mirror -m 1 -n testmirror -L 500M taft
  Test mode: Metadata will NOT be updated and volumes will not be (de)activated.
  Logical volume "testmirror" created
[root@taft-01 ~]# lvcreate --test --type mirror -m 2 -n testmirror -L 500M taft
  Test mode: Metadata will NOT be updated and volumes will not be (de)activated.
  Logical volume "testmirror" created
commit b49b98d50c558a142d0a2ef55279eea00bd36eba
Author: Jonathan Brassow <jbrassow>
Date:   Wed Sep 5 14:32:06 2012 -0500

    RAID: '--test' should not cause a valid create command to fail

    It is necessary when creating a RAID LV to clear the new metadata areas.
    Failure to do so could result in a prepopulated bitmap that would cause
    the new array to skip syncing portions of the array.  It is a requirement
    that the metadata LVs be activated and cleared in the process of creating.
    However, in test mode this requirement should be lifted - no new LVs
    should be created or written to.
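The logic of the fix can be illustrated with a minimal sketch (hypothetical function and variable names, not the actual lvm2 source): in test mode no metadata LV has been created on disk, so the activate-and-clear step must be skipped and treated as success rather than reported as a failure.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical stand-in for lvm2's test-mode flag (set by '--test'). */
static bool test_mode = true;

/* Sketch of clearing a new RAID metadata LV. Returns 1 on success, 0 on
 * failure, mirroring the int-as-boolean convention common in lvm2 code. */
static int clear_rmeta_lv(const char *lv_name)
{
    if (test_mode) {
        /* Nothing was actually created, so there is nothing to activate
         * or wipe; succeed instead of failing the whole create command. */
        printf("Test mode: skipping clearing of %s\n", lv_name);
        return 1;
    }
    /* Real path: activate the LV and zero its metadata area so a stale
     * bitmap cannot cause the new array to skip syncing regions. */
    printf("Activating and wiping %s\n", lv_name);
    return 1;
}

int main(void)
{
    /* Exit 0 on success; before the fix, test mode took the failure path
     * and lvcreate exited with status 5. */
    return clear_rmeta_lv("raid_sanity/testraid_rmeta_0") ? 0 : 5;
}

With the guard in place, `lvcreate --test` reports the volume as created and exits 0, matching the verified behavior in the comments below.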
Unit tests:

[root@hayes-01 lvm2]# lvs
  LV      VG         Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
  lv_home vg_hayes01 -wi-ao-- 16.14g
  lv_root vg_hayes01 -wi-ao-- 50.00g
  lv_swap vg_hayes01 -wi-ao--  7.88g

[root@hayes-01 lvm2]# lvcreate --type raid4 -L 200M -i 2 -n lv vg --test
  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
  Using default stripesize 64.00 KiB
  Logical volume "lv" created

[root@hayes-01 lvm2]# lvcreate --type raid5 -L 200M -i 2 -n lv vg --test
  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
  Using default stripesize 64.00 KiB
  Logical volume "lv" created

[root@hayes-01 lvm2]# lvcreate --type raid6 -L 200M -i 2 -n lv vg --test
  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
  Using default stripesize 64.00 KiB
  Number of stripes must be at least 3 for raid6
  Run `lvcreate --help' for more information.

[root@hayes-01 lvm2]# lvcreate --type raid6 -L 200M -i 3 -n lv vg --test
  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
  Using default stripesize 64.00 KiB
  Rounding size (50 extents) up to stripe boundary size (51 extents)
  Logical volume "lv" created

[root@hayes-01 lvm2]# lvcreate --type raid10 -L 200M -m 1 -i 3 -n lv vg --test
  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
  Using default stripesize 64.00 KiB
  Rounding size (50 extents) up to stripe boundary size (51 extents)
  Logical volume "lv" created
Fix verified in the latest rpms.

2.6.32-339.el6.x86_64

lvm2-2.02.98-3.el6                         BUILT: Mon Nov 5 06:45:48 CST 2012
lvm2-libs-2.02.98-3.el6                    BUILT: Mon Nov 5 06:45:48 CST 2012
lvm2-cluster-2.02.98-3.el6                 BUILT: Mon Nov 5 06:45:48 CST 2012
udev-147-2.43.el6                          BUILT: Thu Oct 11 05:59:38 CDT 2012
device-mapper-1.02.77-3.el6                BUILT: Mon Nov 5 06:45:48 CST 2012
device-mapper-libs-1.02.77-3.el6           BUILT: Mon Nov 5 06:45:48 CST 2012
device-mapper-event-1.02.77-3.el6          BUILT: Mon Nov 5 06:45:48 CST 2012
device-mapper-event-libs-1.02.77-3.el6     BUILT: Mon Nov 5 06:45:48 CST 2012
cmirror-2.02.98-3.el6                      BUILT: Mon Nov 5 06:45:48 CST 2012
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0501.html