Bug 834050 - Unable to create striped raid on VGs with 1k extent sizes
Summary: Unable to create striped raid on VGs with 1k extent sizes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.3
Hardware: x86_64
OS: Linux
Priority: high
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Jonathan Earl Brassow
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks: 1067112 BrassowRHEL6Bugs
 
Reported: 2012-06-20 17:57 UTC by Corey Marthaler
Modified: 2014-10-14 08:23 UTC
CC List: 11 users

Fixed In Version: lvm2-2.02.109-2.el6
Doc Type: Bug Fix
Doc Text:
No doc text required. RAID with a stripe size smaller than page_size has always been disallowed, but when the VG extent size was smaller than the page size, the failure was only hit at device activation time instead of being prevented up front.
Clone Of:
: 1067112 (view as bug list)
Environment:
Last Closed: 2014-10-14 08:23:27 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
  System:       Red Hat Product Errata
  ID:           RHBA-2014:1387
  Private:      0
  Priority:     normal
  Status:       SHIPPED_LIVE
  Summary:      lvm2 bug fix and enhancement update
  Last Updated: 2014-10-14 01:39:47 UTC

Description Corey Marthaler 2012-06-20 17:57:43 UTC
Description of problem:
This test case exists because of bugs like 737125 and 750613. It works for raid1 volume creation, but fails for the striped raid types (raid4/5/6).

Recreating PVs/VG with smaller (1K) extent size
hayes-01: pvcreate --setphysicalvolumesize 1G /dev/etherd/e1.1p1 /dev/etherd/e1.1p10 /dev/etherd/e1.1p2 /dev/etherd/e1.1p3 /dev/etherd/e1.1p4 /dev/etherd/e1.1p5 /dev/etherd/e1.1p6 /dev/etherd/e1.1p7 /dev/etherd/e1.1p8 /dev/etherd/e1.1p9
  Writing physical volume data to disk "/dev/etherd/e1.1p1"
  Writing physical volume data to disk "/dev/etherd/e1.1p10"
  Writing physical volume data to disk "/dev/etherd/e1.1p2"
  Writing physical volume data to disk "/dev/etherd/e1.1p3"
  Writing physical volume data to disk "/dev/etherd/e1.1p4"
  Writing physical volume data to disk "/dev/etherd/e1.1p5"
  Writing physical volume data to disk "/dev/etherd/e1.1p6"
  Writing physical volume data to disk "/dev/etherd/e1.1p7"
  Writing physical volume data to disk "/dev/etherd/e1.1p8"
  Writing physical volume data to disk "/dev/etherd/e1.1p9"
hayes-01: vgcreate -s 1K raid_sanity /dev/etherd/e1.1p1 /dev/etherd/e1.1p10 /dev/etherd/e1.1p2 /dev/etherd/e1.1p3 /dev/etherd/e1.1p4 /dev/etherd/e1.1p5 /dev/etherd/e1.1p6 /dev/etherd/e1.1p7 /dev/etherd/e1.1p8 /dev/etherd/e1.1p9

[root@hayes-01 ~]# lvcreate --type raid1 -m1 -n raid_on_1Kextent_vg -L 60M raid_sanity
  Logical volume "raid_on_1Kextent_vg" created

[root@hayes-01 ~]# lvs -a -o +devices
  LV                             VG          Attr     LSize  Copy%  Devices
  raid_on_1Kextent_vg            raid_sanity rwi-a-m- 60.00m 100.00 raid_on_1Kextent_vg_rimage_0(0),raid_on_1Kextent_vg_rimage_1(0)
  [raid_on_1Kextent_vg_rimage_0] raid_sanity iwi-aor- 60.00m        /dev/etherd/e1.1p1(1)
  [raid_on_1Kextent_vg_rimage_1] raid_sanity iwi-aor- 60.00m        /dev/etherd/e1.1p10(1)
  [raid_on_1Kextent_vg_rmeta_0]  raid_sanity ewi-aor-  1.00k        /dev/etherd/e1.1p1(0)
  [raid_on_1Kextent_vg_rmeta_1]  raid_sanity ewi-aor-  1.00k        /dev/etherd/e1.1p10(0)

[root@hayes-01 ~]# lvremove raid_sanity
Do you really want to remove active logical volume raid_on_1Kextent_vg? [y/n]: y
  Logical volume "raid_on_1Kextent_vg" successfully removed

[root@hayes-01 ~]# lvcreate --type raid4 -i2 -n raid_on_1Kextent_vg -L 60M raid_sanity
  Using default stripesize 64.00 KiB
  Reducing requested stripe size 64.00 KiB to maximum, physical extent size 1.00 KiB
  device-mapper: reload ioctl on  failed: Invalid argument
  Failed to activate new LV.

[root@hayes-01 ~]# lvcreate --type raid5 -i2 -n raid_on_1Kextent_vg -L 60M raid_sanity
  Using default stripesize 64.00 KiB
  Reducing requested stripe size 64.00 KiB to maximum, physical extent size 1.00 KiB
  device-mapper: reload ioctl on  failed: Invalid argument
  Failed to activate new LV.

[root@hayes-01 ~]# lvcreate --type raid6 -i3 -n raid_on_1Kextent_vg -L 60M raid_sanity
  Using default stripesize 64.00 KiB
  Reducing requested stripe size 64.00 KiB to maximum, physical extent size 1.00 KiB
  device-mapper: reload ioctl on  failed: Invalid argument
  Failed to activate new LV.

device-mapper: table: 253:9: raid: Chunk size value is too small
device-mapper: ioctl: error adding target to table
device-mapper: table: 253:9: raid: Chunk size value is too small
device-mapper: ioctl: error adding target to table
device-mapper: table: 253:13: raid: Chunk size value is too small
device-mapper: ioctl: error adding target to table


Version-Release number of selected component (if applicable):
2.6.32-278.el6.x86_64
lvm2-2.02.95-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
lvm2-libs-2.02.95-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
lvm2-cluster-2.02.95-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
udev-147-2.41.el6    BUILT: Thu Mar  1 13:01:08 CST 2012
device-mapper-1.02.74-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
device-mapper-libs-1.02.74-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
device-mapper-event-1.02.74-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
device-mapper-event-libs-1.02.74-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
cmirror-2.02.95-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012

Comment 1 RHEL Program Management 2012-07-10 06:01:42 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

Comment 2 RHEL Program Management 2012-07-10 23:59:11 UTC
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development.  This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.

Comment 3 Corey Marthaler 2012-11-01 19:57:40 UTC
Add raid10 to this mix.

[root@taft-01 ~]# lvcreate --type raid10 -i 3 -n raid_on_1Kextent_vg -L 60M raid_sanity
  Using default stripesize 64.00 KiB
  Reducing requested stripe size 64.00 KiB to maximum, physical extent size 1.00 KiB
  device-mapper: reload ioctl on  failed: Invalid argument
  Failed to activate new LV.

Comment 6 Alasdair Kergon 2013-10-08 12:54:23 UTC
Failure on the reload ioctl?  That's never meant to happen when it can be detected in advance and prevented.

Comment 9 Jonathan Earl Brassow 2014-07-23 02:42:58 UTC
The minimum stripe size for RAID targets is 4kiB:
linux/drivers/md/dm-raid.c:
        } else if (!is_power_of_2(value)) {
                rs->ti->error = "Chunk size must be a power of 2";
                return -EINVAL;
        } else if (value < 8) {
                rs->ti->error = "Chunk size value is too small";
                return -EINVAL;
        }

The maximum value for the stripe size in LVM is the PE size.  If the PE size is less than 4 KiB, then there is a problem for striped RAID LVs.

Does this restriction need to exist in LVM?  See lvm2/lib/metadata/lv_manip.c:_validate_stripesize().  If the restriction could be lifted in LVM, RAID with PE sizes < 4 KiB would work.  (Is this even worth fixing?  At the very least, it should be caught in LVM before the ioctl - but where?)
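
For context: dm-raid's chunk (stripe) size parameter is given in 512-byte sectors, so the "value < 8" branch quoted above is the 4 KiB minimum. Below is a minimal standalone sketch of that arithmetic as it applies to this bug's 1 KiB extent size (illustrative only; this is not kernel or lvm2 code):

    #include <stdbool.h>
    #include <stdio.h>

    #define SECTOR_SIZE        512  /* bytes per sector, as used by device-mapper */
    #define MIN_CHUNK_SECTORS    8  /* below this, dm-raid reports "Chunk size value is too small" */

    /* Hypothetical helper: would dm-raid accept a stripe (chunk) size of this many bytes? */
    static bool chunk_size_acceptable(unsigned chunk_bytes)
    {
            unsigned sectors = chunk_bytes / SECTOR_SIZE;
            return sectors >= MIN_CHUNK_SECTORS;    /* 8 sectors * 512 bytes = 4 KiB minimum */
    }

    int main(void)
    {
            /* LVM caps the stripe size at the VG extent (PE) size, so "vgcreate -s 1K"
             * forces the stripe size down to 1 KiB, which the kernel then rejects. */
            printf("1 KiB stripe acceptable: %d\n", chunk_size_acceptable(1024)); /* prints 0 */
            /* An extent size of 4 KiB or more leaves room for an acceptable stripe size. */
            printf("4 KiB stripe acceptable: %d\n", chunk_size_acceptable(4096)); /* prints 1 */
            return 0;
    }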

Comment 10 Jonathan Earl Brassow 2014-08-16 02:16:35 UTC
Fix committed upstream:
commit 4d45302e25f5fe1251bdd8f2c49c4a75a4a31d2e
Author: Jonathan Brassow <jbrassow>
Date:   Fri Aug 15 21:15:34 2014 -0500

    RAID: Fail RAID4/5/6 creation if PE size is less than STRIPE_SIZE_MIN
    
    The maximum stripe size is equal to the volume group PE size.  If that
    size falls below the STRIPE_SIZE_MIN, the creation of RAID 4/5/6 volumes
    becomes impossible.  (The kernel will fail to load a RAID 4/5/6 mapping
    table with a stripe size less than STRIPE_SIZE_MIN.)  So, we report an
    error if it is attempted.
    
    This is very rare because reducing the PE size down that far limits the
    size of the PV below that of modern devices.
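
In other words, the fix moves the check up front in lvm2: if the VG extent size is below the kernel's minimum stripe size, creation of a striped RAID LV is refused before any device-mapper table is loaded. A rough sketch of that kind of guard, using hypothetical names (this is not the actual lvm2 patch):

    #include <stdio.h>

    #define STRIPE_SIZE_MIN_SECTORS 8   /* assumed: 4 KiB expressed in 512-byte sectors */

    /* Hypothetical guard mirroring the behaviour the commit describes: reject
     * striped RAID creation when the PE size (and therefore the largest possible
     * stripe size) is below the kernel's minimum.  Returns 1 if OK, 0 on error. */
    static int extent_size_supports_striped_raid(unsigned extent_size_sectors)
    {
            if (extent_size_sectors < STRIPE_SIZE_MIN_SECTORS) {
                    fprintf(stderr, "The extent size in the volume group is too small "
                                    "to support striped RAID volumes.\n");
                    return 0;
            }
            return 1;
    }

    int main(void)
    {
            /* 1 KiB extents are 2 sectors (rejected); the 4 MiB default is 8192 sectors (accepted). */
            printf("1K extents: %d\n", extent_size_supports_striped_raid(2));
            printf("4M extents: %d\n", extent_size_supports_striped_raid(8192));
            return 0;
    }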

Comment 11 Jonathan Earl Brassow 2014-08-16 02:20:25 UTC
[root@bp-01 lvm2]# for i in 4 5 6; do lvcreate --type raid$i -L 500M -n lv vg; done
  The extent size in volume group vg is too small to support striped RAID volumes.
  The extent size in volume group vg is too small to support striped RAID volumes.
  The extent size in volume group vg is too small to support striped RAID volumes.

Comment 13 Nenad Peric 2014-08-20 11:33:56 UTC
It seems I can still see the same issue:

[root@virt-065 ~]# lvcreate --type raid4 -i2 -n raid_on_1Kextent_vg -L 60M raid_sanity
  Using default stripesize 64.00 KiB
  Reducing requested stripe size 64.00 KiB to maximum, physical extent size 1.00 KiB.
  Error locking on node virt-065: device-mapper: reload ioctl on  failed: Invalid argument
  Failed to activate new LV.
[root@virt-065 ~]# lvcreate --type raid5 -i2 -n raid_on_1Kextent_vg -L 60M raid_sanity
  Using default stripesize 64.00 KiB
  Reducing requested stripe size 64.00 KiB to maximum, physical extent size 1.00 KiB.
  Error locking on node virt-065: device-mapper: create ioctl on raid_sanity-raid_on_1Kextent_vg_rmeta_0 failed: Device or resource busy
  Failed to activate raid_sanity/raid_on_1Kextent_vg_rmeta_0 for clearing
[root@virt-065 ~]# dmsetup ls
raid_sanity-raid_on_1Kextent_vg_rmeta_0	(253:2)
raid_sanity-raid_on_1Kextent_vg_rimage_2	(253:7)
raid_sanity-raid_on_1Kextent_vg_rimage_1	(253:5)
raid_sanity-raid_on_1Kextent_vg_rimage_0	(253:3)
vg_virt065-lv_swap	(253:1)
raid_sanity-raid_on_1Kextent_vg_rmeta_2	(253:6)
vg_virt065-lv_root	(253:0)
raid_sanity-raid_on_1Kextent_vg_rmeta_1	(253:4)
[root@virt-065 ~]# lvs -a
  LV                             VG          Attr       LSize   Data%  Meta%  Move Log Cpy%Sync Convert
  raid_on_1Kextent_vg            raid_sanity rwi---r---       0
  raid_on_1Kextent_vg_rmeta_0    raid_sanity ewi---r---   1.00k
  raid_on_1Kextent_vg_rmeta_1    raid_sanity ewi---r---   1.00k
  raid_on_1Kextent_vg_rmeta_2    raid_sanity ewi---r---   1.00k
  [raid_on_1Kextent_vg_rimage_0] raid_sanity Iwi---r---  30.00m
  [raid_on_1Kextent_vg_rimage_1] raid_sanity Iwi---r---  30.00m
  [raid_on_1Kextent_vg_rimage_2] raid_sanity Iwi---r---  30.00m
  lv_root                        vg_virt065  -wi-ao----   6.71g
  lv_swap                        vg_virt065  -wi-ao---- 816.00m


If I try to remove these devices now I get errors:

[root@virt-065 ~]# lvremove raid_sanity
  Logical volume "raid_on_1Kextent_vg" successfully removed
  Can't remove logical volume raid_on_1Kextent_vg_rmeta_0 used as RAID device
  Can't remove logical volume raid_on_1Kextent_vg_rmeta_1 used as RAID device
  Can't remove logical volume raid_on_1Kextent_vg_rmeta_2 used as RAID device
[root@virt-065 ~]# lvs
  LV                          VG          Attr       LSize Data%  Meta%  Move Log Cpy%Sync Convert
  raid_on_1Kextent_vg_rmeta_0 raid_sanity -wi------- 1.00k                                        
  raid_on_1Kextent_vg_rmeta_1 raid_sanity -wi------- 1.00k                                        
  raid_on_1Kextent_vg_rmeta_2 raid_sanity -wi------- 1.00k                                        
  lv_root                     vg_virt065  -wi-ao---- 6.71g                                        
  lv_swap                     vg_virt065  -wi-ao---- 816.00m                                        
[root@virt-065 ~]# dmsetup ls
raid_sanity-raid_on_1Kextent_vg_rmeta_0	(253:2)
raid_sanity-raid_on_1Kextent_vg_rimage_2	(253:7)
raid_sanity-raid_on_1Kextent_vg_rimage_1	(253:5)
raid_sanity-raid_on_1Kextent_vg_rimage_0	(253:3)
vg_virt065-lv_swap	(253:1)
raid_sanity-raid_on_1Kextent_vg_rmeta_2	(253:6)
vg_virt065-lv_root	(253:0)
raid_sanity-raid_on_1Kextent_vg_rmeta_1	(253:4)


[root@virt-065 ~]# lvremove -ff raid_sanity
  Logical volume "raid_on_1Kextent_vg_rmeta_0" successfully removed
  Logical volume "raid_on_1Kextent_vg_rmeta_1" successfully removed
  Logical volume "raid_on_1Kextent_vg_rmeta_2" successfully removed
[root@virt-065 ~]# dmsetup ls
raid_sanity-raid_on_1Kextent_vg_rmeta_0	(253:2)
raid_sanity-raid_on_1Kextent_vg_rimage_2	(253:7)
raid_sanity-raid_on_1Kextent_vg_rimage_1	(253:5)
raid_sanity-raid_on_1Kextent_vg_rimage_0	(253:3)
vg_virt065-lv_swap	(253:1)
raid_sanity-raid_on_1Kextent_vg_rmeta_2	(253:6)
vg_virt065-lv_root	(253:0)
raid_sanity-raid_on_1Kextent_vg_rmeta_1	(253:4)

[root@virt-065 ~]# lvs
  LV      VG         Attr       LSize Data%  Meta%  Move Log Cpy%Sync Convert
  lv_root vg_virt065 -wi-ao---- 6.71g                                        
  lv_swap vg_virt065 -wi-ao---- 816.00m                                        

Now I have a "dirty" device mapper as well. 
[root@virt-065 ~]# vgs
  VG          #PV #LV #SN Attr   VSize   VFree  
  raid_sanity   8   0   0 wz--n- 119.98g 119.90g
  vg_virt065    1   2   0 wz--n-   7.51g      0 
[root@virt-065 ~]# lvs -a
  LV                             VG          Attr       LSize  Data%  Meta%  Move Log Cpy%Sync Convert
  [raid_on_1Kextent_vg_rimage_0] raid_sanity -wi------- 30.00m                                        
  [raid_on_1Kextent_vg_rimage_1] raid_sanity -wi------- 30.00m                                        
  [raid_on_1Kextent_vg_rimage_2] raid_sanity -wi------- 30.00m                                        
  lv_root                        vg_virt065  -wi-ao----  6.71g                                        
  lv_swap                        vg_virt065  -wi-ao---- 816.00m                                

This is all on:
lvm2-2.02.109-1.el6    BUILT: Tue Aug  5 17:36:23 CEST 2014
lvm2-libs-2.02.109-1.el6    BUILT: Tue Aug  5 17:36:23 CEST 2014
lvm2-cluster-2.02.109-1.el6    BUILT: Tue Aug  5 17:36:23 CEST 2014
udev-147-2.57.el6    BUILT: Thu Jul 24 15:48:47 CEST 2014
device-mapper-1.02.88-1.el6    BUILT: Tue Aug  5 17:36:23 CEST 2014
device-mapper-libs-1.02.88-1.el6    BUILT: Tue Aug  5 17:36:23 CEST 2014
device-mapper-event-1.02.88-1.el6    BUILT: Tue Aug  5 17:36:23 CEST 2014
device-mapper-event-libs-1.02.88-1.el6    BUILT: Tue Aug  5 17:36:23 CEST 2014
device-mapper-persistent-data-0.3.2-1.el6    BUILT: Fri Apr  4 15:43:06 CEST 2014
cmirror-2.02.109-1.el6    BUILT: Tue Aug  5 17:36:23 CEST 2014

Comment 14 Nenad Peric 2014-08-20 11:57:40 UTC
I did not notice that the new packages are not actually in the nightly builds I was using.
I will re-test with the new packages.

Comment 15 Nenad Peric 2014-08-20 12:26:21 UTC
[root@virt-065 ~]# vgcreate -s 1K raid_sanity /dev/sd{b..i}1
  Clustered volume group "raid_sanity" successfully created
[root@virt-065 ~]# lvcreate --type raid4 -i2 -n raid_on_1Kextent_vg -L 60M raid_sanity
  Using default stripesize 64.00 KiB
  The extent size in volume group raid_sanity is too small to support striped RAID volumes.
[root@virt-065 ~]# lvcreate --type raid5 -i2 -n raid_on_1Kextent_vg -L 60M raid_sanity
  Using default stripesize 64.00 KiB
  The extent size in volume group raid_sanity is too small to support striped RAID volumes.


[root@virt-065 ~]# lvcreate --type raid1 -m 1 -n radi_lv -L5G raid_sanity
  Logical volume "radi_lv" created



marking VERIFIED with:

lvm2-2.02.109-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
lvm2-libs-2.02.109-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
lvm2-cluster-2.02.109-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
udev-147-2.57.el6    BUILT: Thu Jul 24 15:48:47 CEST 2014
device-mapper-1.02.88-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
device-mapper-libs-1.02.88-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
device-mapper-event-1.02.88-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
device-mapper-event-libs-1.02.88-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014
device-mapper-persistent-data-0.3.2-1.el6    BUILT: Fri Apr  4 15:43:06 CEST 2014
cmirror-2.02.109-2.el6    BUILT: Tue Aug 19 16:32:25 CEST 2014

Comment 16 errata-xmlrpc 2014-10-14 08:23:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1387.html

