Bug 1404007 - LVM RAID: creating striped RAID types can ignore '-R|--regionsize' argument when using an odd -i stripe number
Summary: LVM RAID: creating striped RAID types can ignore '-R|--regionsize' argument when using an odd -i stripe number
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: 7.5
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1469559
 
Reported: 2016-12-12 20:26 UTC by Corey Marthaler
Modified: 2021-09-03 12:38 UTC
CC: 9 users

Fixed In Version: lvm2-2.02.175-2.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 15:18:32 UTC
Target Upstream Version:
Embargoed:




Links:
  Red Hat Product Errata RHEA-2018:0853 (last updated 2018-04-10 15:19:55 UTC)

Description Corey Marthaler 2016-12-12 20:26:29 UTC
Description of problem:
After adding a regression test case for lvconvert bug 1394427, I added one for lvcreate as well and learned that region sizes over 8m are also ignored for the striped raid types (raid4/5/6). Is this expected behavior?


RAID1 
[root@host-082 ~]# lvcreate  --type raid1 -n region_check.4.00m -L 1G -R 4.00m raid_sanity
  Logical volume "region_check.4.00m" created.
[root@host-082 ~]# lvcreate  --type raid1 -n region_check.16.00m -L 1G -R 16.00m raid_sanity
  Logical volume "region_check.16.00m" created.
[root@host-082 ~]# lvcreate  --type raid1 -n region_check.32.00m -L 1G -R 32.00m raid_sanity
  Logical volume "region_check.32.00m" created.
[root@host-082 ~]# lvcreate  --type raid1 -n region_check.64.00m -L 1G -R 64.00m raid_sanity
  Logical volume "region_check.64.00m" created.
[root@host-082 ~]# lvcreate  --type raid1 -n region_check.128.00m -L 1G -R 128.00m raid_sanity
  Logical volume "region_check.128.00m" created.
[root@host-082 ~]# lvcreate  --type raid1 -n region_check.256.00m -L 1G -R 256.00m raid_sanity
  Logical volume "region_check.256.00m" created.
[root@host-082 ~]# lvcreate  --type raid1 -n region_check.512.00m -L 1G -R 512.00m raid_sanity
  Logical volume "region_check.512.00m" created.

[root@host-082 ~]# lvs -o lv_name,regionsize
  LV                   Region 
  region_check.128.00m 128.00m
  region_check.16.00m   16.00m
  region_check.256.00m 256.00m
  region_check.32.00m   32.00m
  region_check.4.00m     4.00m
  region_check.512.00m 512.00m
  region_check.64.00m   64.00m


RAID4
[root@host-082 ~]# lvcreate  --type raid4 -n region_check.4.00m -L 1G -R 4.00m raid_sanity
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size 1.05 GiB (270 extents).
  Logical volume "region_check.4.00m" created.
[root@host-082 ~]# lvcreate  --type raid4 -n region_check.16.00m -L 1G -R 16.00m raid_sanity
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size 1.05 GiB (270 extents).
  Using reduced mirror region size of 16384 sectors.
  Logical volume "region_check.16.00m" created.
[root@host-082 ~]# lvcreate  --type raid4 -n region_check.32.00m -L 1G -R 32.00m raid_sanity
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size 1.05 GiB (270 extents).
  Using reduced mirror region size of 16384 sectors.
  Logical volume "region_check.32.00m" created.
[root@host-082 ~]# lvcreate  --type raid4 -n region_check.64.00m -L 1G -R 64.00m raid_sanity
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size 1.05 GiB (270 extents).
  Using reduced mirror region size of 16384 sectors.
  Logical volume "region_check.64.00m" created.
[root@host-082 ~]# lvcreate  --type raid4 -n region_check.128.00m -L 1G -R 128.00m raid_sanity
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size 1.05 GiB (270 extents).
  Using reduced mirror region size of 16384 sectors.
  Logical volume "region_check.128.00m" created.
[root@host-082 ~]# lvcreate  --type raid4 -n region_check.256.00m -L 1G -R 256.00m raid_sanity
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size 1.05 GiB (270 extents).
  Using reduced mirror region size of 16384 sectors.
  Logical volume "region_check.256.00m" created.
[root@host-082 ~]# lvcreate  --type raid4 -n region_check.512.00m -L 1G -R 512.00m raid_sanity
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size 1.05 GiB (270 extents).
  Using reduced mirror region size of 16384 sectors.
  Logical volume "region_check.512.00m" created.

[root@host-082 ~]# lvs -o lv_name,regionsize
  LV                   Region
  region_check.128.00m  8.00m
  region_check.16.00m   8.00m
  region_check.256.00m  8.00m
  region_check.32.00m   8.00m
  region_check.4.00m    4.00m
  region_check.512.00m  8.00m
  region_check.64.00m   8.00m


RAID5
[root@host-082 ~]# lvcreate  --type raid5 -n region_check.4.00m -L 1G -R 4.00m raid_sanity
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size 1.05 GiB (270 extents).
  Logical volume "region_check.4.00m" created.
[root@host-082 ~]# lvcreate  --type raid5 -n region_check.16.00m -L 1G -R 16.00m raid_sanity
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size 1.05 GiB (270 extents).
  Using reduced mirror region size of 16384 sectors.
  Logical volume "region_check.16.00m" created.
[root@host-082 ~]# lvcreate  --type raid5 -n region_check.32.00m -L 1G -R 32.00m raid_sanity
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size 1.05 GiB (270 extents).
  Using reduced mirror region size of 16384 sectors.
  Logical volume "region_check.32.00m" created.
[root@host-082 ~]# lvcreate  --type raid5 -n region_check.64.00m -L 1G -R 64.00m raid_sanity
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size 1.05 GiB (270 extents).
  Using reduced mirror region size of 16384 sectors.
  Logical volume "region_check.64.00m" created.
[root@host-082 ~]# lvcreate  --type raid5 -n region_check.128.00m -L 1G -R 128.00m raid_sanity
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size 1.05 GiB (270 extents).
  Using reduced mirror region size of 16384 sectors.
  Logical volume "region_check.128.00m" created.
[root@host-082 ~]# lvcreate  --type raid5 -n region_check.256.00m -L 1G -R 256.00m raid_sanity 
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size 1.05 GiB (270 extents).
  Using reduced mirror region size of 16384 sectors.
  Logical volume "region_check.256.00m" created.
[root@host-082 ~]# lvcreate  --type raid5 -n region_check.512.00m -L 1G -R 512.00m raid_sanity
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size 1.05 GiB (270 extents).
  Using reduced mirror region size of 16384 sectors.
  Logical volume "region_check.512.00m" created.

[root@host-082 ~]# lvs -o lv_name,regionsize
  LV                   Region
  region_check.128.00m  8.00m
  region_check.16.00m   8.00m
  region_check.256.00m  8.00m
  region_check.32.00m   8.00m
  region_check.4.00m    4.00m
  region_check.512.00m  8.00m
  region_check.64.00m   8.00m


RAID10 
[root@host-082 ~]# lvcreate  --type raid10 -n region_check.4.00m -L 1G -R 4.00m raid_sanity
  Logical volume "region_check.4.00m" created.
[root@host-082 ~]# lvcreate  --type raid10 -n region_check.16.00m -L 1G -R 16.00m raid_sanity
  Logical volume "region_check.16.00m" created.
[root@host-082 ~]# lvcreate  --type raid10 -n region_check.32.00m -L 1G -R 32.00m raid_sanity
  Logical volume "region_check.32.00m" created.
[root@host-082 ~]# lvcreate  --type raid10 -n region_check.64.00m -L 1G -R 64.00m raid_sanity
  Logical volume "region_check.64.00m" created.
[root@host-082 ~]# lvcreate  --type raid10 -n region_check.128.00m -L 1G -R 128.00m raid_sanity
  Logical volume "region_check.128.00m" created.
[root@host-082 ~]# lvcreate  --type raid10 -n region_check.256.00m -L 1G -R 256.00m raid_sanity
  Logical volume "region_check.256.00m" created.
[root@host-082 ~]# lvcreate  --type raid10 -n region_check.512.00m -L 1G -R 512.00m raid_sanity
  Logical volume "region_check.512.00m" created.
[root@host-082 ~]# lvs -o lv_name,regionsize
  LV                   Region 
  region_check.128.00m 128.00m
  region_check.16.00m   16.00m
  region_check.256.00m 256.00m
  region_check.32.00m   32.00m
  region_check.4.00m     4.00m
  region_check.512.00m 512.00m
  region_check.64.00m   64.00m

  

Version-Release number of selected component (if applicable):

### Technically this was tested on 6.9, but I have to assume this won't be fixed in rhel6, yet has a chance of being fixed in rhel7.

2.6.32-674.el6.x86_64

lvm2-2.02.143-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
lvm2-libs-2.02.143-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
lvm2-cluster-2.02.143-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
udev-147-2.73.el6_8.2    BUILT: Tue Aug 30 08:17:19 CDT 2016
device-mapper-1.02.117-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
device-mapper-libs-1.02.117-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
device-mapper-event-1.02.117-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016

Comment 2 Heinz Mauelshagen 2017-03-08 13:42:48 UTC
The behaviour is intentional. adjusted_mirror_region_size() enforces the LV size to be a multiple of region_size.
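
For illustration, here is the arithmetic behind that constraint using the raid4 numbers from comment 0 (the 4 MiB extent size is implied by "1.05 GiB (270 extents)"):

270 extents * 4 MiB = 1080 MiB LV size after stripe rounding
1080 MiB / 16 MiB   = 67.5  -> not a whole number, so -R 16m cannot be kept
1080 MiB /  8 MiB   = 135   -> whole number, which matches the "reduced mirror region
                               size of 16384 sectors" (16384 * 512 B = 8 MiB) messages
                               and the 8.00m values in the lvs output above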

Comment 3 Corey Marthaler 2017-05-19 18:14:53 UTC
...but in all of these cases, isn't the LV size *already* a multiple of the region size? Also, if lvconvert can work properly without adjusting the LV size, shouldn't the create be able to do it too? If not, maybe we could provide a message like: "Using reduced mirror region size of X, use 'lvconvert -R' to enforce the supplied regionsize"


[root@host-073 ~]# lvcreate --type raid6_la_6 -R 8192.00k -i 3 -n LV1 -L 4G VG
  Using default stripesize 64.00 KiB.
  Rounding size 4.00 GiB (1024 extents) up to stripe boundary size <4.01 GiB(1026 extents).
  Logical volume "LV1" created.
[root@host-073 ~]# lvcreate --type raid6_la_6 -R 8192.00k -i 3 -n LV2 -L 8G VG
  Using default stripesize 64.00 KiB.
  Rounding size 8.00 GiB (2048 extents) up to stripe boundary size 8.00 GiB(2049 extents).
  Using reduced mirror region size of 4.00 MiB
  Logical volume "LV2" created.
[root@host-073 ~]# lvcreate --type raid6_la_6 -R 8192.00k -i 3 -n LV3 -L 800M VG
  Using default stripesize 64.00 KiB.
  Rounding size 800.00 MiB (200 extents) up to stripe boundary size 804.00 MiB(201 extents).
  Using reduced mirror region size of 4.00 MiB
  Logical volume "LV3" created.
[root@host-073 ~]# lvcreate --type raid6_la_6 -R 8M -i 3 -n LV4 -L 800M VG
  Using default stripesize 64.00 KiB.
  Rounding size 800.00 MiB (200 extents) up to stripe boundary size 804.00 MiB(201 extents).
  Using reduced mirror region size of 4.00 MiB
  Logical volume "LV4" created.

[root@host-073 ~]# lvs -o lvname,segtype,regionsize
  LV   Type       Region
  LV1  raid6_la_6  8.00m
  LV2  raid6_la_6  4.00m
  LV3  raid6_la_6  4.00m
  LV4  raid6_la_6  4.00m

[root@host-073 ~]# lvconvert -R 4096.00k VG/LV1
Do you really want to change the region_size 8.00 MiB of LV VG/LV1 to 4.00 MiB? [y/n]: y
  Changed region size on RAID LV VG/LV1 to 4.00 MiB.
[root@host-073 ~]# lvconvert -R 8192.00k VG/LV2
Do you really want to change the region_size 4.00 MiB of LV VG/LV2 to 8.00 MiB? [y/n]: y
  Changed region size on RAID LV VG/LV2 to 8.00 MiB.
[root@host-073 ~]# lvconvert -R 8192.00k VG/LV3
Do you really want to change the region_size 4.00 MiB of LV VG/LV3 to 8.00 MiB? [y/n]: y
  Changed region size on RAID LV VG/LV3 to 8.00 MiB.
[root@host-073 ~]# lvconvert -R 8192.00k VG/LV4
Do you really want to change the region_size 4.00 MiB of LV VG/LV4 to 8.00 MiB? [y/n]: y
  Changed region size on RAID LV VG/LV4 to 8.00 MiB.

[root@host-073 ~]# lvs -o lvname,segtype,regionsize
  LV   Type       Region
  LV1  raid6_la_6  4.00m
  LV2  raid6_la_6  8.00m
  LV3  raid6_la_6  8.00m
  LV4  raid6_la_6  8.00m

Comment 4 Heinz Mauelshagen 2017-07-07 13:46:25 UTC
It is actually the LV size that is required to be a multiple of region_size.
I have to check whether the "mirror" target, which used to have that constraint, actually still needs it.

Comment 7 Heinz Mauelshagen 2017-09-26 13:57:25 UTC
Tested with 2.02.171(2)-RHEL7 -> WFM

Comment 8 Corey Marthaler 2017-09-26 15:40:27 UTC
Please post your testing results when closing a bug. Also, the latest and current released 7.4 version of lvm was lvm2-2.02.171-8, not lvm2-2.02.171-2.

Our testing still shows the "failing" behavior; i.e., in all of these examples the requested LV size is a multiple of the region size given, yet the region size still gets reduced. If that behavior is expected, please let us know and we'll change the test's expected results.
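
For reference, the failing checks below correspond to roughly this manual sequence (a sketch only, using the raid_sanity VG from the output; the harness itself is internal):

lvcreate --type raid4 -i 3 -n region_check.16.00m -L 1G -R 16.00m raid_sanity
lvs --noheadings -o regionsize raid_sanity/region_check.16.00m
  # reports 8.00m here rather than the requested 16.00m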


SCENARIO (raid4) - [raid_regionsize_create_check]
Create raids using non-default region sizes, then verify they're honored

lvcreate   --type raid4 -i 3 -n region_check.256.00k -L 1G -R 256.00k raid_sanity
lvcreate   --type raid4 -i 3 -n region_check.512.00k -L 1G -R 512.00k raid_sanity
lvcreate   --type raid4 -i 3 -n region_check.1.00m -L 1G -R 1.00m raid_sanity
lvcreate   --type raid4 -i 3 -n region_check.4.00m -L 1G -R 4.00m raid_sanity
lvcreate   --type raid4 -i 3 -n region_check.16.00m -L 1G -R 16.00m raid_sanity
current region size doesn't match size given
8.00m ne 16.00m
This is bug 1404007, remove hack if this is ever fixed.
lvcreate   --type raid4 -i 3 -n region_check.32.00m -L 1G -R 32.00m raid_sanity
current region size doesn't match size given
8.00m ne 32.00m
This is bug 1404007, remove hack if this is ever fixed.
lvcreate   --type raid4 -i 3 -n region_check.64.00m -L 1G -R 64.00m raid_sanity
current region size doesn't match size given
8.00m ne 64.00m
This is bug 1404007, remove hack if this is ever fixed.
lvcreate   --type raid4 -i 3 -n region_check.128.00m -L 1G -R 128.00m raid_sanity
current region size doesn't match size given
8.00m ne 128.00m
This is bug 1404007, remove hack if this is ever fixed.
lvcreate   --type raid4 -i 3 -n region_check.256.00m -L 1G -R 256.00m raid_sanity
current region size doesn't match size given
8.00m ne 256.00m
This is bug 1404007, remove hack if this is ever fixed.
lvcreate   --type raid4 -i 3 -n region_check.512.00m -L 1G -R 512.00m raid_sanity
current region size doesn't match size given
8.00m ne 512.00m
This is bug 1404007, remove hack if this is ever fixed.



SCENARIO (raid5) - [raid_regionsize_create_check]
Create raids using non-default region sizes, then verify they're honored

lvcreate   --type raid5 -i 3 -n region_check.256.00k -L 1G -R 256.00k raid_sanity
lvcreate   --type raid5 -i 3 -n region_check.512.00k -L 1G -R 512.00k raid_sanity
lvcreate   --type raid5 -i 3 -n region_check.1.00m -L 1G -R 1.00m raid_sanity
lvcreate   --type raid5 -i 3 -n region_check.4.00m -L 1G -R 4.00m raid_sanity
lvcreate   --type raid5 -i 3 -n region_check.16.00m -L 1G -R 16.00m raid_sanity
current region size doesn't match size given
8.00m ne 16.00m
This is bug 1404007, remove hack if this is ever fixed.
lvcreate   --type raid5 -i 3 -n region_check.32.00m -L 1G -R 32.00m raid_sanity
current region size doesn't match size given
8.00m ne 32.00m
This is bug 1404007, remove hack if this is ever fixed.
lvcreate   --type raid5 -i 3 -n region_check.64.00m -L 1G -R 64.00m raid_sanity
current region size doesn't match size given
8.00m ne 64.00m
This is bug 1404007, remove hack if this is ever fixed.
lvcreate   --type raid5 -i 3 -n region_check.128.00m -L 1G -R 128.00m raid_sanity
current region size doesn't match size given
8.00m ne 128.00m
This is bug 1404007, remove hack if this is ever fixed.
lvcreate   --type raid5 -i 3 -n region_check.256.00m -L 1G -R 256.00m raid_sanity
current region size doesn't match size given
8.00m ne 256.00m
This is bug 1404007, remove hack if this is ever fixed.
lvcreate   --type raid5 -i 3 -n region_check.512.00m -L 1G -R 512.00m raid_sanity
current region size doesn't match size given
8.00m ne 512.00m
This is bug 1404007, remove hack if this is ever fixed.



SCENARIO (raid6) - [raid_regionsize_create_check]
Create raids using non-default region sizes, then verify they're honored

lvcreate   --type raid6 -i 3 -n region_check.256.00k -L 1G -R 256.00k raid_sanity
lvcreate   --type raid6 -i 3 -n region_check.512.00k -L 1G -R 512.00k raid_sanity
lvcreate   --type raid6 -i 3 -n region_check.1.00m -L 1G -R 1.00m raid_sanity
lvcreate   --type raid6 -i 3 -n region_check.4.00m -L 1G -R 4.00m raid_sanity
lvcreate   --type raid6 -i 3 -n region_check.16.00m -L 1G -R 16.00m raid_sanity
current region size doesn't match size given
8.00m ne 16.00m
This is bug 1404007, remove hack if this is ever fixed.
lvcreate   --type raid6 -i 3 -n region_check.32.00m -L 1G -R 32.00m raid_sanity
current region size doesn't match size given
8.00m ne 32.00m
This is bug 1404007, remove hack if this is ever fixed.
lvcreate   --type raid6 -i 3 -n region_check.64.00m -L 1G -R 64.00m raid_sanity
current region size doesn't match size given
8.00m ne 64.00m
This is bug 1404007, remove hack if this is ever fixed.
lvcreate   --type raid6 -i 3 -n region_check.128.00m -L 1G -R 128.00m raid_sanity
current region size doesn't match size given
8.00m ne 128.00m
This is bug 1404007, remove hack if this is ever fixed.
lvcreate   --type raid6 -i 3 -n region_check.256.00m -L 1G -R 256.00m raid_sanity
current region size doesn't match size given
8.00m ne 256.00m
This is bug 1404007, remove hack if this is ever fixed.
lvcreate   --type raid6 -i 3 -n region_check.512.00m -L 1G -R 512.00m raid_sanity
current region size doesn't match size given
8.00m ne 512.00m
This is bug 1404007, remove hack if this is ever fixed.

Comment 9 Heinz Mauelshagen 2017-09-26 17:36:53 UTC
Corey,

I'm saying it worked in 2.02.171-2. Are you saying you see a regression in 2.02.171-8 then?

Tests:

for r in 4 8 16 32 64 128 256;do lvcreate -y --ty raid5 -L1g --nosync -nr -R${r}M nvm;lvs -olvname,segtype,regionsize nvm;lvremove -y nvm/r;done
  Using default stripesize 64.00 KiB.
  WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
  Logical volume "r" created.
  LV   Type  Region
  r    raid5  4.00m
  Logical volume "r" successfully removed
  Using default stripesize 64.00 KiB.
  WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
  Logical volume "r" created.
  LV   Type  Region
  r    raid5  8.00m
  Logical volume "r" successfully removed
  Using default stripesize 64.00 KiB.
  WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
  Logical volume "r" created.
  LV   Type  Region
  r    raid5 16.00m
  Logical volume "r" successfully removed
  Using default stripesize 64.00 KiB.
  WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
  Logical volume "r" created.
  LV   Type  Region
  r    raid5 32.00m
  Logical volume "r" successfully removed
  Using default stripesize 64.00 KiB.
  WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
  Logical volume "r" created.
  LV   Type  Region
  r    raid5 64.00m
  Logical volume "r" successfully removed
  Using default stripesize 64.00 KiB.
  WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
  Logical volume "r" created.
  LV   Type  Region 
  r    raid5 128.00m
  Logical volume "r" successfully removed
  Using default stripesize 64.00 KiB.
  WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
  Logical volume "r" created.
  LV   Type  Region 
  r    raid5 256.00m
  Logical volume "r" successfully removed

lvcreate --ty raid5 -L1G -R32M --nosync -y -nr nvm;for r in 4 8 16 32 64 128 256;do lvconvert -y -R${r}M nvm/r;lvs -olvname,segtype,regionsize nvm;done
  Using default stripesize 64.00 KiB.
  Logical Volume "r" already exists in volume group "nvm"
  Changed region size on RAID LV nvm/r to 4.00 MiB.
  LV   Type  Region
  r    raid5  4.00m
  Changed region size on RAID LV nvm/r to 8.00 MiB.
  LV   Type  Region
  r    raid5  8.00m
  Changed region size on RAID LV nvm/r to 16.00 MiB.
  LV   Type  Region
  r    raid5 16.00m
  Changed region size on RAID LV nvm/r to 32.00 MiB.
  LV   Type  Region
  r    raid5 32.00m
  Changed region size on RAID LV nvm/r to 64.00 MiB.
  LV   Type  Region
  r    raid5 64.00m
  Changed region size on RAID LV nvm/r to 128.00 MiB.
  LV   Type  Region 
  r    raid5 128.00m
  Changed region size on RAID LV nvm/r to 256.00 MiB.
  LV   Type  Region 
  r    raid5 256.00m

Comment 10 Corey Marthaler 2017-09-27 16:50:16 UTC
There is no difference here in behavior with respect to region size between 171-2 and 171-8.

The only difference that matters between our two scripts is the use of the "-i|--stripes" argument. You'll see this behavior when using an odd-numbered stripe argument. And as mentioned in comment #0 and comment #3, you'll see a message when it happens: "Using reduced mirror region size of ...".

An lvconvert can make the region size work, so again, either 1. the lvcreate should just be able to do this initially, or 2. we should also provide a message telling the user to use lvconvert if this is really the desired behavior.


[root@host-116 ~]# lvcreate   --type raid5 -n region_check.32.00m_A -L 1g --nosync -R 32.00m raid_sanity
  Using default stripesize 64.00 KiB.
  WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
  Logical volume "region_check.32.00m_A" created.

[root@host-116 ~]# lvcreate   --type raid5 -n region_check.32.00m_2 -i 2 -L 1g --nosync -R 32.00m raid_sanity
  Using default stripesize 64.00 KiB.
  WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
  Logical volume "region_check.32.00m_2" created.

[root@host-116 ~]# lvcreate   --type raid5 -n region_check.32.00m_3 -i 3 -L 1g --nosync -R 32.00m raid_sanity
  Using default stripesize 64.00 KiB.
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size <1.01 GiB(258 extents).
  WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
  Using reduced mirror region size of 8.00 MiB
  Logical volume "region_check.32.00m_3" created.

[root@host-116 ~]# lvcreate   --type raid5 -n region_check.32.00m_4 -i 4 -L 1g --nosync -R 32.00m raid_sanity
  Using default stripesize 64.00 KiB.
  WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
  Logical volume "region_check.32.00m_4" created.

[root@host-116 ~]# lvcreate   --type raid5 -n region_check.32.00m_5 -i 5 -L 1g --nosync -R 32.00m raid_sanity
  Using default stripesize 64.00 KiB.
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size <1.02 GiB(260 extents).
  WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
  Using reduced mirror region size of 16.00 MiB
  Logical volume "region_check.32.00m_5" created.

[root@host-116 ~]# lvs -olvname,segtype,regionsize raid_sanity
  LV                    Type  Region
  region_check.32.00m_2 raid5 32.00m
  region_check.32.00m_3 raid5  8.00m
  region_check.32.00m_4 raid5 32.00m
  region_check.32.00m_5 raid5 16.00m
  region_check.32.00m_A raid5 32.00m
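
The odd/even split above is consistent with the multiple-of-region-size rule from comment 2 (again assuming 4 MiB extents):

-i 2, -i 4: no stripe rounding, 256 extents = 1024 MiB, 1024 / 32 = 32    -> 32.00m honored
-i 3:       rounded to 258 extents = 1032 MiB, 1032 / 32 = 32.25,
            1032 / 16 = 64.5, 1032 / 8 = 129                              -> reduced to 8.00m
-i 5:       rounded to 260 extents = 1040 MiB, 1040 / 32 = 32.5,
            1040 / 16 = 65                                                -> reduced to 16.00m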



# Here's an lvconvert ultimately enforcing the desired region sizes:
 
[root@host-116 ~]# lvconvert -R 32.00m raid_sanity/region_check.32.00m_3
Do you really want to change the region_size 8.00 MiB of LV raid_sanity/region_check.32.00m_3 to 32.00 MiB? [y/n]: y
  Changed region size on RAID LV raid_sanity/region_check.32.00m_3 to 32.00 MiB.
[root@host-116 ~]# lvconvert -R 32.00m raid_sanity/region_check.32.00m_5
Do you really want to change the region_size 16.00 MiB of LV raid_sanity/region_check.32.00m_5 to 32.00 MiB? [y/n]: y
  Changed region size on RAID LV raid_sanity/region_check.32.00m_5 to 32.00 MiB.

[root@host-116 ~]# lvs -olvname,segtype,regionsize raid_sanity
  LV                    Type  Region
  region_check.32.00m_2 raid5 32.00m
  region_check.32.00m_3 raid5 32.00m
  region_check.32.00m_4 raid5 32.00m
  region_check.32.00m_5 raid5 32.00m
  region_check.32.00m_A raid5 32.00m

Comment 12 Heinz Mauelshagen 2017-10-09 12:36:12 UTC
(In reply to Corey Marthaler from comment #10)
> There is no difference here in behavior wrt region size between 171-2 and
> 171-8. 
> 
> The only difference that matters between our two scripts is the use of the
> "-i|--stripes" argument. You'll see this behavior when using an odd numbered
> stripe argument. And like mentioned in comment #0 and comment #3, you'll see
> a message when it's happening "Using reduced mirror region size of ...". 
> 

Got it, commit 5f13e33d541f7af77f586ac55edfed336ad8dcc1 posted to remove
"mirror" restrictions from "raid".

Comment 14 Corey Marthaler 2017-11-16 16:25:22 UTC
Fix verified for odd legged striped raid4|5|6 volumes in the latest rpms.

3.10.0-772.el7.x86_64

lvm2-2.02.176-4.el7    BUILT: Wed Nov 15 04:21:19 CST 2017
lvm2-libs-2.02.176-4.el7    BUILT: Wed Nov 15 04:21:19 CST 2017
lvm2-cluster-2.02.176-4.el7    BUILT: Wed Nov 15 04:21:19 CST 2017
lvm2-lockd-2.02.176-4.el7    BUILT: Wed Nov 15 04:21:19 CST 2017
lvm2-python-boom-0.8-4.el7    BUILT: Wed Nov 15 04:23:09 CST 2017
cmirror-2.02.176-4.el7    BUILT: Wed Nov 15 04:21:19 CST 2017
device-mapper-1.02.145-4.el7    BUILT: Wed Nov 15 04:21:19 CST 2017
device-mapper-libs-1.02.145-4.el7    BUILT: Wed Nov 15 04:21:19 CST 2017
device-mapper-event-1.02.145-4.el7    BUILT: Wed Nov 15 04:21:19 CST 2017
device-mapper-event-libs-1.02.145-4.el7    BUILT: Wed Nov 15 04:21:19 CST 2017
device-mapper-persistent-data-0.7.3-2.el7    BUILT: Tue Oct 10 04:00:07 CDT 2017




SCENARIO (raid4) - [raid_regionsize_create_check]
Create raids using non-default region sizes (and both odd and even stripe counts where applicable based on type), then verify they're honored

lvcreate   --type raid4 -i 2 --nosync -n region_check.128.00k -L 1g -R 128.00k raid_sanity
128.00k eq 128.00k
lvcreate   --type raid4 -i 3 --nosync -n region_check.128.00k -L 1g -R 128.00k raid_sanity
128.00k eq 128.00k
lvcreate   --type raid4 -i 2 --nosync -n region_check.256.00k -L 1g -R 256.00k raid_sanity
256.00k eq 256.00k
lvcreate   --type raid4 -i 3 --nosync -n region_check.256.00k -L 1g -R 256.00k raid_sanity
256.00k eq 256.00k
lvcreate   --type raid4 -i 2 --nosync -n region_check.512.00k -L 1g -R 512.00k raid_sanity
512.00k eq 512.00k
lvcreate   --type raid4 -i 3 --nosync -n region_check.512.00k -L 1g -R 512.00k raid_sanity
512.00k eq 512.00k
lvcreate   --type raid4 -i 2 --nosync -n region_check.1.00m -L 1g -R 1.00m raid_sanity
1.00m eq 1.00m
lvcreate   --type raid4 -i 3 --nosync -n region_check.1.00m -L 1g -R 1.00m raid_sanity
1.00m eq 1.00m
lvcreate   --type raid4 -i 2 --nosync -n region_check.4.00m -L 1g -R 4.00m raid_sanity
4.00m eq 4.00m
lvcreate   --type raid4 -i 3 --nosync -n region_check.4.00m -L 1g -R 4.00m raid_sanity
4.00m eq 4.00m
lvcreate   --type raid4 -i 2 --nosync -n region_check.16.00m -L 1g -R 16.00m raid_sanity
16.00m eq 16.00m
lvcreate   --type raid4 -i 3 --nosync -n region_check.16.00m -L 1g -R 16.00m raid_sanity
16.00m eq 16.00m
lvcreate   --type raid4 -i 2 --nosync -n region_check.32.00m -L 1g -R 32.00m raid_sanity
32.00m eq 32.00m
lvcreate   --type raid4 -i 3 --nosync -n region_check.32.00m -L 1g -R 32.00m raid_sanity
32.00m eq 32.00m
lvcreate   --type raid4 -i 2 --nosync -n region_check.64.00m -L 1g -R 64.00m raid_sanity
64.00m eq 64.00m
lvcreate   --type raid4 -i 3 --nosync -n region_check.64.00m -L 1g -R 64.00m raid_sanity
64.00m eq 64.00m
lvcreate   --type raid4 -i 2 --nosync -n region_check.128.00m -L 1g -R 128.00m raid_sanity
128.00m eq 128.00m
lvcreate   --type raid4 -i 3 --nosync -n region_check.128.00m -L 1g -R 128.00m raid_sanity
128.00m eq 128.00m
lvcreate   --type raid4 -i 2 --nosync -n region_check.256.00m -L 1g -R 256.00m raid_sanity
256.00m eq 256.00m
lvcreate   --type raid4 -i 3 --nosync -n region_check.256.00m -L 1g -R 256.00m raid_sanity
256.00m eq 256.00m
lvcreate   --type raid4 -i 2 --nosync -n region_check.512.00m -L 1g -R 512.00m raid_sanity
512.00m eq 512.00m
lvcreate   --type raid4 -i 3 --nosync -n region_check.512.00m -L 1g -R 512.00m raid_sanity
512.00m eq 512.00m




SCENARIO (raid5) - [raid_regionsize_create_check]
Create raids using non-default region sizes (and both odd and even stripe counts where applicable based on type), then verify they're honored

lvcreate   --type raid5 -i 2 --nosync -n region_check.128.00k -L 1g -R 128.00k raid_sanity
128.00k eq 128.00k
lvcreate   --type raid5 -i 3 --nosync -n region_check.128.00k -L 1g -R 128.00k raid_sanity
128.00k eq 128.00k
lvcreate   --type raid5 -i 2 --nosync -n region_check.256.00k -L 1g -R 256.00k raid_sanity
256.00k eq 256.00k
lvcreate   --type raid5 -i 3 --nosync -n region_check.256.00k -L 1g -R 256.00k raid_sanity
256.00k eq 256.00k
lvcreate   --type raid5 -i 2 --nosync -n region_check.512.00k -L 1g -R 512.00k raid_sanity
512.00k eq 512.00k
lvcreate   --type raid5 -i 3 --nosync -n region_check.512.00k -L 1g -R 512.00k raid_sanity
512.00k eq 512.00k
lvcreate   --type raid5 -i 2 --nosync -n region_check.1.00m -L 1g -R 1.00m raid_sanity
1.00m eq 1.00m
lvcreate   --type raid5 -i 3 --nosync -n region_check.1.00m -L 1g -R 1.00m raid_sanity
1.00m eq 1.00m
lvcreate   --type raid5 -i 2 --nosync -n region_check.4.00m -L 1g -R 4.00m raid_sanity
4.00m eq 4.00m
lvcreate   --type raid5 -i 3 --nosync -n region_check.4.00m -L 1g -R 4.00m raid_sanity
4.00m eq 4.00m
lvcreate   --type raid5 -i 2 --nosync -n region_check.16.00m -L 1g -R 16.00m raid_sanity
16.00m eq 16.00m
lvcreate   --type raid5 -i 3 --nosync -n region_check.16.00m -L 1g -R 16.00m raid_sanity
16.00m eq 16.00m
lvcreate   --type raid5 -i 2 --nosync -n region_check.32.00m -L 1g -R 32.00m raid_sanity
32.00m eq 32.00m
lvcreate   --type raid5 -i 3 --nosync -n region_check.32.00m -L 1g -R 32.00m raid_sanity
32.00m eq 32.00m
lvcreate   --type raid5 -i 2 --nosync -n region_check.64.00m -L 1g -R 64.00m raid_sanity
64.00m eq 64.00m
lvcreate   --type raid5 -i 3 --nosync -n region_check.64.00m -L 1g -R 64.00m raid_sanity
64.00m eq 64.00m
lvcreate   --type raid5 -i 2 --nosync -n region_check.128.00m -L 1g -R 128.00m raid_sanity
128.00m eq 128.00m
lvcreate   --type raid5 -i 3 --nosync -n region_check.128.00m -L 1g -R 128.00m raid_sanity
128.00m eq 128.00m
lvcreate   --type raid5 -i 2 --nosync -n region_check.256.00m -L 1g -R 256.00m raid_sanity
256.00m eq 256.00m
lvcreate   --type raid5 -i 3 --nosync -n region_check.256.00m -L 1g -R 256.00m raid_sanity
256.00m eq 256.00m
lvcreate   --type raid5 -i 2 --nosync -n region_check.512.00m -L 1g -R 512.00m raid_sanity
512.00m eq 512.00m
lvcreate   --type raid5 -i 3 --nosync -n region_check.512.00m -L 1g -R 512.00m raid_sanity
512.00m eq 512.00m





SCENARIO (raid6) - [raid_regionsize_create_check]
Create raids using non-default region sizes (and both odd and even stripe counts where applicable based on type), then verify they're honored

lvcreate   --type raid6 -i 3 -n region_check.128.00k -L 1g -R 128.00k raid_sanity
128.00k eq 128.00k
lvcreate   --type raid6 -i 3 -n region_check.128.00k -L 1g -R 128.00k raid_sanity
128.00k eq 128.00k
lvcreate   --type raid6 -i 3 -n region_check.256.00k -L 1g -R 256.00k raid_sanity
256.00k eq 256.00k
lvcreate   --type raid6 -i 3 -n region_check.256.00k -L 1g -R 256.00k raid_sanity
256.00k eq 256.00k
lvcreate   --type raid6 -i 3 -n region_check.512.00k -L 1g -R 512.00k raid_sanity
512.00k eq 512.00k
lvcreate   --type raid6 -i 3 -n region_check.512.00k -L 1g -R 512.00k raid_sanity
512.00k eq 512.00k
lvcreate   --type raid6 -i 3 -n region_check.1.00m -L 1g -R 1.00m raid_sanity
1.00m eq 1.00m
lvcreate   --type raid6 -i 3 -n region_check.1.00m -L 1g -R 1.00m raid_sanity
1.00m eq 1.00m
lvcreate   --type raid6 -i 3 -n region_check.4.00m -L 1g -R 4.00m raid_sanity
4.00m eq 4.00m
lvcreate   --type raid6 -i 3 -n region_check.4.00m -L 1g -R 4.00m raid_sanity
4.00m eq 4.00m
lvcreate   --type raid6 -i 3 -n region_check.16.00m -L 1g -R 16.00m raid_sanity
16.00m eq 16.00m
lvcreate   --type raid6 -i 3 -n region_check.16.00m -L 1g -R 16.00m raid_sanity
16.00m eq 16.00m
lvcreate   --type raid6 -i 3 -n region_check.32.00m -L 1g -R 32.00m raid_sanity
32.00m eq 32.00m
lvcreate   --type raid6 -i 3 -n region_check.32.00m -L 1g -R 32.00m raid_sanity
32.00m eq 32.00m
lvcreate   --type raid6 -i 3 -n region_check.64.00m -L 1g -R 64.00m raid_sanity
64.00m eq 64.00m
lvcreate   --type raid6 -i 3 -n region_check.64.00m -L 1g -R 64.00m raid_sanity
64.00m eq 64.00m
lvcreate   --type raid6 -i 3 -n region_check.128.00m -L 1g -R 128.00m raid_sanity
128.00m eq 128.00m
lvcreate   --type raid6 -i 3 -n region_check.128.00m -L 1g -R 128.00m raid_sanity
128.00m eq 128.00m
lvcreate   --type raid6 -i 3 -n region_check.256.00m -L 1g -R 256.00m raid_sanity
256.00m eq 256.00m
lvcreate   --type raid6 -i 3 -n region_check.256.00m -L 1g -R 256.00m raid_sanity
256.00m eq 256.00m
lvcreate   --type raid6 -i 3 -n region_check.512.00m -L 1g -R 512.00m raid_sanity
512.00m eq 512.00m
lvcreate   --type raid6 -i 3 -n region_check.512.00m -L 1g -R 512.00m raid_sanity
512.00m eq 512.00m

Comment 17 errata-xmlrpc 2018-04-10 15:18:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0853

