Bug 1404007
| Summary: | LVM RAID: creating striped RAID types can ignore '-R\|--regionsize' argument when using an odd -i stripe number | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | Heinz Mauelshagen <heinzm> |
| lvm2 sub component: | Mirroring and RAID | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | low | | |
| Priority: | unspecified | CC: | agk, heinzm, jbrassow, mcsontos, msnitzer, prajnoha, prockai, rbednar, zkabelac |
| Version: | 7.3 | Keywords: | Reopened |
| Target Milestone: | rc | | |
| Target Release: | 7.5 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.02.175-2.el7 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-04-10 15:18:32 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1469559 | | |
|
Description
Corey Marthaler
2016-12-12 20:26:29 UTC
The behaviour is intentional. adjusted_mirror_region_size() enforces the LV size to be a multiple of region_size.

...but in all of these cases, isn't the LV size *already* a multiple of the region size? Also, if the lvconvert can work properly without adjusting the LV size, shouldn't the create be able to do it? If not, maybe we could provide a message like: "Using reduced mirror region size of X, use 'lvconvert -R' to enforce the supplied regionsize"

```
[root@host-073 ~]# lvcreate --type raid6_la_6 -R 8192.00k -i 3 -n LV1 -L 4G VG
  Using default stripesize 64.00 KiB.
  Rounding size 4.00 GiB (1024 extents) up to stripe boundary size <4.01 GiB (1026 extents).
  Logical volume "LV1" created.

[root@host-073 ~]# lvcreate --type raid6_la_6 -R 8192.00k -i 3 -n LV2 -L 8G VG
  Using default stripesize 64.00 KiB.
  Rounding size 8.00 GiB (2048 extents) up to stripe boundary size 8.00 GiB (2049 extents).
  Using reduced mirror region size of 4.00 MiB
  Logical volume "LV2" created.

[root@host-073 ~]# lvcreate --type raid6_la_6 -R 8192.00k -i 3 -n LV3 -L 800M VG
  Using default stripesize 64.00 KiB.
  Rounding size 800.00 MiB (200 extents) up to stripe boundary size 804.00 MiB (201 extents).
  Using reduced mirror region size of 4.00 MiB
  Logical volume "LV3" created.

[root@host-073 ~]# lvcreate --type raid6_la_6 -R 8M -i 3 -n LV4 -L 800M VG
  Using default stripesize 64.00 KiB.
  Rounding size 800.00 MiB (200 extents) up to stripe boundary size 804.00 MiB (201 extents).
  Using reduced mirror region size of 4.00 MiB
  Logical volume "LV4" created.

[root@host-073 ~]# lvs -o lvname,segtype,regionsize
  LV   Type        Region
  LV1  raid6_la_6   8.00m
  LV2  raid6_la_6   4.00m
  LV3  raid6_la_6   4.00m
  LV4  raid6_la_6   4.00m

[root@host-073 ~]# lvconvert -R 4096.00k VG/LV1
Do you really want to change the region_size 8.00 MiB of LV VG/LV1 to 4.00 MiB? [y/n]: y
  Changed region size on RAID LV VG/LV1 to 4.00 MiB.

[root@host-073 ~]# lvconvert -R 8192.00k VG/LV2
Do you really want to change the region_size 4.00 MiB of LV VG/LV2 to 8.00 MiB? [y/n]: y
  Changed region size on RAID LV VG/LV2 to 8.00 MiB.

[root@host-073 ~]# lvconvert -R 8192.00k VG/LV3
Do you really want to change the region_size 4.00 MiB of LV VG/LV3 to 8.00 MiB? [y/n]: y
  Changed region size on RAID LV VG/LV3 to 8.00 MiB.

[root@host-073 ~]# lvconvert -R 8192.00k VG/LV4
Do you really want to change the region_size 4.00 MiB of LV VG/LV4 to 8.00 MiB? [y/n]: y
  Changed region size on RAID LV VG/LV4 to 8.00 MiB.

[root@host-073 ~]# lvs -o lvname,segtype,regionsize
  LV   Type        Region
  LV1  raid6_la_6   4.00m
  LV2  raid6_la_6   8.00m
  LV3  raid6_la_6   8.00m
  LV4  raid6_la_6   8.00m
```

It is actually the LV size requested that has to be a multiple of region_size. I have to check if the "mirror" target, which used to have that constraint, actually still needs it.

Tested with 2.02.171(2)-RHEL7 -> WFM

Please post your testing results when closing a bug. Also, the latest and current released 7.4 version of lvm was lvm2-2.02.171-8, not lvm2-2.02.171-2.

Our testing still shows the "failing" behavior, i.e. in all these examples the region size and LV sizes are the same, so definitely a multiple. If that behavior is expected then please let us know and we'll change the test's expected results.

```
SCENARIO (raid4) - [raid_regionsize_create_check]
Create a raids using a non default region sizes, then verify it's honored
lvcreate --type raid4 -i 3 -n region_check.256.00k -L 1G -R 256.00k raid_sanity
lvcreate --type raid4 -i 3 -n region_check.512.00k -L 1G -R 512.00k raid_sanity
lvcreate --type raid4 -i 3 -n region_check.1.00m -L 1G -R 1.00m raid_sanity
lvcreate --type raid4 -i 3 -n region_check.4.00m -L 1G -R 4.00m raid_sanity
lvcreate --type raid4 -i 3 -n region_check.16.00m -L 1G -R 16.00m raid_sanity
  current region size doesn't match size given 8.00m ne 16.00m
  This is bug 1404007, remove hack if this is ever fixed.
lvcreate --type raid4 -i 3 -n region_check.32.00m -L 1G -R 32.00m raid_sanity
  current region size doesn't match size given 8.00m ne 32.00m
  This is bug 1404007, remove hack if this is ever fixed.
lvcreate --type raid4 -i 3 -n region_check.64.00m -L 1G -R 64.00m raid_sanity
  current region size doesn't match size given 8.00m ne 64.00m
  This is bug 1404007, remove hack if this is ever fixed.
lvcreate --type raid4 -i 3 -n region_check.128.00m -L 1G -R 128.00m raid_sanity
  current region size doesn't match size given 8.00m ne 128.00m
  This is bug 1404007, remove hack if this is ever fixed.
lvcreate --type raid4 -i 3 -n region_check.256.00m -L 1G -R 256.00m raid_sanity
  current region size doesn't match size given 8.00m ne 256.00m
  This is bug 1404007, remove hack if this is ever fixed.
lvcreate --type raid4 -i 3 -n region_check.512.00m -L 1G -R 512.00m raid_sanity
  current region size doesn't match size given 8.00m ne 512.00m
  This is bug 1404007, remove hack if this is ever fixed.

SCENARIO (raid5) - [raid_regionsize_create_check]
Create a raids using a non default region sizes, then verify it's honored
lvcreate --type raid5 -i 3 -n region_check.256.00k -L 1G -R 256.00k raid_sanity
lvcreate --type raid5 -i 3 -n region_check.512.00k -L 1G -R 512.00k raid_sanity
lvcreate --type raid5 -i 3 -n region_check.1.00m -L 1G -R 1.00m raid_sanity
lvcreate --type raid5 -i 3 -n region_check.4.00m -L 1G -R 4.00m raid_sanity
lvcreate --type raid5 -i 3 -n region_check.16.00m -L 1G -R 16.00m raid_sanity
  current region size doesn't match size given 8.00m ne 16.00m
  This is bug 1404007, remove hack if this is ever fixed.
lvcreate --type raid5 -i 3 -n region_check.32.00m -L 1G -R 32.00m raid_sanity
  current region size doesn't match size given 8.00m ne 32.00m
  This is bug 1404007, remove hack if this is ever fixed.
lvcreate --type raid5 -i 3 -n region_check.64.00m -L 1G -R 64.00m raid_sanity
  current region size doesn't match size given 8.00m ne 64.00m
  This is bug 1404007, remove hack if this is ever fixed.
lvcreate --type raid5 -i 3 -n region_check.128.00m -L 1G -R 128.00m raid_sanity
  current region size doesn't match size given 8.00m ne 128.00m
  This is bug 1404007, remove hack if this is ever fixed.
lvcreate --type raid5 -i 3 -n region_check.256.00m -L 1G -R 256.00m raid_sanity
  current region size doesn't match size given 8.00m ne 256.00m
  This is bug 1404007, remove hack if this is ever fixed.
lvcreate --type raid5 -i 3 -n region_check.512.00m -L 1G -R 512.00m raid_sanity
  current region size doesn't match size given 8.00m ne 512.00m
  This is bug 1404007, remove hack if this is ever fixed.

SCENARIO (raid6) - [raid_regionsize_create_check]
Create a raids using a non default region sizes, then verify it's honored
lvcreate --type raid6 -i 3 -n region_check.256.00k -L 1G -R 256.00k raid_sanity
lvcreate --type raid6 -i 3 -n region_check.512.00k -L 1G -R 512.00k raid_sanity
lvcreate --type raid6 -i 3 -n region_check.1.00m -L 1G -R 1.00m raid_sanity
lvcreate --type raid6 -i 3 -n region_check.4.00m -L 1G -R 4.00m raid_sanity
lvcreate --type raid6 -i 3 -n region_check.16.00m -L 1G -R 16.00m raid_sanity
  current region size doesn't match size given 8.00m ne 16.00m
  This is bug 1404007, remove hack if this is ever fixed.
lvcreate --type raid6 -i 3 -n region_check.32.00m -L 1G -R 32.00m raid_sanity
  current region size doesn't match size given 8.00m ne 32.00m
  This is bug 1404007, remove hack if this is ever fixed.
lvcreate --type raid6 -i 3 -n region_check.64.00m -L 1G -R 64.00m raid_sanity
  current region size doesn't match size given 8.00m ne 64.00m
  This is bug 1404007, remove hack if this is ever fixed.
lvcreate --type raid6 -i 3 -n region_check.128.00m -L 1G -R 128.00m raid_sanity
  current region size doesn't match size given 8.00m ne 128.00m
  This is bug 1404007, remove hack if this is ever fixed.
lvcreate --type raid6 -i 3 -n region_check.256.00m -L 1G -R 256.00m raid_sanity
  current region size doesn't match size given 8.00m ne 256.00m
  This is bug 1404007, remove hack if this is ever fixed.
lvcreate --type raid6 -i 3 -n region_check.512.00m -L 1G -R 512.00m raid_sanity
  current region size doesn't match size given 8.00m ne 512.00m
  This is bug 1404007, remove hack if this is ever fixed.
```

Corey,
I'm saying it worked in 2.02.171-2. Are you saying you see a regression in 2.02.171-8 then?
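The size-dependent reductions reported in the logs match a simple halve-until-divisible rule: the requested region size is halved until the (rounded-up) LV size is an exact multiple of it. The following is a minimal model inferred from the observed output, not the actual lvm2 adjusted_mirror_region_size() source:

```python
def adjusted_region_size(lv_size_kib: int, region_size_kib: int) -> int:
    """Halve the requested region size until the LV size is an exact
    multiple of it (the legacy "mirror" constraint that lvcreate was
    still applying to striped RAID types)."""
    while lv_size_kib % region_size_kib:
        region_size_kib //= 2
    return region_size_kib

MIB = 1024  # KiB per MiB

# 'lvcreate --type raid5 -i 3 -L 1g -R 32m' rounds 1 GiB up to the
# stripe boundary of 258 extents = 1032 MiB; 1032 is divisible by
# neither 32 nor 16, so the region size collapses to 8 MiB.
print(adjusted_region_size(1032 * MIB, 32 * MIB) // MIB)  # -> 8

# LV1 from comment #0: 1026 extents = 4104 MiB is already a multiple
# of 8 MiB, so the requested -R 8192.00k survives untouched.
print(adjusted_region_size(4104 * MIB, 8 * MIB) // MIB)   # -> 8
```

The same rule reproduces the -i 5 case: 260 extents = 1040 MiB survives one halving (32 MiB -> 16 MiB), matching the "Using reduced mirror region size of 16.00 MiB" message.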
Tests:
for r in 4 8 16 32 64 128 256;do lvcreate -y --ty raid5 -L1g --nosync -nr -R${r}M nvm;lvs -olvname,segtype,regionsize nvm;lvremove -y nvm/r;done
Using default stripesize 64.00 KiB.
WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
Logical volume "r" created.
LV Type Region
r raid5 4.00m
Logical volume "r" successfully removed
Using default stripesize 64.00 KiB.
WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
Logical volume "r" created.
LV Type Region
r raid5 8.00m
Logical volume "r" successfully removed
Using default stripesize 64.00 KiB.
WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
Logical volume "r" created.
LV Type Region
r raid5 16.00m
Logical volume "r" successfully removed
Using default stripesize 64.00 KiB.
WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
Logical volume "r" created.
LV Type Region
r raid5 32.00m
Logical volume "r" successfully removed
Using default stripesize 64.00 KiB.
WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
Logical volume "r" created.
LV Type Region
r raid5 64.00m
Logical volume "r" successfully removed
Using default stripesize 64.00 KiB.
WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
Logical volume "r" created.
LV Type Region
r raid5 128.00m
Logical volume "r" successfully removed
Using default stripesize 64.00 KiB.
WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
Logical volume "r" created.
LV Type Region
r raid5 256.00m
Logical volume "r" successfully removed
lvcreate --ty raid5 -L1G -R32M --nosync -y -nr nvm;for r in 4 8 16 32 64 128 256;do lvconvert -y -R${r}M nvm/r;lvs -olvname,segtype,regionsize nvm;done
Using default stripesize 64.00 KiB.
Logical Volume "r" already exists in volume group "nvm"
Changed region size on RAID LV nvm/r to 4.00 MiB.
LV Type Region
r raid5 4.00m
Changed region size on RAID LV nvm/r to 8.00 MiB.
LV Type Region
r raid5 8.00m
Changed region size on RAID LV nvm/r to 16.00 MiB.
LV Type Region
r raid5 16.00m
Changed region size on RAID LV nvm/r to 32.00 MiB.
LV Type Region
r raid5 32.00m
Changed region size on RAID LV nvm/r to 64.00 MiB.
LV Type Region
r raid5 64.00m
Changed region size on RAID LV nvm/r to 128.00 MiB.
LV Type Region
r raid5 128.00m
Changed region size on RAID LV nvm/r to 256.00 MiB.
LV Type Region
r raid5 256.00m
There is no difference here in behavior wrt region size between 171-2 and 171-8.

The only difference that matters between our two scripts is the use of the "-i|--stripes" argument. You'll see this behavior when using an odd-numbered stripe argument. And as mentioned in comment #0 and comment #3, you'll see a message when it's happening: "Using reduced mirror region size of ...".

An lvconvert can make the region size work, so again, either 1. the lvcreate should just be able to do this initially, or 2. we should also provide a message telling the user to use lvconvert if this is really the desired behavior.

```
[root@host-116 ~]# lvcreate --type raid5 -n region_check.32.00m_A -L 1g --nosync -R 32.00m raid_sanity
  Using default stripesize 64.00 KiB.
  WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
  Logical volume "region_check.32.00m_A" created.

[root@host-116 ~]# lvcreate --type raid5 -n region_check.32.00m_2 -i 2 -L 1g --nosync -R 32.00m raid_sanity
  Using default stripesize 64.00 KiB.
  WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
  Logical volume "region_check.32.00m_2" created.

[root@host-116 ~]# lvcreate --type raid5 -n region_check.32.00m_3 -i 3 -L 1g --nosync -R 32.00m raid_sanity
  Using default stripesize 64.00 KiB.
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size <1.01 GiB (258 extents).
  WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
  Using reduced mirror region size of 8.00 MiB
  Logical volume "region_check.32.00m_3" created.

[root@host-116 ~]# lvcreate --type raid5 -n region_check.32.00m_4 -i 4 -L 1g --nosync -R 32.00m raid_sanity
  Using default stripesize 64.00 KiB.
  WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
  Logical volume "region_check.32.00m_4" created.

[root@host-116 ~]# lvcreate --type raid5 -n region_check.32.00m_5 -i 5 -L 1g --nosync -R 32.00m raid_sanity
  Using default stripesize 64.00 KiB.
  Rounding size 1.00 GiB (256 extents) up to stripe boundary size <1.02 GiB (260 extents).
  WARNING: New raid5 won't be synchronised. Don't read what you didn't write!
  Using reduced mirror region size of 16.00 MiB
  Logical volume "region_check.32.00m_5" created.

[root@host-116 ~]# lvs -olvname,segtype,regionsize raid_sanity
  LV                     Type   Region
  region_check.32.00m_2  raid5  32.00m
  region_check.32.00m_3  raid5   8.00m
  region_check.32.00m_4  raid5  32.00m
  region_check.32.00m_5  raid5  16.00m
  region_check.32.00m_A  raid5  32.00m

# Here's an lvconvert ultimately enforcing the desired region sizes:
[root@host-116 ~]# lvconvert -R 32.00m raid_sanity/region_check.32.00m_3
Do you really want to change the region_size 8.00 MiB of LV raid_sanity/region_check.32.00m_3 to 32.00 MiB? [y/n]: y
  Changed region size on RAID LV raid_sanity/region_check.32.00m_3 to 32.00 MiB.

[root@host-116 ~]# lvconvert -R 32.00m raid_sanity/region_check.32.00m_5
Do you really want to change the region_size 16.00 MiB of LV raid_sanity/region_check.32.00m_5 to 32.00 MiB? [y/n]: y
  Changed region size on RAID LV raid_sanity/region_check.32.00m_5 to 32.00 MiB.

[root@host-116 ~]# lvs -olvname,segtype,regionsize raid_sanity
  LV                     Type   Region
  region_check.32.00m_2  raid5  32.00m
  region_check.32.00m_3  raid5  32.00m
  region_check.32.00m_4  raid5  32.00m
  region_check.32.00m_5  raid5  32.00m
  region_check.32.00m_A  raid5  32.00m
```

(In reply to Corey Marthaler from comment #10)
> There is no difference here in behavior wrt region size between 171-2 and
> 171-8.
>
> The only difference that matters between our two scripts is the use of the
> "-i|--stripes" argument. You'll see this behavior when using an odd numbered
> stripe argument. And like mentioned in comment #0 and comment #3, you'll see
> a message when it's happening "Using reduced mirror region size of ...".

Got it, commit 5f13e33d541f7af77f586ac55edfed336ad8dcc1 posted to remove "mirror" restrictions from "raid".

Fix verified for odd-legged striped raid4|5|6 volumes in the latest rpms.
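Why only odd stripe counts trip this: lvcreate rounds the requested extent count up to a multiple of the stripe count, and with an odd count the rounded size is usually no longer a power-of-two multiple of the requested region size. A sketch of the rounding (a hypothetical helper inferred from the "Rounding size ... up to stripe boundary" messages, not lvm2 source):

```python
def round_up_to_stripe_boundary(extents: int, stripes: int) -> int:
    # lvcreate rounds the requested extent count up to a whole
    # number of extents per stripe device (integer ceiling).
    return (extents + stripes - 1) // stripes * stripes

# -L 1G is 256 extents at the default 4 MiB extent size:
print(round_up_to_stripe_boundary(256, 2))  # -> 256 (even count: already aligned)
print(round_up_to_stripe_boundary(256, 3))  # -> 258 ("<1.01 GiB (258 extents)")
print(round_up_to_stripe_boundary(256, 5))  # -> 260 ("<1.02 GiB (260 extents)")
```

258 extents is 1032 MiB, which no power-of-two region size above 8 MiB divides evenly, hence the reductions seen with -i 3 and -i 5 but not with -i 2 and -i 4.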
```
3.10.0-772.el7.x86_64
lvm2-2.02.176-4.el7                        BUILT: Wed Nov 15 04:21:19 CST 2017
lvm2-libs-2.02.176-4.el7                   BUILT: Wed Nov 15 04:21:19 CST 2017
lvm2-cluster-2.02.176-4.el7                BUILT: Wed Nov 15 04:21:19 CST 2017
lvm2-lockd-2.02.176-4.el7                  BUILT: Wed Nov 15 04:21:19 CST 2017
lvm2-python-boom-0.8-4.el7                 BUILT: Wed Nov 15 04:23:09 CST 2017
cmirror-2.02.176-4.el7                     BUILT: Wed Nov 15 04:21:19 CST 2017
device-mapper-1.02.145-4.el7               BUILT: Wed Nov 15 04:21:19 CST 2017
device-mapper-libs-1.02.145-4.el7          BUILT: Wed Nov 15 04:21:19 CST 2017
device-mapper-event-1.02.145-4.el7         BUILT: Wed Nov 15 04:21:19 CST 2017
device-mapper-event-libs-1.02.145-4.el7    BUILT: Wed Nov 15 04:21:19 CST 2017
device-mapper-persistent-data-0.7.3-2.el7  BUILT: Tue Oct 10 04:00:07 CDT 2017
```

```
SCENARIO (raid4) - [raid_regionsize_create_check]
Create raids using non default region sizes, (and both odd and even stripe
images where applicable based on type) then verify they're honored
lvcreate --type raid4 -i 2 --nosync -n region_check.128.00k -L 1g -R 128.00k raid_sanity
  128.00k eq 128.00k
lvcreate --type raid4 -i 3 --nosync -n region_check.128.00k -L 1g -R 128.00k raid_sanity
  128.00k eq 128.00k
lvcreate --type raid4 -i 2 --nosync -n region_check.256.00k -L 1g -R 256.00k raid_sanity
  256.00k eq 256.00k
lvcreate --type raid4 -i 3 --nosync -n region_check.256.00k -L 1g -R 256.00k raid_sanity
  256.00k eq 256.00k
lvcreate --type raid4 -i 2 --nosync -n region_check.512.00k -L 1g -R 512.00k raid_sanity
  512.00k eq 512.00k
lvcreate --type raid4 -i 3 --nosync -n region_check.512.00k -L 1g -R 512.00k raid_sanity
  512.00k eq 512.00k
lvcreate --type raid4 -i 2 --nosync -n region_check.1.00m -L 1g -R 1.00m raid_sanity
  1.00m eq 1.00m
lvcreate --type raid4 -i 3 --nosync -n region_check.1.00m -L 1g -R 1.00m raid_sanity
  1.00m eq 1.00m
lvcreate --type raid4 -i 2 --nosync -n region_check.4.00m -L 1g -R 4.00m raid_sanity
  4.00m eq 4.00m
lvcreate --type raid4 -i 3 --nosync -n region_check.4.00m -L 1g -R 4.00m raid_sanity
  4.00m eq 4.00m
lvcreate --type raid4 -i 2 --nosync -n region_check.16.00m -L 1g -R 16.00m raid_sanity
  16.00m eq 16.00m
lvcreate --type raid4 -i 3 --nosync -n region_check.16.00m -L 1g -R 16.00m raid_sanity
  16.00m eq 16.00m
lvcreate --type raid4 -i 2 --nosync -n region_check.32.00m -L 1g -R 32.00m raid_sanity
  32.00m eq 32.00m
lvcreate --type raid4 -i 3 --nosync -n region_check.32.00m -L 1g -R 32.00m raid_sanity
  32.00m eq 32.00m
lvcreate --type raid4 -i 2 --nosync -n region_check.64.00m -L 1g -R 64.00m raid_sanity
  64.00m eq 64.00m
lvcreate --type raid4 -i 3 --nosync -n region_check.64.00m -L 1g -R 64.00m raid_sanity
  64.00m eq 64.00m
lvcreate --type raid4 -i 2 --nosync -n region_check.128.00m -L 1g -R 128.00m raid_sanity
  128.00m eq 128.00m
lvcreate --type raid4 -i 3 --nosync -n region_check.128.00m -L 1g -R 128.00m raid_sanity
  128.00m eq 128.00m
lvcreate --type raid4 -i 2 --nosync -n region_check.256.00m -L 1g -R 256.00m raid_sanity
  256.00m eq 256.00m
lvcreate --type raid4 -i 3 --nosync -n region_check.256.00m -L 1g -R 256.00m raid_sanity
  256.00m eq 256.00m
lvcreate --type raid4 -i 2 --nosync -n region_check.512.00m -L 1g -R 512.00m raid_sanity
  512.00m eq 512.00m
lvcreate --type raid4 -i 3 --nosync -n region_check.512.00m -L 1g -R 512.00m raid_sanity
  512.00m eq 512.00m

SCENARIO (raid5) - [raid_regionsize_create_check]
Create raids using non default region sizes, (and both odd and even stripe
images where applicable based on type) then verify they're honored
lvcreate --type raid5 -i 2 --nosync -n region_check.128.00k -L 1g -R 128.00k raid_sanity
  128.00k eq 128.00k
lvcreate --type raid5 -i 3 --nosync -n region_check.128.00k -L 1g -R 128.00k raid_sanity
  128.00k eq 128.00k
lvcreate --type raid5 -i 2 --nosync -n region_check.256.00k -L 1g -R 256.00k raid_sanity
  256.00k eq 256.00k
lvcreate --type raid5 -i 3 --nosync -n region_check.256.00k -L 1g -R 256.00k raid_sanity
  256.00k eq 256.00k
lvcreate --type raid5 -i 2 --nosync -n region_check.512.00k -L 1g -R 512.00k raid_sanity
  512.00k eq 512.00k
lvcreate --type raid5 -i 3 --nosync -n region_check.512.00k -L 1g -R 512.00k raid_sanity
  512.00k eq 512.00k
lvcreate --type raid5 -i 2 --nosync -n region_check.1.00m -L 1g -R 1.00m raid_sanity
  1.00m eq 1.00m
lvcreate --type raid5 -i 3 --nosync -n region_check.1.00m -L 1g -R 1.00m raid_sanity
  1.00m eq 1.00m
lvcreate --type raid5 -i 2 --nosync -n region_check.4.00m -L 1g -R 4.00m raid_sanity
  4.00m eq 4.00m
lvcreate --type raid5 -i 3 --nosync -n region_check.4.00m -L 1g -R 4.00m raid_sanity
  4.00m eq 4.00m
lvcreate --type raid5 -i 2 --nosync -n region_check.16.00m -L 1g -R 16.00m raid_sanity
  16.00m eq 16.00m
lvcreate --type raid5 -i 3 --nosync -n region_check.16.00m -L 1g -R 16.00m raid_sanity
  16.00m eq 16.00m
lvcreate --type raid5 -i 2 --nosync -n region_check.32.00m -L 1g -R 32.00m raid_sanity
  32.00m eq 32.00m
lvcreate --type raid5 -i 3 --nosync -n region_check.32.00m -L 1g -R 32.00m raid_sanity
  32.00m eq 32.00m
lvcreate --type raid5 -i 2 --nosync -n region_check.64.00m -L 1g -R 64.00m raid_sanity
  64.00m eq 64.00m
lvcreate --type raid5 -i 3 --nosync -n region_check.64.00m -L 1g -R 64.00m raid_sanity
  64.00m eq 64.00m
lvcreate --type raid5 -i 2 --nosync -n region_check.128.00m -L 1g -R 128.00m raid_sanity
  128.00m eq 128.00m
lvcreate --type raid5 -i 3 --nosync -n region_check.128.00m -L 1g -R 128.00m raid_sanity
  128.00m eq 128.00m
lvcreate --type raid5 -i 2 --nosync -n region_check.256.00m -L 1g -R 256.00m raid_sanity
  256.00m eq 256.00m
lvcreate --type raid5 -i 3 --nosync -n region_check.256.00m -L 1g -R 256.00m raid_sanity
  256.00m eq 256.00m
lvcreate --type raid5 -i 2 --nosync -n region_check.512.00m -L 1g -R 512.00m raid_sanity
  512.00m eq 512.00m
lvcreate --type raid5 -i 3 --nosync -n region_check.512.00m -L 1g -R 512.00m raid_sanity
  512.00m eq 512.00m

SCENARIO (raid6) - [raid_regionsize_create_check]
Create raids using non default region sizes, (and both odd and even stripe
images where applicable based on type) then verify they're honored
lvcreate --type raid6 -i 3 -n region_check.128.00k -L 1g -R 128.00k raid_sanity
  128.00k eq 128.00k
lvcreate --type raid6 -i 3 -n region_check.128.00k -L 1g -R 128.00k raid_sanity
  128.00k eq 128.00k
lvcreate --type raid6 -i 3 -n region_check.256.00k -L 1g -R 256.00k raid_sanity
  256.00k eq 256.00k
lvcreate --type raid6 -i 3 -n region_check.256.00k -L 1g -R 256.00k raid_sanity
  256.00k eq 256.00k
lvcreate --type raid6 -i 3 -n region_check.512.00k -L 1g -R 512.00k raid_sanity
  512.00k eq 512.00k
lvcreate --type raid6 -i 3 -n region_check.512.00k -L 1g -R 512.00k raid_sanity
  512.00k eq 512.00k
lvcreate --type raid6 -i 3 -n region_check.1.00m -L 1g -R 1.00m raid_sanity
  1.00m eq 1.00m
lvcreate --type raid6 -i 3 -n region_check.1.00m -L 1g -R 1.00m raid_sanity
  1.00m eq 1.00m
lvcreate --type raid6 -i 3 -n region_check.4.00m -L 1g -R 4.00m raid_sanity
  4.00m eq 4.00m
lvcreate --type raid6 -i 3 -n region_check.4.00m -L 1g -R 4.00m raid_sanity
  4.00m eq 4.00m
lvcreate --type raid6 -i 3 -n region_check.16.00m -L 1g -R 16.00m raid_sanity
  16.00m eq 16.00m
lvcreate --type raid6 -i 3 -n region_check.16.00m -L 1g -R 16.00m raid_sanity
  16.00m eq 16.00m
lvcreate --type raid6 -i 3 -n region_check.32.00m -L 1g -R 32.00m raid_sanity
  32.00m eq 32.00m
lvcreate --type raid6 -i 3 -n region_check.32.00m -L 1g -R 32.00m raid_sanity
  32.00m eq 32.00m
lvcreate --type raid6 -i 3 -n region_check.64.00m -L 1g -R 64.00m raid_sanity
  64.00m eq 64.00m
lvcreate --type raid6 -i 3 -n region_check.64.00m -L 1g -R 64.00m raid_sanity
  64.00m eq 64.00m
lvcreate --type raid6 -i 3 -n region_check.128.00m -L 1g -R 128.00m raid_sanity
  128.00m eq 128.00m
lvcreate --type raid6 -i 3 -n region_check.128.00m -L 1g -R 128.00m raid_sanity
  128.00m eq 128.00m
lvcreate --type raid6 -i 3 -n region_check.256.00m -L 1g -R 256.00m raid_sanity
  256.00m eq 256.00m
lvcreate --type raid6 -i 3 -n region_check.256.00m -L 1g -R 256.00m raid_sanity
  256.00m eq 256.00m
lvcreate --type raid6 -i 3 -n region_check.512.00m -L 1g -R 512.00m raid_sanity
  512.00m eq 512.00m
lvcreate --type raid6 -i 3 -n region_check.512.00m -L 1g -R 512.00m raid_sanity
  512.00m eq 512.00m
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0853