Bug 1367177 - inconsistent use of the "Adjusting stripes to the minimum of" for raid types
Summary: inconsistent use of the "Adjusting stripes to the minimum of" for raid types
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-15 19:22 UTC by Corey Marthaler
Modified: 2016-11-04 04:17 UTC
CC: 8 users

Fixed In Version: lvm2-2.02.164-3.el7
Doc Type: No Doc Update
Doc Text:
Intra-release bug, no documentation needed.
Clone Of:
Environment:
Last Closed: 2016-11-04 04:17:44 UTC
Target Upstream Version:




Links
System:       Red Hat Product Errata
ID:           RHBA-2016:1445
Priority:     normal
Status:       SHIPPED_LIVE
Summary:      lvm2 bug fix and enhancement update
Last Updated: 2016-11-03 13:46:41 UTC

Description Corey Marthaler 2016-08-15 19:22:32 UTC
Description of problem:
# RAID 6
[root@host-079 ~]# lvcreate  --type raid6 -i 0 -n raid6A -L 500M raid_sanity
  --stripes may not be zero.
  Run `lvcreate --help' for more information.

Shouldn't this case have the new "Adjusting stripes to the minimum" logic?
[root@host-079 ~]# lvcreate  --type raid6 -i 2 -n raid6A -L 500M raid_sanity
  Using default stripesize 64.00 KiB.
  Number of stripes must be at least 3 for raid6


This case appears to have been adjusted, but it doesn't print the actual "Adjusting stripes to the minimum" message?
[root@host-079 ~]# lvcreate  --type raid6 -i 1 -n raid6A -L 500M raid_sanity
  Using default stripesize 64.00 KiB.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents).
  Logical volume "raid6A" created.

[root@host-079 ~]# lvs -a -o +devices
  LV                VG            Attr       LSize   Cpy%Sync Devices
  raid6A            raid_sanity   rwi-a-r--- 504.00m 100.00   raid6A_rimage_0(0),raid6A_rimage_1(0),raid6A_rimage_2(0),raid6A_rimage_3(0),raid6A_rimage_4(0)
  [raid6A_rimage_0] raid_sanity   iwi-aor--- 168.00m          /dev/sde2(1)
  [raid6A_rimage_1] raid_sanity   iwi-aor--- 168.00m          /dev/sde1(1)
  [raid6A_rimage_2] raid_sanity   iwi-aor--- 168.00m          /dev/sdf2(1)
  [raid6A_rimage_3] raid_sanity   iwi-aor--- 168.00m          /dev/sdf1(1)
  [raid6A_rimage_4] raid_sanity   iwi-aor--- 168.00m          /dev/sdg2(1)
  [raid6A_rmeta_0]  raid_sanity   ewi-aor---   4.00m          /dev/sde2(0)
  [raid6A_rmeta_1]  raid_sanity   ewi-aor---   4.00m          /dev/sde1(0)
  [raid6A_rmeta_2]  raid_sanity   ewi-aor---   4.00m          /dev/sdf2(0)
  [raid6A_rmeta_3]  raid_sanity   ewi-aor---   4.00m          /dev/sdf1(0)
  [raid6A_rmeta_4]  raid_sanity   ewi-aor---   4.00m          /dev/sdg2(0)
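
For reference, the sizes above look consistent with the stripe count having been silently bumped from 1 to the raid6 minimum of 3 data stripes (plus 2 parity images). A quick shell check of the arithmetic, assuming the 4 MiB extent size implied by the rounding message:

  # 125 extents (500 MiB) rounded up to the next multiple of 3 data stripes
  echo $(( (125 + 2) / 3 * 3 ))   # -> 126 extents
  echo $(( 126 * 4 ))             # -> 504 MiB total LV size
  echo $(( 126 * 4 / 3 ))         # -> 168 MiB per rimage, matching the lvs output above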




# RAID 10
[root@host-079 ~]# lvcreate  --type raid10 -i 0 -n raid10A -L 500M raid_sanity
  --stripes may not be zero.
  Run `lvcreate --help' for more information.

[root@host-079 ~]# lvcreate  --type raid10 -i 1 -n raid10A -L 500M raid_sanity
  Using default stripesize 64.00 KiB.
  Adjusting stripes to the minimum of 2 for raid10.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents).
  Logical volume "raid10A" created.



Version-Release number of selected component (if applicable):
3.10.0-480.el7.x86_64

lvm2-2.02.163-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
lvm2-libs-2.02.163-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
lvm2-cluster-2.02.163-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-1.02.133-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-libs-1.02.133-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-event-1.02.133-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-event-libs-1.02.133-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016
cmirror-2.02.163-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
sanlock-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
sanlock-lib-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
lvm2-lockd-2.02.163-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016

Comment 1 Alasdair Kergon 2016-08-15 23:22:33 UTC
Well, given the level of complexity here, I think the new approach does need to be stricter.

If you don't supply --stripes, then we should automatically use an appropriate default value.

If you do supply --stripes then we should either use the value you supplied or give an error if we cannot.  We should no longer adjust it.
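
A minimal shell sketch of that stricter policy (a hypothetical check_stripes helper for illustration, not the actual lvm2 code), with the per-type minimum and maximum stripe counts taken from the messages in this report:

  check_stripes() {
      local type=$1 requested=$2 min
      case $type in
          raid0|raid0_meta|raid4|raid5|raid10) min=2 ;;
          raid6)                               min=3 ;;
          *)                                   min=1 ;;
      esac
      if [ -z "$requested" ]; then
          echo "$min"            # no --stripes given: use the default for this type
      elif [ "$requested" -lt "$min" ]; then
          echo "Minimum of $min stripes required for $type." >&2
          return 1               # supplied value too low: error, do not adjust
      elif [ "$requested" -gt 64 ]; then
          echo "Only up to 64 stripes in $type supported currently." >&2
          return 1               # supplied value too high: error
      else
          echo "$requested"      # use exactly what the user supplied
      fi
  }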

Comment 4 Corey Marthaler 2016-08-26 23:16:43 UTC
This appears fairly consistent now. The only remaining issue is bug 1370658 (raid10). Marking verified in the latest rpms.


lvm2-2.02.164-3.el7    BUILT: Wed Aug 24 05:20:41 CDT 2016
lvm2-libs-2.02.164-3.el7    BUILT: Wed Aug 24 05:20:41 CDT 2016
lvm2-cluster-2.02.164-3.el7    BUILT: Wed Aug 24 05:20:41 CDT 2016



[root@host-117 ~]# lvcreate -i 0 --type raid0 -L 100M -n raid0 test 
  --stripes may not be zero.
  Run `lvcreate --help' for more information.

[root@host-117 ~]# lvcreate -i 1 --type raid0 -L 100M -n raid0 test 
  Using default stripesize 64.00 KiB.
  Minimum of 2 stripes required for raid0.
  Run `lvcreate --help' for more information.

[root@host-117 ~]# lvcreate -i 0 --type raid0_meta -L 100M -n raid0 test 
  --stripes may not be zero.
  Run `lvcreate --help' for more information.

[root@host-117 ~]# lvcreate -i 1 --type raid0_meta -L 100M -n raid0 test 
  Using default stripesize 64.00 KiB.
  Minimum of 2 stripes required for raid0_meta.
  Run `lvcreate --help' for more information.

[root@host-117 ~]# lvcreate -i 100 --type raid0_meta -L 100M -n raid0 test 
  Using default stripesize 64.00 KiB.
  Only up to 64 stripes in raid0_meta supported currently.
  Run `lvcreate --help' for more information.

[root@host-117 ~]# lvcreate -i 1 --type raid4 -L 100M -n raid4 test 
  Using default stripesize 64.00 KiB.
  Minimum of 2 stripes required for raid4.
  Run `lvcreate --help' for more information.

[root@host-117 ~]# lvcreate -i 1 --type raid5 -L 100M -n raid5 test 
  Using default stripesize 64.00 KiB.
  Minimum of 2 stripes required for raid5.
  Run `lvcreate --help' for more information.

[root@host-117 ~]# lvcreate -i 1 --type raid6 -L 100M -n raid6 test 
  Using default stripesize 64.00 KiB.
  Minimum of 3 stripes required for raid6.
  Run `lvcreate --help' for more information.

[root@host-117 ~]# lvcreate -i 2 --type raid6 -L 100M -n raid6 test 
  Using default stripesize 64.00 KiB.
  Minimum of 3 stripes required for raid6.
  Run `lvcreate --help' for more information.

[root@host-117 ~]# lvcreate -i 1 --type raid10 -L 100M -n raid10 test 
  Using default stripesize 64.00 KiB.
  Minimum of 2 stripes required for raid10.
  Run `lvcreate --help' for more information.

[root@host-117 ~]# lvcreate -i 100 --type raid10 -L 100M -n raid10 test 
  Using default stripesize 64.00 KiB.
  Only up to 64 stripes in raid10 supported currently.
  Run `lvcreate --help' for more information.

Comment 6 errata-xmlrpc 2016-11-04 04:17:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1445.html

