Bug 1818825 - [RFE] Support 4-disk RAID6 in LVM
Summary: [RFE] Support 4-disk RAID6 in LVM
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.3
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-03-30 13:10 UTC by Hubert Kario
Modified: 2021-09-09 11:40 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-09-09 11:40:01 UTC
Type: Bug
Target Upstream Version:
Embargoed:



Description Hubert Kario 2020-03-30 13:10:03 UTC
Description of problem:
When two-disk redundancy is required and future expandability is needed, RAID 6 allows setting up a 3-disk array and expanding it later.

lvm2 does not allow creation of 3-disk RAID6 arrays. While a 3-disk RAID1 array provides the same redundancy level, migrating from 3-disk RAID1 to 4-disk RAID6 requires an intermediate RAID5 step. During that migration the data has only single-disk redundancy, and the migration can take upwards of 20 hours for 16TB HDDs, on drives whose unrecoverable read error rate is on the order of 1 per 10^14 bits read.
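As a back-of-envelope check of that risk (my own arithmetic, not from lvm2 docs): reading a full 16TB drive at a URE rate of 1 per 10^14 bits gives a substantial chance of hitting at least one unrecoverable error during the window in which the array has no remaining redundancy.

```python
# Back-of-envelope: probability of at least one unrecoverable read error (URE)
# while reading a full 16 TB drive, assuming the commonly quoted
# URE rate of 1 per 10^14 bits. Illustrative only.
import math

disk_bytes = 16e12          # 16 TB drive
bits_read = disk_bytes * 8  # bits read during a full-disk pass
ure_rate = 1e-14            # unrecoverable errors per bit read

# P(at least one URE) = 1 - (1 - p)^n, approximated by 1 - exp(-n * p)
p_failure = 1 - math.exp(-bits_read * ure_rate)
print(f"P(>=1 URE during full read) = {p_failure:.0%}")  # roughly 72%
```

With only one disk of redundancy left during the RAID5 step, that is roughly a 70% chance per full-disk pass of hitting an error that cannot be corrected; RAID6's second parity would cover it.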

Version-Release number of selected component (if applicable):


How reproducible:
always

Steps to Reproduce:
truncate -s 1G /tmp/disk1
truncate -s 1G /tmp/disk2
truncate -s 1G /tmp/disk3
DEVS=($(losetup -f --show /tmp/disk1))
DEVS+=($(losetup -f --show /tmp/disk2))
DEVS+=($(losetup -f --show /tmp/disk3))
for file in "${DEVS[@]}"; do pvcreate "$file"; done
vgcreate redundant "${DEVS[@]}"
lvcreate --type raid6 -L 500M --stripes 1 redundant -n data

Actual results:
  Physical volume "/dev/loop1" successfully created.
  Physical volume "/dev/loop2" successfully created.
  Physical volume "/dev/loop3" successfully created.
  Volume group "redundant" successfully created

  Minimum of 3 stripes required for raid6.
  Run `lvcreate --help' for more information.

Expected results:
an LV with RAID 6 using 3 PVs created

Additional info:
mdadm does allow creation of 3-disk RAID6 arrays.

Trying to create a 4-disk RAID6 array fails similarly: lvm2 rejects anything below 3 stripes, and using --stripes 3 with just 4 PVs available results in the following error:

  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB(126 extents).
  Insufficient suitable allocatable extents for logical volume data: 129 more required
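For context (my own arithmetic, assuming lvm2's default 4 MiB extent size): raid6 needs stripes + 2 devices for the P and Q parity, so --stripes 3 needs 5 PVs, and the requested size is rounded up to a multiple of the stripe count, which is where the 504 MiB figure comes from.

```python
# Sketch of why --stripes 3 needs 5 PVs and why 500 MiB rounds up to 504 MiB.
# Assumes lvm2's default 4 MiB extent size; illustrative, not lvm2 source.
import math

extent_mib = 4
stripes = 3
parity = 2                       # raid6 carries P and Q parity per stripe
devices_needed = stripes + parity

size_mib = 500
extents = size_mib // extent_mib                      # 125 extents
rounded = math.ceil(extents / stripes) * stripes      # round up to stripe boundary
print(devices_needed, rounded, rounded * extent_mib)  # 5 126 504
```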

Comment 1 Heinz Mauelshagen 2020-03-30 16:09:24 UTC
mdadm requires at least 4 disks for raid6 (a minimum of 2 data disks to calculate the P/Q syndromes, plus disks 3 and 4 to store them, with data and syndromes typically rotating).

# mdadm -C /dev/md1 -l6 -n 3
mdadm: at least 4 raid-devices needed for level 6

Comment 2 Jonathan Earl Brassow 2020-05-05 16:11:15 UTC
Are customers actually asking for this, or is this just a nice-to-have?

If the former, can you provide some evidence of this?  If the latter, I am likely to close WONTFIX (sorry).

Comment 4 Hubert Kario 2020-05-05 17:30:08 UTC
I don't work in GSS, so I don't know if (many) customers are asking for it.
I looked at it for my own needs (and deployed on Fedora with mdadm because LVM has this limitation).

So it's your choice: if you don't want to make lvm2 a replacement for mdadm in all scenarios, close it or move it to Fedora.

