Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
Description of problem:
When 2 disk redundancy is required and future expandability is needed, RAID 6 allows for setting up a 3-disk array and expanding it later.
lvm2 does not allow creation of 3-disk RAID6 arrays. While a 3-disk RAID1 array provides the same redundancy level, migrating from 3-disk RAID1 to 4-disk RAID6 requires an intermediate RAID5 step. During that migration the data has only single-disk redundancy, and the migration can take upwards of 20 hours for 16 TB HDDs. That is a significant risk for disks with an unrecoverable read error rate of 1 per 10^14 bits read.
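To put the risk in numbers, here is a rough sketch of the arithmetic behind that claim (my illustration with the figures from the description, not part of the original report): reading one full 16 TB disk at a URE rate of 1 per 10^14 bits yields more than one expected unrecoverable error, so a rebuild or reshape with only single-disk redundancy is quite likely to hit one.

```python
import math

# Illustrative arithmetic, assuming the figures from the description:
# a 16 TB drive and an unrecoverable read error (URE) rate of
# 1 per 1e14 bits read.
disk_bytes = 16e12                 # 16 TB drive
bits_read = disk_bytes * 8         # 1.28e14 bits per full-disk read
ure_rate = 1e-14                   # unrecoverable errors per bit read

expected_ures = bits_read * ure_rate            # expected UREs per full read
p_at_least_one = 1 - math.exp(-expected_ures)   # Poisson approximation

print(f"expected UREs per full-disk read: {expected_ures:.2f}")   # ~1.28
print(f"P(at least one URE):              {p_at_least_one:.1%}")  # ~72%
```

With roughly a 72% chance of hitting a URE per full-disk read, a second level of redundancy during the migration is not a luxury.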
Version-Release number of selected component (if applicable):
How reproducible:
always
Steps to Reproduce:
truncate -s 1G /tmp/disk1
truncate -s 1G /tmp/disk2
truncate -s 1G /tmp/disk3
DEVS=($(losetup -f --show /tmp/disk1))
DEVS+=($(losetup -f --show /tmp/disk2))
DEVS+=($(losetup -f --show /tmp/disk3))
for file in "${DEVS[@]}"; do pvcreate "$file"; done
vgcreate redundant "${DEVS[@]}"
lvcreate --type raid6 -L 500M --stripes 1 redundant -n data
Actual results:
Physical volume "/dev/loop1" successfully created.
Physical volume "/dev/loop2" successfully created.
Physical volume "/dev/loop3" successfully created.
Volume group "redundant" successfully created
Minimum of 3 stripes required for raid6.
Run `lvcreate --help' for more information.
Expected results:
an LV with RAID 6 using 3 PVs created
Additional info:
mdadm does allow creation of 3-disk RAID6 arrays.
Trying to create a 4-disk RAID6 array fails similarly: using --stripes 3 with just 4 PVs available results in the following error:
Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents).
Insufficient suitable allocatable extents for logical volume data: 129 more required
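Both failures follow from the same device-count arithmetic: lvm2 enforces a minimum of 3 data stripes for raid6, and each stripe set carries 2 additional parity (P/Q) devices, so the smallest raid6 LV lvm2 will create needs 5 PVs. A minimal sketch of that constraint (my illustration, not lvm2 source code):

```python
# Illustrative model of lvm2's raid6 device-count rule (not lvm2 source):
# raid6 stores 2 parity (P/Q) blocks per stripe set, so an LV with
# N data stripes spans N + 2 devices; lvm2 additionally enforces N >= 3.
MIN_STRIPES = 3
PARITY_DEVICES = 2

def raid6_pvs_needed(stripes):
    if stripes < MIN_STRIPES:
        raise ValueError("Minimum of 3 stripes required for raid6.")
    return stripes + PARITY_DEVICES

print(raid6_pvs_needed(3))  # 5 -- the smallest raid6 lvm2 will create
```

Under this rule, --stripes 1 is rejected outright, and --stripes 3 over 4 PVs runs out of extents because the LV must span 5 devices. mdadm's floor of 4 devices corresponds to allowing 2 data disks plus the 2 parity devices.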
Comment 1: Heinz Mauelshagen, 2020-03-30 16:09:24 UTC
mdadm requires at least 4 disks for raid6 (a minimum of 2 data disks to calculate the P/Q syndromes, which are stored on disks 3 and 4, with data and syndromes typically rotating).
# mdadm -C /dev/md1 -l6 -n 3
mdadm: at least 4 raid-devices needed for level 6
Comment 2: Jonathan Earl Brassow, 2020-05-05 16:11:15 UTC
Are customers actually asking for this, or is this just a nice-to-have?
If the former, can you provide some evidence of this? If the latter, I am likely to close WONTFIX (sorry).
I don't work in GSS, so I don't know whether (many) customers are asking for it.
I looked at it for my own needs (and deployed on Fedora with mdadm because LVM has this limitation).
So it's your choice: if you don't want to make sure lvm2 is a replacement for mdadm in all scenarios, close it or move it to Fedora.