Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1907704

Summary: "Insufficient suitable allocatable extents for logical volume" no longer an error when attempting conversion without enough PVs
Product: Red Hat Enterprise Linux 8
Reporter: Corey Marthaler <cmarthal>
Component: lvm2
Assignee: Heinz Mauelshagen <heinzm>
lvm2 sub component: Mirroring and RAID
QA Contact: cluster-qe <cluster-qe>
Status: CLOSED NOTABUG
Docs Contact:
Severity: low
Priority: low
CC: agk, heinzm, jbrassow, msnitzer, prajnoha, zkabelac
Version: 8.4
Flags: pm-rhel: mirror+
Target Milestone: rc
Target Release: 8.0
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-06-14 11:01:56 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Corey Marthaler 2020-12-15 02:36:47 UTC
Description of problem:
Per the verification of bug 1438013, a stripe -> raid1 conversion attempted without enough PVs should fail with "Insufficient suitable allocatable extents for logical volume". If that is no longer the case, and an interim conversion to raid5_n is now attempted instead, we can adjust the tests accordingly.

[root@hayes-03 ~]# lvcreate --yes  --type striped -i 10 -n stripe_takeover -L 300M raid_sanity
  Using default stripesize 64.00 KiB.
  Rounding size 300.00 MiB (75 extents) up to stripe boundary size 320.00 MiB(80 extents).
  Logical volume "stripe_takeover" created.

[root@hayes-03 ~]# lvs -a -o +devices
  LV              VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                                                                                          
  stripe_takeover raid_sanity -wi-a----- 320.00m                                                     /dev/sdb1(0),/dev/sdc1(0),/dev/sdd1(0),/dev/sde1(0),/dev/sdf1(0),/dev/sdg1(0),/dev/sdh1(0),/dev/sdi1(0),/dev/sdj1(0),/dev/sdk1(0)

[root@hayes-03 ~]# lvconvert --yes --type raid1 raid_sanity/stripe_takeover
  Replaced LV type raid1 with possible type raid5_n.
  Repeat this command to convert to raid1 after an interim conversion has finished.
  Logical volume raid_sanity/stripe_takeover successfully converted.

[root@hayes-03 ~]# lvs -a -o +devices
  LV                          VG          Attr       LSize   Cpy%Sync Convert Devices
  stripe_takeover             raid_sanity rwi-a-r--- 320.00m 100.00           stripe_takeover_rimage_0(0),stripe_takeover_rimage_1(0),stripe_takeover_rimage_2(0),stripe_takeover_rimage_3(0),stripe_takeover_rimage_4(0),stripe_takeover_rimage_5(0),stripe_takeover_rimage_6(0),stripe_takeover_rimage_7(0),stripe_takeover_rimage_8(0),stripe_takeover_rimage_9(0),stripe_takeover_rimage_10(0)
  [stripe_takeover_rimage_0]  raid_sanity iwi-aor---  32.00m                  /dev/sdb1(0)
  [stripe_takeover_rimage_1]  raid_sanity iwi-aor---  32.00m                  /dev/sdc1(0)
  [stripe_takeover_rimage_10] raid_sanity iwi-aor---  32.00m                  /dev/sdl1(1)
  [stripe_takeover_rimage_2]  raid_sanity iwi-aor---  32.00m                  /dev/sdd1(0)
  [stripe_takeover_rimage_3]  raid_sanity iwi-aor---  32.00m                  /dev/sde1(0)
  [stripe_takeover_rimage_4]  raid_sanity iwi-aor---  32.00m                  /dev/sdf1(0)
  [stripe_takeover_rimage_5]  raid_sanity iwi-aor---  32.00m                  /dev/sdg1(0)
  [stripe_takeover_rimage_6]  raid_sanity iwi-aor---  32.00m                  /dev/sdh1(0)
  [stripe_takeover_rimage_7]  raid_sanity iwi-aor---  32.00m                  /dev/sdi1(0)
  [stripe_takeover_rimage_8]  raid_sanity iwi-aor---  32.00m                  /dev/sdj1(0)
  [stripe_takeover_rimage_9]  raid_sanity iwi-aor---  32.00m                  /dev/sdk1(0)
  [stripe_takeover_rmeta_0]   raid_sanity ewi-aor---   4.00m                  /dev/sdb1(8)
  [stripe_takeover_rmeta_1]   raid_sanity ewi-aor---   4.00m                  /dev/sdc1(8)
  [stripe_takeover_rmeta_10]  raid_sanity ewi-aor---   4.00m                  /dev/sdl1(0)
  [stripe_takeover_rmeta_2]   raid_sanity ewi-aor---   4.00m                  /dev/sdd1(8)
  [stripe_takeover_rmeta_3]   raid_sanity ewi-aor---   4.00m                  /dev/sde1(8)
  [stripe_takeover_rmeta_4]   raid_sanity ewi-aor---   4.00m                  /dev/sdf1(8)
  [stripe_takeover_rmeta_5]   raid_sanity ewi-aor---   4.00m                  /dev/sdg1(8)
  [stripe_takeover_rmeta_6]   raid_sanity ewi-aor---   4.00m                  /dev/sdh1(8)
  [stripe_takeover_rmeta_7]   raid_sanity ewi-aor---   4.00m                  /dev/sdi1(8)
  [stripe_takeover_rmeta_8]   raid_sanity ewi-aor---   4.00m                  /dev/sdj1(8)
  [stripe_takeover_rmeta_9]   raid_sanity ewi-aor---   4.00m                  /dev/sdk1(8)

[root@hayes-03 ~]# lvconvert --yes --type raid1 raid_sanity/stripe_takeover
  Converting raid5_n LV raid_sanity/stripe_takeover to 2 stripes first.
  WARNING: Removing stripes from active logical volume raid_sanity/stripe_takeover will shrink it from 320.00 MiB to 32.00 MiB!
  THIS MAY DESTROY (PARTS OF) YOUR DATA!
  If that leaves the logical volume larger than 800 extents due to stripe rounding,
  you may want to grow the content afterwards (filesystem etc.)
  WARNING: to remove freed stripes after the conversion has finished, you have to run "lvconvert --stripes 1 raid_sanity/stripe_takeover"
  Can't remove stripes without --force option.
  Reshape request failed on LV raid_sanity/stripe_takeover.
  

Version-Release number of selected component (if applicable):
kernel-4.18.0-240.el8    BUILT: Wed Sep 23 04:46:11 CDT 2020
lvm2-2.03.09-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
lvm2-libs-2.03.09-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020

Comment 3 Heinz Mauelshagen 2022-06-14 11:01:56 UTC
The takeover+reshape sequence striped -> raid5_n (11 stripes, 10 data stripes) -> raid5_n (2 stripes, 1 data stripe) -> raid1
is the mandatory conversion path from striped -> raid1, achieved by repeating "lvconvert -y -f --type raid1 $LV". Closing.
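As a sketch, the repeated-command sequence described above would look like the following, using the raid_sanity VG and stripe_takeover LV from the reproducer in this report (this assumes each interim raid5_n conversion is allowed to finish synchronizing before the next step, and that --force is acceptable for the data-destroying stripe-removal reshape):

```shell
# Step 1 - takeover: striped (10 stripes) -> interim raid5_n
# (11 stripes, 10 data stripes); lvconvert substitutes raid5_n
# for the requested raid1 type here.
lvconvert -y --type raid1 raid_sanity/stripe_takeover

# Wait for the interim raid5_n to reach 100% sync before reshaping.
lvs -a -o name,segtype,sync_percent raid_sanity

# Step 2 - reshape: raid5_n down to 2 stripes (1 data stripe).
# This shrinks the LV and can destroy data beyond the first data
# stripe, which is why --force is required.
lvconvert -y -f --type raid1 raid_sanity/stripe_takeover

# Step 3 - final takeover: raid5_n (2 stripes) -> raid1.
lvconvert -y -f --type raid1 raid_sanity/stripe_takeover
```

Repeating the same lvconvert invocation walks the LV through each mandatory interim state; the tool reports which interim type it chose at each step, as seen in the transcript above.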

Comment 4 Heinz Mauelshagen 2022-06-14 11:02:30 UTC
*takeover+reshape*