This bug has been migrated to another issue tracking site. It has been closed here and may no longer be monitored.

If you would like to get updates for this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry. The email creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). This same link is also available in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 2235921 - raid extension causes 'Internal error: Unreleased memory pool(s) found' when executed on altered sized PVs
Summary: raid extension causes 'Internal error: Unreleased memory pool(s) found' when executed on altered sized PVs
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: lvm2
Version: 9.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: LVM Team
QA Contact: cluster-qe
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-08-30 04:08 UTC by Corey Marthaler
Modified: 2023-09-23 19:24 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-23 19:24:00 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker   RHEL-8377 0 None Migrated None 2023-09-23 19:23:58 UTC
Red Hat Issue Tracker RHELPLAN-166857 0 None None None 2023-08-30 04:09:52 UTC

Description Corey Marthaler 2023-08-30 04:08:12 UTC
Description of problem:
SCENARIO (raid1) - [extend_raid_to_100_percent_vg_diff_sized_pvs_more_than_needed]
Create a raid on a VG and then extend it using -l/-l+100%FREE with PVs being different sizes and more PVs than needed
Recreating PVs/VG with different sized devices and more than enough needed to make raid type volume
grant-03.6a2m.lab.eng.bos.redhat.com: pvcreate --yes --setphysicalvolumesize 500M /dev/sde1 /dev/sdf1 /dev/sda1
grant-03.6a2m.lab.eng.bos.redhat.com: pvcreate --yes --setphysicalvolumesize 1G /dev/sdg1 /dev/nvme1n1p1 /dev/sdb1
grant-03.6a2m.lab.eng.bos.redhat.com: vgcreate  raid_sanity /dev/sdg1 /dev/sde1 /dev/nvme1n1p1 /dev/sdf1 /dev/sdb1 /dev/sda1
lvcreate --yes  --type raid1 -m 1 -n 100_percent -L 100M raid_sanity


[root@grant-03 ~]# pvscan
  PV /dev/sdg1        VG raid_sanity     lvm2 [1020.00 MiB / 916.00 MiB free]
  PV /dev/sde1        VG raid_sanity     lvm2 [496.00 MiB / 392.00 MiB free]
  PV /dev/nvme1n1p1   VG raid_sanity     lvm2 [1020.00 MiB / 1020.00 MiB free]
  PV /dev/sdf1        VG raid_sanity     lvm2 [496.00 MiB / 496.00 MiB free]
  PV /dev/sdb1        VG raid_sanity     lvm2 [1020.00 MiB / 1020.00 MiB free]
  PV /dev/sda1        VG raid_sanity     lvm2 [496.00 MiB / 496.00 MiB free]

[root@grant-03 ~]# lvs -a -o +devices
  LV                     VG            Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                        
  100_percent            raid_sanity   rwi-a-r---  100.00m                                    100.00           100_percent_rimage_0(0),100_percent_rimage_1(0)
  [100_percent_rimage_0] raid_sanity   iwi-aor---  100.00m                                                     /dev/sdg1(1)                                   
  [100_percent_rimage_1] raid_sanity   iwi-aor---  100.00m                                                     /dev/sde1(1)                                   
  [100_percent_rmeta_0]  raid_sanity   ewi-aor---    4.00m                                                     /dev/sdg1(0)                                   
  [100_percent_rmeta_1]  raid_sanity   ewi-aor---    4.00m                                                     /dev/sde1(0)                                   

[root@grant-03 ~]# lvextend -l+100%FREE raid_sanity/100_percent
  Extending 2 mirror images.
  LV raid_sanity/100_percent_rimage_1 using PV /dev/sdg1 is not redundant.
  Insufficient suitable allocatable extents for logical volume raid_sanity/100_percent
  Internal error: Removing still active LV raid_sanity/100_percent_rmeta_0.
  You have a memory leak (not released memory pool):
   [0x55a71784dd70] allocation
  Internal error: Unreleased memory pool(s) found.
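
A minimal sketch for capturing more detail on the failing allocation path (assuming the same VG layout as above) is to rerun the extend with maximum verbosity and keep the debug output, e.g.:

lvextend -l+100%FREE -vvvv raid_sanity/100_percent 2> /tmp/lvextend-debug.log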



Version-Release number of selected component (if applicable):
kernel-5.14.0-360.el9    BUILT: Wed Aug 23 07:48:06 PM CEST 2023
lvm2-2.03.21-3.el9    BUILT: Thu Jul 13 08:50:26 PM CEST 2023
lvm2-libs-2.03.21-3.el9    BUILT: Thu Jul 13 08:50:26 PM CEST 2023


How reproducible:
Every time
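
For convenience, a consolidated reproducer sketch of the steps above; the device names are the ones from this test host and are assumptions that must be adjusted to local scratch devices:

#!/bin/bash
set -x

SMALL="/dev/sde1 /dev/sdf1 /dev/sda1"       # PVs created at ~500M
LARGE="/dev/sdg1 /dev/nvme1n1p1 /dev/sdb1"  # PVs created at ~1G

# Recreate the mixed-size PVs and the VG
pvcreate --yes --setphysicalvolumesize 500M $SMALL
pvcreate --yes --setphysicalvolumesize 1G $LARGE
vgcreate raid_sanity $SMALL $LARGE

# Small raid1 LV, then extend it to all remaining free space in the VG
lvcreate --yes --type raid1 -m 1 -n 100_percent -L 100M raid_sanity
lvextend -l+100%FREE raid_sanity/100_percent   # triggers the internal error shown above
echo "lvextend exit status: $?"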

Comment 2 Corey Marthaler 2023-09-05 17:49:19 UTC
This similar creation scenario appears related. Creation now fails with "Insufficient suitable allocatable extents" when using '-l100%FREE' with differently sized PVs, which is a regression from RHEL 8.9.

# RHEL8.9

kernel-4.18.0-511.el8    BUILT: Fri Aug 18 17:12:35 CEST 2023
lvm2-2.03.14-11.el8    BUILT: Thu Jul 27 18:17:12 CEST 2023
lvm2-libs-2.03.14-11.el8    BUILT: Thu Jul 27 18:17:12 CEST 2023

[root@grant-01 ~]# pvcreate --yes --setphysicalvolumesize 500M /dev/sdb1 /dev/nvme0n1p1 /dev/sde1
  Physical volume "/dev/sdb1" successfully created.
  Physical volume "/dev/nvme0n1p1" successfully created.
  Physical volume "/dev/sde1" successfully created.
[root@grant-01 ~]# pvcreate --yes --setphysicalvolumesize 1G /dev/sdc1 /dev/nvme1n1p1 /dev/sdg1
  Physical volume "/dev/sdc1" successfully created.
  Physical volume "/dev/nvme1n1p1" successfully created.
  Physical volume "/dev/sdg1" successfully created.
[root@grant-01 ~]# vgcreate  raid_sanity /dev/sdc1 /dev/sdb1 /dev/nvme1n1p1 /dev/nvme0n1p1 /dev/sdg1 /dev/sde1
  WARNING: Devices have inconsistent physical block sizes (4096 and 512).
  WARNING: Devices have inconsistent physical block sizes (4096 and 512).
  Volume group "raid_sanity" successfully created
[root@grant-01 ~]# pvscan
  PV /dev/sdc1        VG raid_sanity     lvm2 [1020.00 MiB / 1020.00 MiB free]
  PV /dev/sdb1        VG raid_sanity     lvm2 [496.00 MiB / 496.00 MiB free]
  PV /dev/nvme1n1p1   VG raid_sanity     lvm2 [1020.00 MiB / 1020.00 MiB free]
  PV /dev/nvme0n1p1   VG raid_sanity     lvm2 [496.00 MiB / 496.00 MiB free]
  PV /dev/sdg1        VG raid_sanity     lvm2 [1020.00 MiB / 1020.00 MiB free]
  PV /dev/sde1        VG raid_sanity     lvm2 [496.00 MiB / 496.00 MiB free]
  Total: 6 [4.44 GiB] / in use: 6 [4.44 GiB] / in no VG: 0 [0   ]

[root@grant-01 ~]# lvcreate --yes  --type raid1 -m 1 -n 100_percent -l100%FREE raid_sanity
  Logical volume "100_percent" created.



# RHEL9.3

kernel-5.14.0-360.el9    BUILT: Wed Aug 23 07:48:06 PM CEST 2023
lvm2-2.03.21-3.el9    BUILT: Thu Jul 13 08:50:26 PM CEST 2023
lvm2-libs-2.03.21-3.el9    BUILT: Thu Jul 13 08:50:26 PM CEST 2023

[root@grant-03 ~]# pvcreate --yes --setphysicalvolumesize 500M /dev/sdb1 /dev/nvme0n1p1 /dev/sde1
  Physical volume "/dev/sdb1" successfully created.
  Physical volume "/dev/nvme0n1p1" successfully created.
  Physical volume "/dev/sde1" successfully created.
[root@grant-03 ~]# pvcreate --yes --setphysicalvolumesize 1G /dev/sdc1 /dev/nvme1n1p1 /dev/sdg1
  Physical volume "/dev/sdc1" successfully created.
  Physical volume "/dev/nvme1n1p1" successfully created.
  Physical volume "/dev/sdg1" successfully created.
[root@grant-03 ~]# vgcreate  raid_sanity /dev/sdc1 /dev/sdb1 /dev/nvme1n1p1 /dev/nvme0n1p1 /dev/sdg1 /dev/sde1
  Volume group "raid_sanity" successfully created
[root@grant-03 ~]# pvscan
  PV /dev/sdd2        VG rhel_grant-03   lvm2 [<446.13 GiB / 0    free]
  PV /dev/sdc1        VG raid_sanity     lvm2 [1020.00 MiB / 1020.00 MiB free]
  PV /dev/sdb1        VG raid_sanity     lvm2 [496.00 MiB / 496.00 MiB free]
  PV /dev/nvme1n1p1   VG raid_sanity     lvm2 [1020.00 MiB / 1020.00 MiB free]
  PV /dev/nvme0n1p1   VG raid_sanity     lvm2 [496.00 MiB / 496.00 MiB free]
  PV /dev/sdg1        VG raid_sanity     lvm2 [1020.00 MiB / 1020.00 MiB free]
  PV /dev/sde1        VG raid_sanity     lvm2 [496.00 MiB / 496.00 MiB free]
  Total: 7 [450.57 GiB] / in use: 7 [450.57 GiB] / in no VG: 0 [0   ]
[root@grant-03 ~]# lvcreate --yes  --type raid1 -m 1 -n 100_percent -l100%FREE raid_sanity
  LV raid_sanity/100_percent_rimage_1 using PV /dev/sdg1 is not redundant.
  Insufficient suitable allocatable extents for logical volume raid_sanity/100_percent
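
As a diagnostic sketch (an assumption, not verified on this build), the same creation could be retried with only the equally sized 1 GiB PVs in the VG, to confirm that the regression is specific to mixing PV sizes:

vgremove -f raid_sanity
vgcreate raid_sanity /dev/sdc1 /dev/nvme1n1p1 /dev/sdg1
lvcreate --yes --type raid1 -m 1 -n 100_percent -l100%FREE raid_sanity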

Comment 3 RHEL Program Management 2023-09-23 19:23:09 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 4 RHEL Program Management 2023-09-23 19:24:00 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated.  Be sure to add yourself to Jira issue's "Watchers" field to continue receiving updates and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues@redhat.com. You can also visit https://access.redhat.com/articles/7032570 for general account information.

