Bug 2235921

Summary: raid extension causes 'Internal error: Unreleased memory pool(s) found' when executed on altered sized PVs

Product: Red Hat Enterprise Linux 9
Component: lvm2
Sub component: Mirroring and RAID
Version: 9.3
Hardware: x86_64
OS: Linux
Status: CLOSED MIGRATED
Severity: high
Priority: unspecified
Keywords: MigratedToJIRA, Regression
Target Milestone: rc
Flags: pm-rhel: mirror+
Reporter: Corey Marthaler <cmarthal>
Assignee: LVM Team <lvm-team>
QA Contact: cluster-qe <cluster-qe>
CC: agk, heinzm, jbrassow, msnitzer, prajnoha, zkabelac
Type: Bug
Last Closed: 2023-09-23 19:24:00 UTC

Description Corey Marthaler 2023-08-30 04:08:12 UTC
Description of problem:
SCENARIO (raid1) - [extend_raid_to_100_percent_vg_diff_sized_pvs_more_than_needed]
Create a raid on a VG and then extend it using -l/-l+100%FREE, with PVs of different sizes and more PVs than needed
Recreating PVs/VG with different sized devices, and more than enough of them to make the raid type volume
grant-03.6a2m.lab.eng.bos.redhat.com: pvcreate --yes --setphysicalvolumesize 500M /dev/sde1 /dev/sdf1 /dev/sda1
grant-03.6a2m.lab.eng.bos.redhat.com: pvcreate --yes --setphysicalvolumesize 1G /dev/sdg1 /dev/nvme1n1p1 /dev/sdb1
grant-03.6a2m.lab.eng.bos.redhat.com: vgcreate  raid_sanity /dev/sdg1 /dev/sde1 /dev/nvme1n1p1 /dev/sdf1 /dev/sdb1 /dev/sda1
lvcreate --yes  --type raid1 -m 1 -n 100_percent -L 100M raid_sanity


[root@grant-03 ~]# pvscan
  PV /dev/sdg1        VG raid_sanity     lvm2 [1020.00 MiB / 916.00 MiB free]
  PV /dev/sde1        VG raid_sanity     lvm2 [496.00 MiB / 392.00 MiB free]
  PV /dev/nvme1n1p1   VG raid_sanity     lvm2 [1020.00 MiB / 1020.00 MiB free]
  PV /dev/sdf1        VG raid_sanity     lvm2 [496.00 MiB / 496.00 MiB free]
  PV /dev/sdb1        VG raid_sanity     lvm2 [1020.00 MiB / 1020.00 MiB free]
  PV /dev/sda1        VG raid_sanity     lvm2 [496.00 MiB / 496.00 MiB free]

[root@grant-03 ~]# lvs -a -o +devices
  LV                     VG            Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                        
  100_percent            raid_sanity   rwi-a-r---  100.00m                                    100.00           100_percent_rimage_0(0),100_percent_rimage_1(0)
  [100_percent_rimage_0] raid_sanity   iwi-aor---  100.00m                                                     /dev/sdg1(1)                                   
  [100_percent_rimage_1] raid_sanity   iwi-aor---  100.00m                                                     /dev/sde1(1)                                   
  [100_percent_rmeta_0]  raid_sanity   ewi-aor---    4.00m                                                     /dev/sdg1(0)                                   
  [100_percent_rmeta_1]  raid_sanity   ewi-aor---    4.00m                                                     /dev/sde1(0)                                   
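
Whether the extension can succeed depends on how many free extents remain on each PV. A quick way to check this (a standard lvm2 query, shown here for convenience; it is not part of the original transcript):

  # List per-PV size and free space for the raid_sanity VG only.
  pvs -S vg_name=raid_sanity -o pv_name,pv_size,pv_free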

[root@grant-03 ~]# lvextend -l+100%FREE raid_sanity/100_percent
  Extending 2 mirror images.
  LV raid_sanity/100_percent_rimage_1 using PV /dev/sdg1 is not redundant.
  Insufficient suitable allocatable extents for logical volume raid_sanity/100_percent
  Internal error: Removing still active LV raid_sanity/100_percent_rmeta_0.
  You have a memory leak (not released memory pool):
   [0x55a71784dd70] allocation
  Internal error: Unreleased memory pool(s) found.
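
For convenience, the steps above can be collapsed into a minimal reproducer sketch (device names come from the grant-03 test machine and must be replaced with local scratch devices; the set -ex wrapper is added here, not part of the original run):

  #!/bin/bash
  # Reproducer sketch for the lvextend failure above. Device names are
  # placeholders taken from the transcript.
  set -ex
  SMALL="/dev/sde1 /dev/sdf1 /dev/sda1"        # ~500M PVs
  LARGE="/dev/sdg1 /dev/nvme1n1p1 /dev/sdb1"   # ~1G PVs

  pvcreate --yes --setphysicalvolumesize 500M $SMALL
  pvcreate --yes --setphysicalvolumesize 1G $LARGE
  vgcreate raid_sanity $LARGE $SMALL
  lvcreate --yes --type raid1 -m 1 -n 100_percent -L 100M raid_sanity
  # On lvm2-2.03.21-3.el9 this extension fails to allocate and then prints
  # "Internal error: Unreleased memory pool(s) found".
  lvextend -l+100%FREE raid_sanity/100_percent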



Version-Release number of selected component (if applicable):
kernel-5.14.0-360.el9    BUILT: Wed Aug 23 07:48:06 PM CEST 2023
lvm2-2.03.21-3.el9    BUILT: Thu Jul 13 08:50:26 PM CEST 2023
lvm2-libs-2.03.21-3.el9    BUILT: Thu Jul 13 08:50:26 PM CEST 2023


How reproducible:
Every time

Comment 2 Corey Marthaler 2023-09-05 17:49:19 UTC
This similar creation scenario appears related. Creation now fails with "Insufficient suitable allocatable extents" when using '-l100%FREE' with differing sized PVs, which is a regression from RHEL 8.9.

# RHEL8.9

kernel-4.18.0-511.el8    BUILT: Fri Aug 18 17:12:35 CEST 2023
lvm2-2.03.14-11.el8    BUILT: Thu Jul 27 18:17:12 CEST 2023
lvm2-libs-2.03.14-11.el8    BUILT: Thu Jul 27 18:17:12 CEST 2023

[root@grant-01 ~]# pvcreate --yes --setphysicalvolumesize 500M /dev/sdb1 /dev/nvme0n1p1 /dev/sde1
  Physical volume "/dev/sdb1" successfully created.
  Physical volume "/dev/nvme0n1p1" successfully created.
  Physical volume "/dev/sde1" successfully created.
[root@grant-01 ~]# pvcreate --yes --setphysicalvolumesize 1G /dev/sdc1 /dev/nvme1n1p1 /dev/sdg1
  Physical volume "/dev/sdc1" successfully created.
  Physical volume "/dev/nvme1n1p1" successfully created.
  Physical volume "/dev/sdg1" successfully created.
[root@grant-01 ~]# vgcreate  raid_sanity /dev/sdc1 /dev/sdb1 /dev/nvme1n1p1 /dev/nvme0n1p1 /dev/sdg1 /dev/sde1
  WARNING: Devices have inconsistent physical block sizes (4096 and 512).
  WARNING: Devices have inconsistent physical block sizes (4096 and 512).
  Volume group "raid_sanity" successfully created
[root@grant-01 ~]# pvscan
  PV /dev/sdc1        VG raid_sanity     lvm2 [1020.00 MiB / 1020.00 MiB free]
  PV /dev/sdb1        VG raid_sanity     lvm2 [496.00 MiB / 496.00 MiB free]
  PV /dev/nvme1n1p1   VG raid_sanity     lvm2 [1020.00 MiB / 1020.00 MiB free]
  PV /dev/nvme0n1p1   VG raid_sanity     lvm2 [496.00 MiB / 496.00 MiB free]
  PV /dev/sdg1        VG raid_sanity     lvm2 [1020.00 MiB / 1020.00 MiB free]
  PV /dev/sde1        VG raid_sanity     lvm2 [496.00 MiB / 496.00 MiB free]
  Total: 6 [4.44 GiB] / in use: 6 [4.44 GiB] / in no VG: 0 [0   ]

[root@grant-01 ~]# lvcreate --yes  --type raid1 -m 1 -n 100_percent -l100%FREE raid_sanity
  Logical volume "100_percent" created.



# RHEL9.3

kernel-5.14.0-360.el9    BUILT: Wed Aug 23 07:48:06 PM CEST 2023
lvm2-2.03.21-3.el9    BUILT: Thu Jul 13 08:50:26 PM CEST 2023
lvm2-libs-2.03.21-3.el9    BUILT: Thu Jul 13 08:50:26 PM CEST 2023

[root@grant-03 ~]# pvcreate --yes --setphysicalvolumesize 500M /dev/sdb1 /dev/nvme0n1p1 /dev/sde1
  Physical volume "/dev/sdb1" successfully created.
  Physical volume "/dev/nvme0n1p1" successfully created.
  Physical volume "/dev/sde1" successfully created.
[root@grant-03 ~]# pvcreate --yes --setphysicalvolumesize 1G /dev/sdc1 /dev/nvme1n1p1 /dev/sdg1
  Physical volume "/dev/sdc1" successfully created.
  Physical volume "/dev/nvme1n1p1" successfully created.
  Physical volume "/dev/sdg1" successfully created.
[root@grant-03 ~]# vgcreate  raid_sanity /dev/sdc1 /dev/sdb1 /dev/nvme1n1p1 /dev/nvme0n1p1 /dev/sdg1 /dev/sde1
  Volume group "raid_sanity" successfully created
[root@grant-03 ~]# pvscan
  PV /dev/sdd2        VG rhel_grant-03   lvm2 [<446.13 GiB / 0    free]
  PV /dev/sdc1        VG raid_sanity     lvm2 [1020.00 MiB / 1020.00 MiB free]
  PV /dev/sdb1        VG raid_sanity     lvm2 [496.00 MiB / 496.00 MiB free]
  PV /dev/nvme1n1p1   VG raid_sanity     lvm2 [1020.00 MiB / 1020.00 MiB free]
  PV /dev/nvme0n1p1   VG raid_sanity     lvm2 [496.00 MiB / 496.00 MiB free]
  PV /dev/sdg1        VG raid_sanity     lvm2 [1020.00 MiB / 1020.00 MiB free]
  PV /dev/sde1        VG raid_sanity     lvm2 [496.00 MiB / 496.00 MiB free]
  Total: 7 [450.57 GiB] / in use: 7 [450.57 GiB] / in no VG: 0 [0   ]
[root@grant-03 ~]# lvcreate --yes  --type raid1 -m 1 -n 100_percent -l100%FREE raid_sanity
  LV raid_sanity/100_percent_rimage_1 using PV /dev/sdg1 is not redundant.
  Insufficient suitable allocatable extents for logical volume raid_sanity/100_percent
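
If it helps triage, the allocator's decision trail can be captured with lvm's standard verbose switches (a suggestion, not part of the original report; the trace file path is arbitrary):

  # Re-run the failing creation with maximum verbosity; lvm writes the
  # debug trace to stderr.
  lvcreate --yes --type raid1 -m 1 -n 100_percent -l100%FREE raid_sanity -vvvv 2>/tmp/lvcreate-100free.trace
  grep -i alloc /tmp/lvcreate-100free.trace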

Comment 3 RHEL Program Management 2023-09-23 19:23:09 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 4 RHEL Program Management 2023-09-23 19:24:00 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated.  Be sure to add yourself to Jira issue's "Watchers" field to continue receiving updates and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.