Bug 1664461 - Assertion encountered running lvcreate command
Summary: Assertion encountered running lvcreate command
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1657726 1696575
 
Reported: 2019-01-08 21:31 UTC by John Mulligan
Modified: 2021-09-03 12:48 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-18 16:19:15 UTC
Target Upstream Version:
Embargoed:


Attachments
XZ compressed core file, generated by lvcreate (144.19 KB, application/octet-stream)
2019-04-30 14:44 UTC, John Mulligan

Description John Mulligan 2019-01-08 21:31:05 UTC
Description of problem:

An lvcreate command fails with an assertion rather than a typical error.

/bin/bash -c 'lvcreate --autobackup=n --poolmetadatasize 520192K --chunksize 256K --size 103809024K --thin vg_f80c8a68e60735d509a1b095a685825a/tp_5e9f9965522512c2ede27a61ab0c4f1c --virtualsize 103809024K --name brick_5e9f9965522512c2ede27a61ab0c4f1c'

Process exited with status 134 from signal ABRT
Stdout:
Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
Stderr:
WARNING: This metadata update is NOT backed up.
lvcreate: metadata/pv_map.c:198: consume_pv_area: Assertion `to_go <= pva->count' failed.




Version-Release number of selected component (if applicable):
lvm2-libs-2.02.180-10.el7_6.2.x86_64
lvm2-2.02.180-10.el7_6.2.x86_64


Actual results:
The lvcreate command failed abnormally (the process aborted).


Expected results:
The command should either report the failure condition normally or succeed; it should not abort.

Additional info:

LVM state of the system after the command failed:

[root@vp-ansible-v310-ga8-crs-1 ~]# pvs
  PV         VG                                  Fmt  Attr PSize   PFree
  /dev/sda2  rhel_dhcp46-210                     lvm2 a--  <39.00g       0
  /dev/sdd   vg_f80c8a68e60735d509a1b095a685825a lvm2 a--   99.87g  <99.38g
  /dev/sde   vg_7977025748559045ba194a73d217e2a9 lvm2 a--  199.87g <197.85g
[root@vp-ansible-v310-ga8-crs-1 ~]# vgs
  VG                                  #PV #LV #SN Attr   VSize   VFree
  rhel_dhcp46-210                       1   2   0 wz--n- <39.00g       0
  vg_7977025748559045ba194a73d217e2a9   1   2   0 wz--n- 199.87g <197.85g
  vg_f80c8a68e60735d509a1b095a685825a   1   1   0 wz--n-  99.87g  <99.38g
[root@vp-ansible-v310-ga8-crs-1 ~]# lvs
  LV                                     VG                                  Attr       LSize   Pool                                Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root                                   rhel_dhcp46-210                     -wi-ao---- <35.00g
  swap                                   rhel_dhcp46-210                     -wi-ao----   4.00g
  brick_cb656003dfa34bb4fa678e37a6d1b9ee vg_7977025748559045ba194a73d217e2a9 Vwi-aotz--   2.00g tp_cb656003dfa34bb4fa678e37a6d1b9ee        0.70
  tp_cb656003dfa34bb4fa678e37a6d1b9ee    vg_7977025748559045ba194a73d217e2a9 twi-aotz--   2.00g                                            0.70   10.32
  lvol0                                  vg_f80c8a68e60735d509a1b095a685825a -wi------- 508.00m


The error is very similar to the much older bz #502671, but it may only look similar and not be directly related.
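
For context, a back-of-the-envelope check (not stated in the report, and assuming lvcreate for a thin pool also allocates a hidden _pmspare spare metadata volume of the same size as the metadata LV):

  data       103809024 KiB  ~= 99.00 GiB
  metadata      520192 KiB  ~=  0.50 GiB
  _pmspare      520192 KiB  ~=  0.50 GiB
  total                     ~= 99.99 GiB  >  ~99.38 GiB free on /dev/sdd

This matches the "data + 2 * metadata" sizing rule suggested in comment 9 below: the request does not quite fit into the VG, and lvcreate should have reported that as an ordinary error rather than tripping the assertion.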

Comment 4 Humble Chirammal 2019-01-24 13:54:47 UTC
As OCS QE is not able to reproduce this issue anymore, I am closing this bug.

Comment 5 John Mulligan 2019-04-30 14:39:39 UTC
OCS QE has reproduced the bug. See also: https://bugzilla.redhat.com/show_bug.cgi?id=1696575

Comment 6 John Mulligan 2019-04-30 14:44:01 UTC
Created attachment 1560327 [details]
XZ compressed core file, generated by lvcreate

Comment 9 Zdenek Kabelac 2020-09-14 12:48:10 UTC
The assert is fixed upstream by this commit:

https://www.redhat.com/archives/lvm-devel/2020-September/msg00062.html

To avoid the corruption from happening, it might be possible to avoid allocation of the _pmspare volume with --poolmetadataspare n
(although this will make the pool not as easily recoverable).

The other way is to allocate a size that fits into the VG (data + 2 * metadata).
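
As a sketch of those two suggestions (hypothetical commands: tp_test and brick_test are placeholder names and the sizes are only examples; in this VG the original 99 GiB data request does not fit either way, so both variants shrink it):

# Option 1: skip the hidden _pmspare spare metadata volume, freeing one
# metadata-LV-sized allocation at the cost of easy pool repair;
# 98.5G data + ~0.5G metadata fits the ~99.38 GiB of free space:
lvcreate --poolmetadataspare n --poolmetadatasize 520192K --chunksize 256K \
         --size 98.5G --thin vg_f80c8a68e60735d509a1b095a685825a/tp_test \
         --virtualsize 103809024K --name brick_test

# Option 2: keep the spare and size the pool so that data + 2 * metadata
# fits: 98G + 2 * ~0.5G < ~99.38 GiB free:
lvcreate --poolmetadatasize 520192K --chunksize 256K --size 98G \
         --thin vg_f80c8a68e60735d509a1b095a685825a/tp_test \
         --virtualsize 103809024K --name brick_test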

Comment 10 Zdenek Kabelac 2020-11-18 16:19:15 UTC
There is also a back-porting patch for the stable-2.02 branch:

https://www.redhat.com/archives/lvm-devel/2020-October/msg00061.html

Closing for RHEL 7, as there are not enough users affected by this bug to justify the RHEL 7 bug process at the moment.
Upstream already fixes this issue.

