Description of problem:
lvm2 should better protect users by letting them know when the chunk size in use is not big enough to address the given data LV size within the maximum metadata size.
Possibly also warn via syslog messages.
Creating a thin pool with such a chunk size is also a good candidate for a confirmation prompt.
Example: a 250TB thin pool with a 16GB metadata size and a 256K chunk size.
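The mismatch in that example can be sketched numerically. Assuming roughly 64 bytes of metadata per mapped chunk (an approximation for illustration, not the exact dm-thin on-disk cost), a 250TiB pool with 256KiB chunks needs far more metadata than the ~16GiB maximum:

```python
TiB = 1 << 40
GiB = 1 << 30
KiB = 1 << 10

data_size = 250 * TiB
chunk_size = 256 * KiB
bytes_per_chunk = 64  # rough per-chunk metadata estimate (assumption)

chunks = data_size // chunk_size          # 1,048,576,000 chunks to map
metadata_needed = chunks * bytes_per_chunk
print(metadata_needed / GiB)              # 62.5 GiB, far over the ~16 GiB ceiling
```

So with a 256K chunk size the pool's data space is simply not addressable, which is consistent with lvm2 suggesting a minimum of 1 MiB here.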
Version-Release number of selected component (if applicable):
2.02.169
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
Can I get an attached '-vvvv' trace for the lvcreate command?
While lvm2 does 'warn' about a too-small chunk size (and once fixed should prompt), the following 'dm' ioctl error is definitely not normal and should have been detected earlier by lvm2 code.
Comment 6 Jonathan Earl Brassow 2017-06-07 13:48:42 UTC
As for how to fix, I would suggest rejecting any command that overrides the chunksize with one that causes the condition where the entire data space is not addressable. You can then warn and prompt if the chunksize they've chosen does not allow X amount of growth.
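The suggested policy can be sketched as a small decision function. The 16GiB ceiling and the 64-bytes-per-chunk estimate are assumptions for illustration, as is the `growth_factor` parameter standing in for the "X amount of growth" headroom:

```python
MAX_METADATA = 16 * (1 << 30)   # ~16 GiB dm-thin metadata ceiling
BYTES_PER_CHUNK = 64            # rough per-chunk metadata estimate (assumption)

def check_chunksize(data_size, chunk_size, growth_factor=2):
    """Sketch of the suggested policy: reject, prompt, or accept a chunk size."""
    needed = (data_size // chunk_size) * BYTES_PER_CHUNK
    if needed > MAX_METADATA:
        return "reject"    # entire data space is not addressable: refuse outright
    if needed * growth_factor > MAX_METADATA:
        return "prompt"    # addressable now, but no room for the desired growth
    return "accept"
```

With the values from this report, `check_chunksize(250 * 2**40, 256 * 2**10)` returns `"reject"`, while a 1MiB chunk size lands just inside the addressable range and would only trigger the growth prompt.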
As for the attachment in comment 5 - it looks like lvm2 was 'cheated' about the device sizes?
Was there some 'pvresize --setphysicalvolumesize' used on a PV to create the 250T volume?
From the trace it looks like the underlying PV devices total just 105GiB.
The reported ioctl error seems to suggest the mapped device simply cannot fit on top of the PV (i.e. a problem unrelated to this BZ).
Yes, I did use --setphysicalvolumesize in comment #5. When I try stacking PVs on top of thin LVs, I don't see the problem.
[root@host-083 ~]# lvcreate --thinpool undersized_chunks -L 250T --chunksize 256K --poolmetadatasize 16G snapper_thinp_stack
Using default stripesize 64.00 KiB.
WARNING: Chunk size is smaller then suggested minimum size 1.00 MiB.
Logical volume "undersized_chunks" created.
There is nothing 'back and forth' here.
The logic in 2.02.171-4 was the OLD one that was meant to be 'improved'.
(We have provided this WARNING for a long time already, but it appeared to be mostly ignored (i.e. Sky...).)
The logic in 2.02.171-5 is the new, improved one: we now directly prohibit creation of a pool when lvm2 knows up front that even the 'biggest' metadata size (~16G) cannot address the given amount of 'data' volume for a thin pool with the specified chunk size.
What we still miss as a future enhancement: we need to 'limit' related operations like lvresize as well, and likely add new syslog warnings for existing thin pools too.
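The lvresize side of that enhancement could reuse the same arithmetic in reverse: given an existing pool's chunk size, compute the largest data size the maximum metadata could still map, and warn when a resize would exceed it. A minimal sketch, again assuming a ~16GiB metadata ceiling and ~64 bytes per chunk (both illustrative approximations):

```python
MAX_METADATA = 16 * (1 << 30)   # ~16 GiB dm-thin metadata ceiling (assumption)
BYTES_PER_CHUNK = 64            # rough per-chunk metadata estimate (assumption)

def max_addressable(chunk_size):
    """Largest data size a pool with this chunk size can map at max metadata."""
    return (MAX_METADATA // BYTES_PER_CHUNK) * chunk_size

def lvresize_check(new_data_size, chunk_size):
    """Sketch of the proposed lvresize-time guard."""
    if new_data_size > max_addressable(chunk_size):
        return "warn: chunk size cannot address the requested data size"
    return "ok"
```

Under these assumptions a 256KiB chunk size tops out at 64TiB of data, so extending such a pool toward 250TiB would trip the warning.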
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2017:2222