Bug 1459646
Summary: | Pool space leak: Shrinking snapshotted volumes retains block mappings after deleting all snapshots | ||
---|---|---|---|
Product: | Red Hat Enterprise Linux 7 | Reporter: | bugzilla |
Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
lvm2 sub component: | Thin Provisioning | QA Contact: | cluster-qe <cluster-qe> |
Status: | CLOSED WONTFIX | Docs Contact: | |
Severity: | medium | ||
Priority: | unspecified | CC: | agk, heinzm, jbrassow, jiri.lunacek, msnitzer, nkshirsa, prajnoha, rh-bugzilla, thornber, zkabelac |
Version: | 7.3 | ||
Target Milestone: | rc | ||
Target Release: | --- | ||
Hardware: | All | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2020-11-11 21:56:23 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
bugzilla 2017-06-07 17:10:48 UTC
Note that deleting the origin frees 1 GB (90 GB * (0.1219 - 0.1108) == 0.999 GB), as expected given the nature of the bug:

[root@hvtest1 ~]# lvs data/pool0
  LV    VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pool0 data twi-aotz-- 90.00g             12.19  5.17
[root@hvtest1 ~]# lvremove data/usage_test
Do you really want to remove active logical volume data/usage_test? [y/n]: y
  Logical volume "usage_test" successfully removed
[root@hvtest1 ~]# lvs data/pool0
  LV    VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pool0 data twi-aotz-- 90.00g             11.08  5.12

This is not a new bug - it is basically a yet-to-be-resolved issue where lvm2 needs to figure out some nice way to do this in a configurable manner, without leaks and without holding locks for too long.

For now, whenever a user runs 'lvreduce' on a thin LV, the reduced chunk is NOT trimmed/discarded by lvm2 (so the user may revert the lvreduce step with vgcfgrestore in case of a mistake, as lvm2 normally tries to support one-command-back safe recovery).

So the current 'workaround' for the user is to discard the reduced chunk up front by issuing the 'blkdiscard' command.

The next problem on the lvm2 side is that this cannot be done atomically, so we basically need to put in an internal mechanism to 'queue' some work into the lvm2 metadata - this is planned, as we need it for other pieces as well.

So the issue is known - but since not many users ever reduce device size, it's low priority ATM, unless there is an important case in mind where the manual workaround is not good enough.

Interesting. You might add an option for lvresize like `--discard-reduction` that would blkdiscard the tail. This could work on traditional LVs as well; there is no reason for it not to: indeed, some users might wish to discard their shrunken LVs for SSDs and such. Since the user is invoking it, they can be made aware of the potentially long resize/lock times by a warning in the option documentation.
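The pre-reduce workaround described in the comments above can be sketched as a short shell sequence. The VG/LV names and sizes are hypothetical (not taken from the report's actual reproduction), and the commands are only echoed rather than executed, since the real ones operate on block devices:

```shell
#!/bin/sh
# Sketch of the manual workaround: discard the tail of a thin LV *before*
# lvreduce, so the pool mappings are released instead of leaked.
# VG "data", LV "usage_test" and the sizes are illustrative assumptions;
# the commands are echoed, not run.
VG=data
LV=usage_test
NEW_SIZE_GIB=8                                  # target size after the reduce
OFFSET=$((NEW_SIZE_GIB * 1024 * 1024 * 1024))   # first byte past the new end

# 1. Discard everything past the new end, returning chunks to the pool:
echo "blkdiscard --offset $OFFSET /dev/$VG/$LV"
# 2. Only then shrink the LV; pool Data% now drops as expected:
echo "lvreduce -L ${NEW_SIZE_GIB}G $VG/$LV"
```

The order matters: once lvreduce has run, the tail is no longer addressable through the LV, so blkdiscard can no longer reach those chunks without growing the LV again first.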
For the normal case, i.e. a linear LV, reduced/released extents can be immediately discarded using the lvm.conf issue_discard=1 option. Using an option with lvresize/lvremove would probably be another way of telling the system the user is 100% sure he will not want the TRIMed data back. So probably worth some thinking.

issue_discard has *NO* effect on removal or reduction of thin LVs. ATM, setting 'issue_discard' only applies when a real physical extent is released back to the VG - i.e. lvremove of a plain linear LV releases X physical extents, which receive a 'discard' while they are returned to the set of free extents in the VG.

When a thin LV is reduced, you would need to issue 'blkdiscard --offset' at the offset that is being reduced, before the actual lvreduce happens. This currently has to be done by the user - otherwise the data stored in the reduced area is still held by the LV. When the user extends the LV back to its original size, all the data would basically still be there. (So you can actually discard the data anytime later - just lvextend, blkdiscard, lvreduce.)

I can confirm that resizing the LV up, issuing blkdiscard --offset ..., and resizing back down fixes the issue.

Red Hat Enterprise Linux 7 shipped its final minor release on September 29th, 2020. 7.9 was the last minor release scheduled for RHEL 7. From initial triage it does not appear the remaining Bugzillas meet the inclusion criteria for Maintenance Phase 2, and they will now be closed. From the RHEL life cycle page: https://access.redhat.com/support/policy/updates/errata#Maintenance_Support_2_Phase

"During Maintenance Support 2 Phase for Red Hat Enterprise Linux version 7, Red Hat defined Critical and Important impact Security Advisories (RHSAs) and selected (at Red Hat discretion) Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available."
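The lvextend / blkdiscard / lvreduce round trip confirmed above can be sketched the same way. Again, the names and sizes are hypothetical and the commands are echoed rather than executed:

```shell
#!/bin/sh
# Sketch of the after-the-fact recovery: the thin LV was already reduced
# (here assumed: 10 GiB -> 8 GiB) without a discard, so the pool still
# holds the stale tail mappings. VG/LV names and sizes are illustrative;
# commands are echoed, not run.
VG=data
LV=usage_test
OLD_SIZE_GIB=10
NEW_SIZE_GIB=8
OFFSET=$((NEW_SIZE_GIB * 1024 * 1024 * 1024))     # start of the stale region

echo "lvextend -L ${OLD_SIZE_GIB}G $VG/$LV"       # 1. grow back to the old size
echo "blkdiscard --offset $OFFSET /dev/$VG/$LV"   # 2. drop the stale tail
echo "lvreduce -L ${NEW_SIZE_GIB}G $VG/$LV"       # 3. shrink again, now leak-free
```

This works because, as noted above, the tail chunks are never unmapped by lvreduce: growing the LV back simply makes them addressable again so blkdiscard can finally release them to the pool.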
If this BZ was closed in error and meets the above criteria, please re-open it, flag it for 7.9.z, provide suitable business and technical justifications, and follow the process for Accelerated Fixes: https://source.redhat.com/groups/public/pnt-cxno/pnt_customer_experience_and_operations_wiki/support_delivery_accelerated_fix_release_handbook

Feature Requests can be re-opened and moved to RHEL 8 if the desired functionality is not already present in the product.

Please reach out to the applicable Product Experience Engineer[0] if you have any questions or concerns.

[0] https://bugzilla.redhat.com/page.cgi?id=agile_component_mapping.html&product=Red+Hat+Enterprise+Linux+7