Bug 1846036
Summary: | LVMVDO volume does not reclaim disk space, eventually becomes read-only & fsck reports filesystem error | | |
---|---|---|---|
Product: | Red Hat Enterprise Linux 8 | Reporter: | Petr Beranek <pberanek> |
Component: | lvm2 | Assignee: | LVM and device-mapper development team <lvm-team> |
lvm2 sub component: | Other | QA Contact: | cluster-qe <cluster-qe> |
Status: | CLOSED NOTABUG | Docs Contact: | |
Severity: | high | | |
Priority: | unspecified | CC: | agk, heinzm, jbrassow, msnitzer, pasik, prajnoha, zkabelac |
Version: | 8.2 | Flags: | pm-rhel: mirror+ |
Target Milestone: | rc | | |
Target Release: | 8.0 | | |
Hardware: | Unspecified | | |
OS: | Unspecified | | |
Whiteboard: | | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2020-06-16 13:59:33 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Description
Petr Beranek 2020-06-10 16:01:21 UTC
Getting to step 8: since the filesystem is mounted without immediate discard (and this is the usually recommended way), after 'rm' it is the user's responsibility to initiate trimming to release fs blocks, by running fstrim as step 8 (see the fstrim sketch below). But note that fstrim is quite a SLOW operation on VDO volumes compared to Thin volumes. So the report looks like a misuse of VDO volumes, but let me add a few more comments:

The primary goal *IS* to avoid hitting the out-of-space state. Once the user reaches a 'full' pool (this applies to both Thin and VDO), the user has to deal with the consequences. The most usable 'recovery' scenario is to extend the pool to accommodate more user data. Once the pool is out of space, there is no easy way to 'repair' e.g. a filesystem located on such a device, because there are no free blocks left for the filesystem's fsck operation to write into (with VDO the situation is even worse, since even an overwrite of an already owned block may require a few new blocks in the pool).

Unlike filesystems such as btrfs/zfs, the combination of ext4 and a provisioned device involves two separate entities. So to avoid the out-of-space state described above, the user should enable/use autoextension of the VDO pool device when more data is written to it than is currently available (see the lvm.conf example below). The user cannot treat out-of-space on a VDO device as 'similar' to an out-of-space filesystem: these two cases are very different!

When the filesystem 'exhausts' ALL blocks of the provisioned device (be it Thin or VDO), it may not be able to further update even its own metadata. The user has basically reached a 'dead end': unmounting (when still possible) is required, and before fsck, *new space* has to be added to the pool, so that the 'repair' can proceed and acquire new empty blocks in the pool (see the recovery sketch below). Once the filesystem is repaired/fixed, the user can run e.g. 'fstrim' to reclaim free blocks and return them back to the pool. (With a VDO volume, fstrim can be a very lengthy/slow operation for a big VDO pool!)

Thank you, Zdenek, for the clarification. This important detail, that the user is responsible for discarding unused fs blocks, is missing from the current lvmvdo(7) manpage, and it does not seem to be obvious from the product itself. Our users/customers should be explicitly warned about this, or better, we should also provide recommendations on how to deal with it. The risk may be mitigated by proper volume monitoring/autoextension, but in any case we should not expect that all our users/customers always have sufficient VDO expertise. I have therefore opened a bug (https://bugzilla.redhat.com/show_bug.cgi?id=1849009) related to the current lvmvdo documentation.
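To make the trimming point above concrete, here is a minimal shell sketch (not from the original report; the VG/LV names vg/vdolv and the mount point /mnt are hypothetical):

```
# Without '-o discard' among the mount options (the usually
# recommended setup), deleting files frees blocks only inside
# the filesystem; the VDO pool keeps them allocated:
mount /dev/vg/vdolv /mnt
rm /mnt/large-file
lvs vg                # Data% of the VDO pool does not drop

# The user must release the blocks back to the pool explicitly.
# NOTE: on a large VDO pool this is much slower than on a thin pool.
fstrim -v /mnt
lvs vg                # pool usage drops once the trim completes
```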
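A hedged sketch of the autoextension setup mentioned above. The option names below are the VDO pool autoextend settings as found in recent lvm.conf(5); verify them against the lvm2 version in use:

```
# /etc/lvm/lvm.conf (activation section):
activation {
    # When the VDO pool crosses 70% full, extend it by 20% of its
    # current size; a threshold of 100 disables autoextension.
    vdo_pool_autoextend_threshold = 70
    vdo_pool_autoextend_percent = 20
}
```

Autoextension is performed by dmeventd, so the pool must be monitored (e.g. 'lvchange --monitor y vg/vpool') and the VG must have free extents to grow into.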
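And a sketch of the recovery sequence described above, under the same hypothetical naming (vg/vpool is the VDO pool, vg/vdolv the VDO LV carrying an ext4 filesystem), assuming free extents exist in the VG:

```
# 1. Stop using the volume; unmount while it is still possible:
umount /mnt

# 2. Add space to the pool FIRST; fsck needs free pool blocks to
#    write its repairs (with VDO even an overwrite may allocate):
lvextend -L +10G vg/vpool

# 3. Only then repair and remount the filesystem:
fsck -f /dev/vg/vdolv
mount /dev/vg/vdolv /mnt

# 4. Reclaim the blocks held by deleted files; this returns them to
#    the pool and can be lengthy on a big VDO pool:
fstrim -v /mnt
```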