Bug 1059771
| Summary: | Calculating lvm thin pool snapshot space requirements | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Dave Sullivan <dsulliva> |
| Component: | lvm2 | Assignee: | LVM and device-mapper development team <lvm-team> |
| lvm2 sub component: | Default / Unclassified | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED DUPLICATE | Docs Contact: | |
| Severity: | medium | ||
| Priority: | medium | CC: | agk, coughlan, dsulliva, dustymabe, dwysocha, heinzm, jbrassow, jkulesa, mcsontos, milos.vyletel, msnitzer, prajnoha, prockai, rmarti, slevine, thornber, zkabelac |
| Version: | 7.0 | Keywords: | Triaged |
| Target Milestone: | rc | ||
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 948001 | Environment: | |
| Last Closed: | 2014-10-29 23:34:36 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1119839 | ||
| Bug Blocks: | 1044717, 1113520 | ||
Comment 3
Marian Csontos
2014-02-04 16:20:31 UTC
Calculating the amount of space required for a snapshot depends on how much the origin (or the snapshot!) will change. If you plan to overwrite every block after every snapshot, then the amount of space you need is (1 + #snapshots) * size. If nothing is ever written, then very little space is consumed for each snapshot (although there would be no reason for a snapshot then).

We cannot deliver what you are asking for. We can only do our best to educate users on how to proceed, and handle failures as they come with respect to this problem. There are two ways to handle out-of-space conditions: wait for more space to be added, or spit out errors. The user will have to decide which behavior they want when these things happen. (Someone else needs to weigh in on whether both of these options are available when space is exhausted for the thin data LV.)

The behavior described in lvmthin.7 under the following sections is:

* Data space exhaustion (aka thin data LV): writes block until more space is provided.
* Metadata space exhaustion (aka thin metadata LV): errors are returned.

If you want a solution in which errors are returned when data space is exhausted, that will need to be requested.

(In reply to Jonathan Earl Brassow from comment #14)
> Calculating the amount of space required for a snapshot depends on how much
> the origin (or the snapshot!) will change.
<snip>

Disclaimer: this may be oversimplifying things, so please forgive me if I am wrong. What I was saying in comment #11 is that, for a read-only snapshot, the size of the read-only snapshot (its allocation % within the thin pool) does not change. You are right that the pool's usage will grow when files are created or deleted in the origin, but the actual blocks used by the read-only snapshot stay the same (i.e. it is the origin growing, not the snapshot).
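The bookkeeping argument above can be sketched with a small copy-on-write model. This is purely illustrative toy code (the `ToyThinPool` class and its block-versioning scheme are invented for this sketch, not dm-thin's actual implementation), but it shows both the worst-case formula and why a read-only snapshot's footprint stays fixed while the origin grows:

```python
def worst_case_pool_size(origin_size, num_snapshots):
    """Worst case from the comment: overwrite every block after
    every snapshot -> (1 + #snapshots) * size."""
    return (1 + num_snapshots) * origin_size

class ToyThinPool:
    """Toy CoW accounting for one origin plus read-only snapshots."""
    def __init__(self, origin_blocks):
        # The origin starts fully written; each logical block maps
        # to a physical block id.
        self._next_phys = origin_blocks
        self.origin = {b: b for b in range(origin_blocks)}
        self.snapshots = []

    def snapshot(self):
        # A snapshot just freezes the current mapping; it allocates
        # no new physical blocks by itself.
        self.snapshots.append(dict(self.origin))

    def overwrite(self, block):
        # Copy-on-write: if any snapshot still references the current
        # physical block, the origin's write goes to a fresh block.
        if any(s.get(block) == self.origin[block] for s in self.snapshots):
            self.origin[block] = self._next_phys
            self._next_phys += 1

    def blocks_used(self):
        phys = set(self.origin.values())
        for s in self.snapshots:
            phys |= set(s.values())
        return len(phys)

pool = ToyThinPool(origin_blocks=4)
pool.snapshot()                       # read-only snapshot: shares all 4 blocks
before = pool.blocks_used()           # 4 - nothing copied yet
for b in range(4):
    pool.overwrite(b)                 # origin diverges block by block
after = pool.blocks_used()            # 8 - the pool grew, but only the origin grew
snap_blocks = len(pool.snapshots[0])  # the snapshot still references 4 blocks
```

With every block overwritten once after one snapshot, the pool ends up at `worst_case_pool_size(4, 1) == 8` blocks, while the snapshot's own allocation never moved, which is exactly the "it is the origin growing, not the snapshot" point above.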
Assuming that the pool had enough space in it for the origin before you took the snapshot, taking a read-only snapshot and increasing the size of the pool by the size of the snapshot (at the time of creation) should mean the origin still has the same amount of free space to grow into as it did before the snapshot was taken.

I think what Dan was requesting in comment #12 was a way to automatically increase the pool by the size of the origin at the time the snapshot is created. I'm not saying this should or shouldn't be done, as there are quite a few assumptions you have to make: 1) the snapshot will be read-only, 2) there is enough space within the VG to extend the pool, etc. It would also be pretty simple for the user to just extend the pool themselves immediately before taking the snapshot.

> We cannot deliver what you are asking for. We can only do our best to
> educate users on how to proceed and handle failures as they come WRT this
> problem. There are two ways to handle out-of-space conditions: wait for
> more space to be added or spit out errors. The user will have to decide the
> type of behavior they want for when these things happen. (Someone else
> needs to weigh in if both of these options are available when space is
> exhausted for the thin data LV.)

The current behavior is to block, right? The original title of this bug was "lvm thin pool will hang system when full". I think hanging/blocking on a full thin pool is new behavior for LVM in general. In the past (old snapshots), if you filled up a snapshot it was no longer usable and you would get errors. Similarly, if you filled up a normal LV (by using dd) you would also get errors as you tried to write past the end of the device. That is the behavior I would expect when a thin pool fills up. Maybe at least make it configurable so the user can choose what they prefer (block or error). Thoughts?

> The current behavior is to block, right? The original title of this bug was
> "lvm thin pool will hang system when full". I think hanging/blocking for a
> full thin pool is new behavior when it comes to LVM in general. In the past
> (old snapshots), if you filled up a snapshot it was no longer usable and you
> would get errors. Similarly, if you filled up a normal LV (by using dd) you
> would end up with errors too as you were trying to write past the end of the
> device. I guess that is the behavior I would expect when a thin pool fills
> up. Maybe at least make it configurable so the user can choose what they
> prefer (block or error).
>
> Thoughts?

The current behavior is to block. You will never get parity of behavior with the old snapshots, I don't think. Before, you could always write to the (fully-provisioned) origin regardless of whether the snapshots ran out of space, and only those snapshots that ran out of space would be invalidated. If you choose to receive errors with the new snapshots, you will also receive errors from the origin when writing new blocks.

I've added bug 1119839 to address the ability to set thin volumes to error when they run out of space. If there are no other issues to discuss on this bug, perhaps I will close that one as a duplicate of this bug and change the subject of this bug.

I think this is probably good enough.

https://bugzilla.redhat.com/show_bug.cgi?id=1119839

I'm not sure I have a good enough understanding of things technically to articulate what I'm after, but the error behavior of the above BZ sounds like an improvement. Thanks.

Ok, then I am going to close this bug as a duplicate of bug 1119839. We shall proceed with that solution.

*** This bug has been marked as a duplicate of bug 1119839 ***
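The "block or error" choice debated above is essentially a per-volume policy for what a write should do when it lands on a full pool. A toy sketch of the two policies (the `ToyPool` class, mode names, and `extend` helper are invented for illustration; this is not LVM's API, though bug 1119839's resolution made a similar choice configurable in lvm2):

```python
class PoolFullError(IOError):
    """Raised in 'error' mode when a write hits a full pool."""

class ToyPool:
    def __init__(self, free_blocks, when_full="queue"):
        assert when_full in ("queue", "error")
        self.free_blocks = free_blocks
        self.when_full = when_full
        self.queued = []  # writes waiting for the pool to be extended

    def write(self, data):
        if self.free_blocks == 0:
            if self.when_full == "error":
                # 'error' mode: fail immediately, like writing past the
                # end of a fully-provisioned LV.
                raise PoolFullError("no data space left in thin pool")
            # 'queue' mode: the write waits (here, queues) until space
            # is added, which is what callers observe as a hang.
            self.queued.append(data)
            return "queued"
        self.free_blocks -= 1
        return "written"

    def extend(self, blocks):
        # Adding space (think: lvextend on the pool) lets queued
        # writes finally complete.
        self.free_blocks += blocks
        pending, self.queued = self.queued, []
        return [self.write(d) for d in pending]

blocking = ToyPool(free_blocks=0, when_full="queue")
blocking.write(b"x")        # returns "queued": the writer just waits
blocking.extend(1)          # space added, the queued write completes

erroring = ToyPool(free_blocks=0, when_full="error")
try:
    erroring.write(b"x")
except PoolFullError:
    pass                    # in 'error' mode the writer fails fast
```

The design trade-off mirrors the discussion: "queue" preserves data at the cost of stalled I/O (and a hung-looking system), while "error" keeps the system responsive but pushes failure handling up to the filesystem and application.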