Bug 2170689
| Field | Value |
| --- | --- |
| Summary | Extension of preallocated COW disks is broken |
| Product | Red Hat Enterprise Virtualization Manager |
| Reporter | Germano Veit Michel <gveitmic> |
| Component | vdsm |
| Assignee | Albert Esteve <aesteve> |
| Status | CLOSED ERRATA |
| QA Contact | Shir Fishbain <sfishbai> |
| Severity | high |
| Docs Contact | |
| Priority | unspecified |
| Version | 4.5.3 |
| CC | aesteve, ahadas, aperotti, bcholler, emarcus, lsurette, michal.skrivanek, sbonazzo, sfishbai, srevivo, ycui |
| Target Milestone | ovirt-4.5.3-async |
| Target Release | --- |
| Hardware | x86_64 |
| OS | Linux |
| Whiteboard | |
| Fixed In Version | |
| Doc Type | Bug Fix |
| Doc Text | Previously, when requesting an extension on a COW format disk, the logical volume was not resized. As a result, the Virtual Size of the extended preallocated COW disk was greater than the Actual Size. In this release, Actual Size and Virtual Size are equal after the extension of preallocated COW volumes. |
| Story Points | --- |
| Clone Of | |
| | 2210036 (view as bug list) |
| Environment | |
| Last Closed | 2023-03-28 19:48:38 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| CRM | |
| Verified Versions | |
| Category | --- |
| oVirt Team | Storage |
| RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- |
| Target Upstream Version | |
| Embargoed | |
| Bug Depends On | |
| Bug Blocks | 2210036 |
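As context for the Doc Text above, the symptom can be spotted on a host by comparing the image's virtual size with the size of the LV backing it. Below is a minimal sketch of such a check; the VG/LV names are placeholders and this is not a vdsm or RHV tool.

```python
# Hypothetical helper to spot the symptom described in the Doc Text: a
# preallocated COW disk whose backing LV is smaller than the image's virtual
# size. The VG/LV names are placeholders, not values from a real setup.
import json
import subprocess

VG = "<storage-domain-vg-uuid>"
LV = "<volume-lv-uuid>"

def lv_size_bytes(vg: str, lv: str) -> int:
    # Ask LVM for the LV size in plain bytes.
    out = subprocess.run(
        ["lvs", "--noheadings", "--nosuffix", "--units", "b",
         "-o", "lv_size", f"{vg}/{lv}"],
        check=True, capture_output=True, text=True,
    ).stdout
    return int(float(out.strip()))

def virtual_size_bytes(vg: str, lv: str) -> int:
    # Ask qemu-img for the guest-visible (virtual) size of the image.
    out = subprocess.run(
        ["qemu-img", "info", "--output=json", f"/dev/{vg}/{lv}"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)["virtual-size"]

if lv_size_bytes(VG, LV) < virtual_size_bytes(VG, LV):
    print("LV is smaller than the image's virtual size: "
          "the extension did not resize the LV (this bug)")
else:
    print("LV size covers the virtual size")
```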
Description by Germano Veit Michel, 2023-02-16 22:40:17 UTC
Yeah, we could argue it's a feature request to extend disks on block storage for which incremental backup is enabled, but we now enable incremental backup for all disks (from the UI). Albert, isn't it just a matter of changing the assumption that for QCOW volumes we don't need to extend the LVs, so that lvm.extendLV would be called for QCOW+preallocated on block storage?

Yes, please consider fixing this, and not as an RFE. It was actually picked up by the discrepancy tool, but one of the main issues is that it can make VMs with presumably preallocated disks pause, as those disks now need extensions to reach their full size. There is a customer behind this; Bimal, could you please attach your case here?

Yes, if I'm not mistaken this is just a matter of relaxing the condition in the `extendSize` method. I was able to reproduce the issue in a test and posted a PR with the fix. I will verify the fix with my local setup too. That and an OST run should give us some fail-safe in case there is any unexpected side effect.

This bug has low overall severity and is not going to be further verified by QE.

Missing a backport to the 4.5.3.z branch.

> one of the main issues is that it can make VMs with presumably preallocated disks pause, as those disks now need extensions to reach their full size
We discussed this, and it should not be a big issue, as we extend fast and early. If the VM writes beyond the LV size but within its max capacity,
we get a block threshold event from libvirt, and the disk is extended by one chunk. Same as for thin disks.
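For readers unfamiliar with that mechanism, here is a minimal sketch of how a block threshold can be armed and consumed with the libvirt Python bindings. The chunk size, watermark value, and event-loop wiring are illustrative assumptions, not vdsm's actual implementation.

```python
# Illustrative sketch only: arming a libvirt block threshold so the management
# layer is notified before a guest overruns its under-allocated LV.
# CHUNK_SIZE, FREE_WATERMARK and the wiring below are assumptions, not vdsm code.
import libvirt

CHUNK_SIZE = 2560 * 1024**2       # assumed per-extension chunk (2.5 GiB)
FREE_WATERMARK = 512 * 1024**2    # assumed free space left when the event fires

def arm_threshold(dom: libvirt.virDomain, dev: str, allocated_bytes: int) -> None:
    # Fire VIR_DOMAIN_EVENT_ID_BLOCK_THRESHOLD once guest writes pass this offset.
    dom.setBlockThreshold(dev, max(allocated_bytes - FREE_WATERMARK, 0))

def on_threshold(conn, dom, dev, path, threshold, excess, opaque):
    # A real system would extend the LV by one chunk here and re-arm the threshold.
    print(f"{dom.name()}: {dev} crossed {threshold} bytes (excess {excess}), "
          f"extend LV by {CHUNK_SIZE} bytes")

libvirt.virEventRegisterDefaultImpl()          # must be called before opening the connection
conn = libvirt.open("qemu:///system")
conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_BLOCK_THRESHOLD,
                            on_threshold, None)
# A loop calling libvirt.virEventRunDefaultImpl() must run for events to arrive.
```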
Regardless, we reworked the skip condition so that all preallocated volumes do get extended, to match user expectations. Therefore, this should not happen anymore.
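The reworked skip condition is essentially a one-line idea. The sketch below uses invented names (`needs_lv_extend`, `Format`, `Allocation`) rather than vdsm's actual classes, only to show which combinations skip the LV resize after the change.

```python
# Sketch of the relaxed skip condition, with invented names; not the vdsm patch.
# Old behaviour: any COW volume skipped the LV resize on extendSize, so a
# preallocated COW disk kept its original LV size (virtual size > actual size).
# New behaviour: only thin COW volumes skip it; preallocated COW LVs are extended.
from enum import Enum

class Format(Enum):
    RAW = "raw"
    COW = "cow"

class Allocation(Enum):
    THIN = "thin"
    PREALLOCATED = "preallocated"

def needs_lv_extend(fmt: Format, alloc: Allocation) -> bool:
    # Skip the lvextend only for thin COW volumes, which grow chunk by chunk.
    return not (fmt is Format.COW and alloc is Allocation.THIN)

assert needs_lv_extend(Format.COW, Allocation.PREALLOCATED)   # the fixed case
assert needs_lv_extend(Format.RAW, Allocation.PREALLOCATED)
assert not needs_lv_extend(Format.COW, Allocation.THIN)
```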
(In reply to Albert Esteve from comment #7)
> > one of the main issues is that it can make VMs with presumably preallocated disks pause, as those disks now need extensions to reach their full size
>
> We discussed this, and it should not be a big issue, as we extend fast and
> early. If the VM writes beyond the LV size but within its max capacity,
> we get a block threshold event from libvirt, and the disk is extended by one
> chunk. Same as for thin disks.

Indeed, not a big issue in the latest versions with the improved extension mechanism.

> Regardless, we reworked the skip condition so that all preallocated volumes
> do get extended, to match user expectations. Therefore, this should not
> happen anymore.

Thank you. It's good to keep preallocated disks as preallocated.

Verified: LV size and capacity size are the same after extension.

    # lvs 30ccf7d1-884b-4cb3-9062-35373cbfc2ec/3d7c0233-e12d-418d-923f-5d9567453352 --devicesfile ""
      LV                                   VG                                   Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert
      3d7c0233-e12d-418d-923f-5d9567453352 30ccf7d1-884b-4cb3-9062-35373cbfc2ec -wi-XX--X- 15.00g

    # lvdisplay 30ccf7d1-884b-4cb3-9062-35373cbfc2ec/3d7c0233-e12d-418d-923f-5d9567453352 --devicesfile ""
      --- Logical volume ---
      LV Path                /dev/30ccf7d1-884b-4cb3-9062-35373cbfc2ec/3d7c0233-e12d-418d-923f-5d9567453352
      LV Name                3d7c0233-e12d-418d-923f-5d9567453352
      VG Name                30ccf7d1-884b-4cb3-9062-35373cbfc2ec
      LV UUID                AR413I-5Ml3-rOsi-p8BN-QDrw-p0TS-EnSLck
      LV Write Access        read/write
      LV Status              available
      # open                 1
      LV Size                15.00 GiB
      Current LE             120
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     8192
      Block device           253:40

version: vdsm-4.50.3.7-1.el8ev.x86_64

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (vdsm bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1501