Bug 1707934
| Summary: | [downstream clone - 4.2.10] [downstream clone - 4.3.4] Moving disk results in wrong SIZE/CAP key in the volume metadata | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | RHV bug bot <rhv-bugzilla-bot> |
| Component: | vdsm | Assignee: | Vojtech Juranek <vjuranek> |
| Status: | CLOSED ERRATA | QA Contact: | Yosi Ben Shimon <ybenshim> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 4.2.8 | CC: | aefrat, aoconnor, bcholler, dfodor, eshenitz, jinjli, lsurette, mkalinin, nsoffer, pvilayat, Rhev-m-bugs, rhodain, royoung, srevivo, tnisan, ycui |
| Target Milestone: | ovirt-4.2.10 | Keywords: | ZStream |
| Target Release: | --- | Flags: | lsvaty: testing_plan_complete- |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1707932 | Environment: | |
| Last Closed: | 2019-05-23 11:31:44 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1700623, 1707932 | | |
| Bug Blocks: | | | |
Description

RHV bug bot 2019-05-08 17:36:51 UTC

Note: this was block storage to block storage. (Originally by Germano Veit Michel)

The create volume command on the destination SD looks right; not yet sure why the metadata is wrong.

2019-04-17 09:58:20,359+1000 INFO (jsonrpc/2) [vdsm.api] START createVolume(sdUUID=u'43c67df7-2293-4756-9aa3-de09d67d7050', spUUID=u'da42e5a5-f6f7-49b4-8256-2adf690ddf4c', imgUUID=u'b9fd9e73-32d3-473a-8cb5-d113602f76e1', size=u'10737418240', volFormat=4, preallocate=2, diskType=u'DATA', volUUID=u'359c2ea7-0a73-4296-8109-b799d9bfbd08', desc=None, srcImgUUID=u'b9fd9e73-32d3-473a-8cb5-d113602f76e1', srcVolUUID=u'5f478dfb-78bb-4217-ad63-6927dab7cc90', initialSize=u'976128931') from=::ffff:10.64.24.161,49332, flow_id=23cc02dc-502c-4d33-9271-3f5b6b89a69a, task_id=c2e90abb-fa9c-415d-b9f7-e9d13520971d (api:46)

(Originally by Germano Veit Michel)

The issue is this code in volume.py:
1148 # Override the size with the size of the parent
1149 size = volParent.getSize()
When creating a volume with a parent volume, vdsm silently overrides the size sent
by the engine.
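For illustration, a minimal sketch of that override in isolation; create_volume, requested_size, and the CAP dict below are hypothetical stand-ins for the real vdsm create flow, not its actual API:

```python
GiB = 1024**3

def create_volume(requested_size, parent_size=None):
    """Hypothetical, simplified model of the vdsm create-volume flow."""
    size = requested_size
    if parent_size is not None:
        # The override described above: when the new volume has a parent,
        # the size sent by engine is silently replaced by the parent's
        # size, so the new volume can record a stale SIZE/CAP value.
        size = parent_size
    return {"CAP": size}  # stands in for the volume SIZE/CAP metadata

# The leaf was extended to 20 GiB but the base is still 10 GiB; copying
# the leaf on top of the base records a stale capacity.
meta = create_volume(requested_size=20 * GiB, parent_size=10 * GiB)
assert meta["CAP"] == 10 * GiB  # wrong: engine asked for 20 GiB
```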
The code was added in
commit 8a0236a2fdf4e81f9b73e9279606053797e14753
Author: Federico Simoncelli <fsimonce>
Date: Tue Apr 17 18:33:51 2012 +0000
Unify the volume creation code in volume.create
This patch lays out the principles of the create volume flow (unified
both for block and file storage domains).
Signed-off-by: Federico Simoncelli <fsimonce>
Change-Id: I0e44da32351a420f0536505985586b24ded81a2a
Reviewed-on: http://gerrit.ovirt.org/3627
Reviewed-by: Allon Mureinik <amureini>
Reviewed-by: Ayal Baron <abaron>
The review does not exist on gerrit, and there is no info explaining why vdsm
needs to silently override the size sent by the engine and use the parent size.
Maybe this was needed in the past to work around some engine bug or an issue in
another vdsm flow.
So it seems that creating a volume chain with different sizes was always broken.
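For context, a chain whose leaf is larger than its base is easy to produce outside vdsm; a sketch using qemu-img (file names are placeholders, and the -F backing-format flag assumes a reasonably recent qemu-img):

```python
import subprocess

def run(*cmd):
    subprocess.check_call(cmd)

# Base volume of 10 GiB, qcow2 leaf on top, then the leaf is extended
# to 20 GiB: the chain now legitimately has different per-volume sizes,
# which is exactly the case the override above gets wrong.
run("qemu-img", "create", "-f", "qcow2", "base.qcow2", "10G")
run("qemu-img", "create", "-f", "qcow2",
    "-b", "base.qcow2", "-F", "qcow2", "leaf.qcow2")
run("qemu-img", "resize", "leaf.qcow2", "20G")
```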
I think we need to:
- remove this override
- check whether removing it breaks some other flow - it may break snapshot creation
  if the engine sends the wrong size; maybe this code "fixes" such a case
- verify the metadata size when preparing an existing volume, and fix inconsistencies
  between the qcow2 virtual size and the volume size (see the sketch after this comment)
(Originally by Nir Soffer)
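A minimal sketch of the repair idea from the last bullet above, assuming a qcow2 volume; qemu-img info and its JSON "virtual-size" key are real, while fix_metadata_capacity and the volume dict are hypothetical stand-ins for vdsm's prepare flow:

```python
import json
import subprocess

def qcow2_virtual_size(path):
    """Return the image's virtual size in bytes via qemu-img info."""
    out = subprocess.check_output(
        ["qemu-img", "info", "--output", "json", path])
    return json.loads(out)["virtual-size"]

def fix_metadata_capacity(volume):
    """Hypothetical repair step for prepare(): if the SIZE/CAP value
    stored in the volume metadata disagrees with the qcow2 virtual
    size, trust the qcow2 header and rewrite the metadata instead of
    failing the prepare."""
    actual = qcow2_virtual_size(volume["path"])
    if volume["capacity"] != actual:
        volume["capacity"] = actual  # in vdsm this would persist the MD
    return volume
```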
We have 2 patches in review:
- https://gerrit.ovirt.org/c/99539/ - fixes the root cause, creating volumes with bad metadata.
- https://gerrit.ovirt.org/c/99541 - currently fails to prepare a volume with bad metadata, so it would prevent corruption of the image when creating a snapshot, but it will fail to start a VM or move a disk with such a volume.

I think we can fix bad metadata when preparing a volume, since we already do this for the special zero metadata size. Both patches are small and simple, and a backport to 4.2 should be possible. Once this is fixed upstream we can evaluate the backport to 4.2. (Originally by Nir Soffer)

Removing master and 4.3 patches; only 4.2 patches should be attached here.

Tested using:
ovirt-engine-4.2.8.7-0.1.el7ev.noarch
Tried according to the steps in the description with:
1. iscsi -> iscsi (move disk to other domain - same type)
2. iscsi -> nfs (move disk to other domain - other type)
No failures or corruption were found.
The volume capacity was as expected after extending it.
For example (2 GiB in this case):
vdsm-client Volume getInfo storagepoolID=d51f09b5-3534-4fc5-bbeb-796172274255 storagedomainID=5e27bc90-38ba-417e-bcc7-e019223d5127 imageID=b2375de8-1b11-4f96-939e-837f5181cb8d volumeID=6127ad83-fb18-4459-a000-a2a8adf1e610
{
"status": "OK",
"lease": {
"path": "/dev/5e27bc90-38ba-417e-bcc7-e019223d5127/leases",
"owners": [],
"version": null,
"offset": 113246208
},
"domain": "5e27bc90-38ba-417e-bcc7-e019223d5127",
"capacity": "2147483648",
"voltype": "LEAF",
"description": "None",
"parent": "d10ef7b4-af9b-4f1a-bad7-2385a2ea1824",
"format": "COW",
"generation": 1,
"image": "b2375de8-1b11-4f96-939e-837f5181cb8d",
"uuid": "6127ad83-fb18-4459-a000-a2a8adf1e610",
"disktype": "DATA",
"legality": "LEGAL",
"mtime": "0",
"apparentsize": "1073741824",
"truesize": "1073741824",
"type": "SPARSE",
"children": [],
"pool": "",
"ctime": "1558528919"
}
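The same check can be scripted; a sketch assuming vdsm-client is available on the host and using the UUIDs above as placeholders:

```python
import json
import subprocess

# Placeholders; substitute the UUIDs from your environment.
ARGS = {
    "storagepoolID": "d51f09b5-3534-4fc5-bbeb-796172274255",
    "storagedomainID": "5e27bc90-38ba-417e-bcc7-e019223d5127",
    "imageID": "b2375de8-1b11-4f96-939e-837f5181cb8d",
    "volumeID": "6127ad83-fb18-4459-a000-a2a8adf1e610",
}

cmd = ["vdsm-client", "Volume", "getInfo"]
cmd += ["%s=%s" % (k, v) for k, v in ARGS.items()]
info = json.loads(subprocess.check_output(cmd))

# After extending the disk to 2 GiB, the metadata capacity should match.
assert int(info["capacity"]) == 2 * 1024**3
```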
Moving to VERIFIED
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:1261