Bug 1973345

Summary: Create template broken with block storage
Product: [oVirt] vdsm
Component: General
Version: 4.40.60.3
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: unspecified
Keywords: Regression
Target Milestone: ovirt-4.4.7
Target Release: ---
Reporter: Nir Soffer <nsoffer>
Assignee: Nir Soffer <nsoffer>
QA Contact: sshmulev
CC: bugs, dfodor, eshenitz, lsvaty, sfishbai
Flags: pm-rhel: ovirt-4.4+, lsvaty: blocker+
Type: Bug
oVirt Team: Storage
Clones: 1977232 (view as bug list)
Bug Blocks: 1977232
Last Closed: 2021-07-06 07:28:16 UTC

Description Nir Soffer 2021-06-17 16:37:33 UTC
Description of problem:

Creating a template from a VM fails when the source disk is on block storage
and the volume does not have a parent:

engine.log:

VDSM host3 command HSMGetAllTasksStatusesVDS failed: value=low level Image copy failed: ('Destination volume 0195a9f2-51f6-49c8-8520-9c630cbdfd59 error: Command [\'/usr/bin/qemu-img\', \'measure\', \'--output\', \'json\', \'-O\', \'qcow2\', \'json:{"file": {"driver": "file", "filename": "/rhev/data-center/mnt/blockSD/feab3738-c158-4d48-8a41-b5a95c057a50/images/3e8214f8-7a70-4e6b-a2fe-84c7103cd8ae/2eda0099-4dbc-4602-a418-fbd1ccee3357"}, "driver": "qcow2"}\'] failed with rc=1 out=b\'\' err=b\'qemu-img: Could not open \\\'json:{"file": {"driver": "file", "filename": "/rhev/data-center/mnt/blockSD/feab3738-c158-4d48-8a41-b5a95c057a50/images/3e8214f8-7a70-4e6b-a2fe-84c7103cd8ae/2eda0099-4dbc-4602-a418-fbd1ccee3357"}, "driver": "qcow2"}\\\': \\\'file\\\' driver requires \\\'/rhev/data-center/mnt/blockSD/feab3738-c158-4d48-8a41-b5a95c057a50/images/3e8214f8-7a70-4e6b-a2fe-84c7103cd8ae/2eda0099-4dbc-4602-a418-fbd1ccee3357\\\' to be a regular file\\n\'',) abortedcode=261

vdsm.log:

2021-06-17 03:14:17,571+0300 INFO  (tasks/1) [storage.ThreadPool.WorkerThread] START task 62624777-82b1-4d25-8db7-1bffbf34ff90 (cmd=<bound method Task.commit of <vdsm.storage.task.Task object at 0x7fa62aee1f60>>, args=None) (threadPool:146)
2021-06-17 03:14:17,925+0300 INFO  (tasks/1) [storage.Image] sdUUID=feab3738-c158-4d48-8a41-b5a95c057a50 vmUUID= srcImgUUID=3e8214f8-7a70-4e6b-a2fe-84c7103cd8ae srcVolUUID=2eda0099-4dbc-4602-a418-fbd1ccee3357 dstImgUUID=77eec8ed-7dce-4250-8575-425d661e64e6 dstVolUUID=0195a9f2-51f6-49c8-8520-9c630cbdfd59 dstSdUUID=feab3738-c158-4d48-8a41-b5a95c057a50 volType=6 volFormat=COW preallocate=SPARSE force=False postZero=False discard=True (image:635)
2021-06-17 03:14:17,926+0300 INFO  (tasks/1) [storage.VolumeManifest] Volume: preparing volume feab3738-c158-4d48-8a41-b5a95c057a50/2eda0099-4dbc-4602-a418-fbd1ccee3357 (volume:599)
2021-06-17 03:14:17,934+0300 INFO  (tasks/1) [storage.LVM] Activating lvs: vg=feab3738-c158-4d48-8a41-b5a95c057a50 lvs=['2eda0099-4dbc-4602-a418-fbd1ccee3357'] (lvm:1755)
2021-06-17 03:14:18,298+0300 ERROR (tasks/1) [storage.Image] Unexpected error (image:729)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/image.py", line 673, in copyCollapsed
    sdUUID, volParams, dstSdUUID, dstVolFormat)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/image.py", line 850, in calculate_vol_alloc
    return self.estimate_qcow2_size(src_vol_params, dst_sd_id)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/image.py", line 117, in estimate_qcow2_size
    output_format=qemuimg.FORMAT.QCOW2)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/qemuimg.py", line 156, in measure
    out = _run_cmd(cmd)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/qemuimg.py", line 578, in _run_cmd
    raise cmdutils.Error(cmd, rc, out, err)
vdsm.common.cmdutils.Error: Command ['/usr/bin/qemu-img', 'measure', '--output', 'json', '-O', 'qcow2', 'json:{"file": {"driver": "file", "filename": "/rhev/data-center/mnt/blockSD/feab3738-c158-4d48-8a41-b5a95c057a50/images/3e8214f8-7a70-4e6b-a2fe-84c7103cd8ae/2eda0099-4dbc-4602-a418-fbd1ccee3357"}, "driver": "qcow2"}'] failed with rc=1 out=b'' err=b'qemu-img: Could not open \'json:{"file": {"driver": "file", "filename": "/rhev/data-center/mnt/blockSD/feab3738-c158-4d48-8a41-b5a95c057a50/images/3e8214f8-7a70-4e6b-a2fe-84c7103cd8ae/2eda0099-4dbc-4602-a418-fbd1ccee3357"}, "driver": "qcow2"}\': \'file\' driver requires \'/rhev/data-center/mnt/blockSD/feab3738-c158-4d48-8a41-b5a95c057a50/images/3e8214f8-7a70-4e6b-a2fe-84c7103cd8ae/2eda0099-4dbc-4602-a418-fbd1ccee3357\' to be a regular file\n'

The issue is that the code uses:

    'json:{"file": {"driver": "file", "filename": "/path"}, "driver": "qcow2"}'

when /path is on block storage. It should be:

    'json:{"file": {"driver": "host_device", "filename": "/path"}, "driver": "qcow2"}'

The root cause is calling qemuimg.measure() without is_block=True.
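
A minimal sketch of the intended behavior, for illustration only (the helper
name make_json_uri is hypothetical, not the actual vdsm API; is_block is the
flag mentioned above):

    import json

    def make_json_uri(path, image_format, is_block=False):
        # qemu's "file" protocol driver requires a regular file; block
        # devices (LVs on iSCSI/FC domains) need "host_device" instead.
        driver = "host_device" if is_block else "file"
        return "json:" + json.dumps(
            {"file": {"driver": driver, "filename": path},
             "driver": image_format})

For example, make_json_uri("/path", "qcow2", is_block=True) produces the
second (correct) form above.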

This is a regression introduced in:
$ git describe 1718b8784e841405574c44abe2357997e3235723
v4.40.32-4-g1718b8784

Before this change, qemu-img measure was called with a plain path instead of
a json: URI, and qemu-img detected the right driver.
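
For illustration only (not vdsm's actual code), the pre-regression style of
invocation, where qemu-img itself probes whether the path is a regular file
or a block device (the path is a placeholder):

    import subprocess

    # With a bare path, qemu-img selects the protocol driver itself,
    # so the same command works on files and block devices alike.
    subprocess.run(
        ["/usr/bin/qemu-img", "measure", "--output", "json",
         "-O", "qcow2", "-f", "qcow2", "/path/to/volume"],
        check=True, capture_output=True)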

Version-Release number of selected component (if applicable):
vdsm-4.40.70.4

How reproducible:
100%

Steps to Reproduce:
1. Create a thin VM on block storage (iSCSI/FC).
   The VM disk must not have a parent.
2. Create a template from the VM using the qcow2 format.

The operation fails.

Comment 1 RHEL Program Management 2021-06-18 11:27:44 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 2 Nir Soffer 2021-06-19 15:43:38 UTC
Should be available in vdsm 4.40.70.5.

Comment 7 Nir Soffer 2021-06-29 11:26:19 UTC
Reproduced on RHEL 8.5 nightly with:
# rpm -q qemu-img
qemu-img-6.0.0-19.module+el8.5.0+11385+6e7d542e.x86_64

With the fix, creating a template works again.

Comment 8 Sandro Bonazzola 2021-07-06 07:28:16 UTC
This bug is included in the oVirt 4.4.7 release, published on July 6th 2021.

Since the problem described in this bug report should be resolved in the oVirt 4.4.7 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.