Bug 1588400 - Error changing CD for a running VM when ISO image is on a block domain
Summary: Error changing CD for a running VM when ISO image is on a block domain
Keywords:
Status: CLOSED DUPLICATE of bug 1589763
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.2.3.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high (3 votes)
Target Milestone: ovirt-4.2.8
Assignee: Tal Nisan
QA Contact: Elad
URL:
Whiteboard:
Duplicates: 1621946 (view as bug list)
Depends On:
Blocks: 1589763 1660199
 
Reported: 2018-06-07 08:32 UTC by Gianluca Cecchi
Modified: 2024-03-25 15:05 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned to: 1589763 (view as bug list)
Environment:
Last Closed: 2018-12-30 08:49:33 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-4.2+
rule-engine: exception+


Attachments
engine.log (424.99 KB, text/plain)
2018-06-07 08:40 UTC, Gianluca Cecchi
daemon.log from /var/log/ovirt-imageio-daemon/ on the host where the upload happened (33.25 KB, text/plain)
2018-06-07 08:45 UTC, Gianluca Cecchi

Description Gianluca Cecchi 2018-06-07 08:32:08 UTC
Description of problem:

I have a CentOS 7 VM. I successfully uploaded an ISO image to the disks of a block-based data storage domain (iSCSI).
Now, in the web admin portal, I select the VM, click the three dots in the top right corner, then Change CD.
The dialog shows the [Eject] entry, and in the dropdown I see the two ISO images I have (theoretically) uploaded so far.
I select an image and then click OK.

I get a window titled "Operation canceled" with the content:
Error while executing action Change CD: Drive image file could not be found

Version-Release number of selected component (if applicable):
4.2.3.7-1.el7

How reproducible:
always

Steps to Reproduce:
1. Select a storage domain and upload an ISO.
2. Wait for completion; the ISO will then be listed among the disks of the storage domain.
3. Try to attach the ISO to a VM: the ISO appears in the dropdown list, so select it.

Actual results:
Error while executing action Change CD: Drive image file could not be found

Expected results:
The ISO is attached to the guest and can be used.

Additional info:

I'm going to provide logs.
I have tried the same thing on the same oVirt version but with an NFS-based storage domain as the upload target, and in that environment it works as expected.
So the problem seems specific to block-based data domains, where presumably some sort of link to an LVM structure should be created.
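
For reference, the same Change CD operation can also be driven outside the web admin portal. This is only a rough sketch using the ovirt-engine-sdk4 Python bindings; the connection details, the VM name and the ISO disk ID are placeholders, not values from this report:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Placeholder connection details; adjust for your engine.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        ca_file='ca.pem',
    )

    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=centos7vm')[0]   # hypothetical VM name

    cdroms_service = vms_service.vm_service(vm.id).cdroms_service()
    cdrom = cdroms_service.list()[0]

    # current=True changes the CD of the running VM, like Change CD in the UI.
    cdroms_service.cdrom_service(cdrom.id).update(
        cdrom=types.Cdrom(file=types.File(id='ISO-DISK-UUID')),  # placeholder disk ID
        current=True,
    )

    connection.close()

This should exercise the same engine ChangeDiskCommand that shows up in the attached engine.log.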

Comment 1 Gianluca Cecchi 2018-06-07 08:38:30 UTC
The error in engine.log is

2018-05-30 11:58:58,772+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ChangeDiskVDSCommand] (default task-9) [d9aae17f-2a49-4d70-a909-6395d61d3ab1] Failed in 'ChangeDiskVDS' method
2018-05-30 11:58:58,775+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-9) [d9aae17f-2a49-4d70-a909-6395d61d3ab1] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ov200 command ChangeDiskVDS failed: Drive image file could not be found

See engine.log attached

Comment 2 Gianluca Cecchi 2018-06-07 08:40:23 UTC
Created attachment 1448647 [details]
engine.log

At 11:58:57 you can see the point where I try to attach the ISO.

The upload seemed to have completed OK:

2018-05-30 11:58:34,679+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [2b74b21b-7a68-45ae-9b09-1e6cc13b63ce] EVENT_ID: TRANSFER_IMAGE_SUCCEEDED(1,032), Image Upload with disk win7_32b_eng_sp1.iso succeeded.

but probably something went wrong. See my further comments on this.

Comment 3 Gianluca Cecchi 2018-06-07 08:45:21 UTC
Created attachment 1448648 [details]
daemon.log from /var/log/ovirt-imageio-daemon/ on the host where the upload happened

You can see that it apparently completed OK:
2018-05-30 11:58:19,820 INFO    (Thread-12) [images] Writing 40026112 bytes at offset 524288000 flush True to /rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/71a84a1c-0c53-4bb9-9474-deb92419e955/5404add1-cac4-4129-b8f5-2e7b2fc0da86 for ticket dcacf739-241d-472b-87b7-e0653424d3e5

But if I go to the host (which is currently the SPM) I see this:

The directory exists and contains a link:
[root@ov200 tmp]# ll /rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/71a84a1c-0c53-4bb9-9474-deb92419e955/
total 0
lrwxrwxrwx. 1 vdsm kvm 78 May 30 11:58 5404add1-cac4-4129-b8f5-2e7b2fc0da86 -> /dev/fa33df49-b09d-4f86-9719-ede649542c21/5404add1-cac4-4129-b8f5-2e7b2fc0da86
[root@ov200 tmp]# 

But the link points to a nonexistent device file:
[root@ov200 tmp]# ll /dev/fa33df49-b09d-4f86-9719-ede649542c21/5404add1-cac4-4129-b8f5-2e7b2fc0da86
ls: cannot access /dev/fa33df49-b09d-4f86-9719-ede649542c21/5404add1-cac4-4129-b8f5-2e7b2fc0da86: No such file or directory
[root@ov200 tmp]# 

So I have a link that actually points to a nonexistent device file...
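
The same check can be scripted. This is only a small helper automating the manual inspection above; the image directory path is the one from this host and is environment-specific:

    import os

    # Image directory from this report; the storage domain and image UUIDs
    # are environment-specific and must be adjusted.
    image_dir = ("/rhev/data-center/mnt/blockSD/"
                 "fa33df49-b09d-4f86-9719-ede649542c21/images/"
                 "71a84a1c-0c53-4bb9-9474-deb92419e955")

    for name in os.listdir(image_dir):
        link = os.path.join(image_dir, name)
        target = os.path.realpath(link)
        # os.path.exists() follows symlinks, so a dangling link reports False.
        state = "OK" if os.path.exists(link) else "DANGLING"
        print(state, link, "->", target)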

Comment 4 Gianluca Cecchi 2018-06-07 08:48:26 UTC
Inside the directory the link points to I have this:

[root@ov200 tmp]# ll /dev/fa33df49-b09d-4f86-9719-ede649542c21/
total 0
lrwxrwxrwx. 1 root root 8 May 28 14:34 04548ff7-863d-4959-9f84-abf18cf38584 -> ../dm-19
lrwxrwxrwx. 1 root root 8 May 24 16:03 177ac2a5-49d4-4d01-a39a-b2e4b984f57c -> ../dm-16
lrwxrwxrwx. 1 root root 8 May 24 16:03 20daa039-9648-4998-b082-0e94dce0001b -> ../dm-15
lrwxrwxrwx. 1 root root 8 May 28 14:34 252baa4d-54b7-4f36-88e9-3a59e97cec7c -> ../dm-22
lrwxrwxrwx. 1 root root 8 May 24 16:03 3569d3ae-a366-4f33-8667-036e6874b873 -> ../dm-17
lrwxrwxrwx. 1 root root 8 May 28 14:34 58001deb-908e-4fd2-a1cf-f00b84ac600b -> ../dm-24
lrwxrwxrwx. 1 root root 8 May 28 14:34 62e3f367-545b-4608-bc8b-45ac08944acf -> ../dm-13
lrwxrwxrwx. 1 root root 8 May 28 14:34 634cbebb-0303-410f-8331-f51274d38e29 -> ../dm-23
lrwxrwxrwx. 1 root root 8 May 28 14:34 779e2720-cb5b-447a-ac7d-6264a559b26e -> ../dm-12
lrwxrwxrwx. 1 root root 8 May 28 14:34 8c44b67c-2408-454b-af43-352cdcc2f6e7 -> ../dm-20
lrwxrwxrwx. 1 root root 8 May 28 15:19 9af3574d-dc83-485f-b906-0970ad09b660 -> ../dm-25
lrwxrwxrwx. 1 root root 8 May 28 14:34 9e5a64b4-e3cc-4d4e-aa4f-35a9ae3f2f6f -> ../dm-21
lrwxrwxrwx. 1 root root 8 May 24 15:57 a3f315ab-a5c4-4366-bbe7-2fee006fb4ad -> ../dm-10
lrwxrwxrwx. 1 root root 8 May 28 15:13 aeb2e4bf-54df-4324-b1ee-283890afe4cf -> ../dm-27
lrwxrwxrwx. 1 root root 8 May 28 14:34 b36fbe5b-56e0-4f23-a9b7-cd80f7f5c0ab -> ../dm-18
lrwxrwxrwx. 1 root root 8 May 28 14:34 c0414aed-1459-48d7-ae22-5b0215c7e7de -> ../dm-14
lrwxrwxrwx. 1 root root 8 May 24 15:57 d83f5aaf-b4ae-4c20-9f05-bf9162c6cbf8 -> ../dm-11
lrwxrwxrwx. 1 root root 8 May 28 15:13 e6d44b9c-a9ae-426b-8c16-2c0f43e0faed -> ../dm-28
lrwxrwxrwx. 1 root root 8 May 28 15:13 ec1f2904-4263-4322-8ae8-35c91759f88d -> ../dm-26
lrwxrwxrwx. 1 root root 8 May 28 15:13 f772414d-a1b9-45fc-a8e9-640f62592333 -> ../dm-29
lrwxrwxrwx. 1 root root 7 May 24 15:55 ids -> ../dm-4
lrwxrwxrwx. 1 root root 7 May 24 17:16 inbox -> ../dm-8
lrwxrwxrwx. 1 root root 7 Jun  7 10:47 leases -> ../dm-5
lrwxrwxrwx. 1 root root 7 May 30 13:09 master -> ../dm-9
lrwxrwxrwx. 1 root root 7 Jun  5 16:16 metadata -> ../dm-3
lrwxrwxrwx. 1 root root 7 May 30 13:09 outbox -> ../dm-6
lrwxrwxrwx. 1 root root 7 May 24 15:55 xleases -> ../dm-7
[root@ov200 tmp]# 


[root@ov200 tmp]# ll -L /dev/fa33df49-b09d-4f86-9719-ede649542c21/
total 0
brw-rw----. 1 vdsm qemu    253, 19 May 28 14:34 04548ff7-863d-4959-9f84-abf18cf38584
brw-rw----. 1 vdsm qemu    253, 16 May 24 16:03 177ac2a5-49d4-4d01-a39a-b2e4b984f57c
brw-rw----. 1 vdsm qemu    253, 15 May 24 16:03 20daa039-9648-4998-b082-0e94dce0001b
brw-rw----. 1 vdsm qemu    253, 22 May 28 14:34 252baa4d-54b7-4f36-88e9-3a59e97cec7c
brw-rw----. 1 vdsm qemu    253, 17 May 24 16:03 3569d3ae-a366-4f33-8667-036e6874b873
brw-rw----. 1 vdsm qemu    253, 24 May 28 14:34 58001deb-908e-4fd2-a1cf-f00b84ac600b
brw-rw----. 1 vdsm qemu    253, 13 May 28 14:34 62e3f367-545b-4608-bc8b-45ac08944acf
brw-rw----. 1 vdsm qemu    253, 23 May 28 14:34 634cbebb-0303-410f-8331-f51274d38e29
brw-rw----. 1 vdsm qemu    253, 12 May 28 14:34 779e2720-cb5b-447a-ac7d-6264a559b26e
brw-rw----. 1 vdsm qemu    253, 20 May 28 14:34 8c44b67c-2408-454b-af43-352cdcc2f6e7
brw-rw----. 1 vdsm qemu    253, 25 May 28 15:19 9af3574d-dc83-485f-b906-0970ad09b660
brw-rw----. 1 vdsm qemu    253, 21 May 28 14:34 9e5a64b4-e3cc-4d4e-aa4f-35a9ae3f2f6f
brw-rw----. 1 vdsm qemu    253, 10 May 24 15:57 a3f315ab-a5c4-4366-bbe7-2fee006fb4ad
brw-rw----. 1 vdsm qemu    253, 27 May 28 15:13 aeb2e4bf-54df-4324-b1ee-283890afe4cf
brw-rw----. 1 vdsm qemu    253, 18 May 28 14:34 b36fbe5b-56e0-4f23-a9b7-cd80f7f5c0ab
brw-rw----. 1 vdsm qemu    253, 14 May 28 14:34 c0414aed-1459-48d7-ae22-5b0215c7e7de
brw-rw----. 1 vdsm qemu    253, 11 May 24 15:57 d83f5aaf-b4ae-4c20-9f05-bf9162c6cbf8
brw-rw----. 1 vdsm qemu    253, 28 May 28 15:13 e6d44b9c-a9ae-426b-8c16-2c0f43e0faed
brw-rw----. 1 vdsm qemu    253, 26 May 28 15:13 ec1f2904-4263-4322-8ae8-35c91759f88d
brw-rw----. 1 vdsm qemu    253, 29 May 28 15:13 f772414d-a1b9-45fc-a8e9-640f62592333
brw-rw----. 1 vdsm sanlock 253,  4 Jun  7 10:47 ids
brw-------. 1 vdsm qemu    253,  8 May 24 17:16 inbox
brw-rw----. 1 vdsm sanlock 253,  5 Jun  7 10:48 leases
brw-rw----. 1 root disk    253,  9 May 30 13:09 master
brw-------. 1 vdsm qemu    253,  3 Jun  5 16:16 metadata
brw-------. 1 vdsm qemu    253,  6 May 30 13:09 outbox
brw-rw----. 1 vdsm sanlock 253,  7 May 24 15:55 xleases
[root@ov200 tmp]#

Comment 5 Gianluca Cecchi 2018-06-07 14:26:47 UTC
Hello, 
I have also reproduced the same problem (again on an iSCSI-based storage domain) on RHV, version 4.2.3.6-0.1.el7.
I have opened case 02115992 for it and uploaded the ovirt-log-collector tar.gz to the case, so you can also cross-check with that environment's data to find a solution.
HIH,
Gianluca

Comment 6 Gianluca Cecchi 2018-06-29 14:42:08 UTC
Is there anything to test as a solution for this?
It seems the target is 4.2.5, so updating to the just-released 4.2.4 will not solve it, correct?

Comment 7 Tal Nisan 2018-07-17 12:35:21 UTC
(In reply to Gianluca Cecchi from comment #6)
> Anything to test for this as a solution?
> It seems target is 4.2.5 so updating to the just released 4.2.4 will not
> solve, correct?

No, it is targeted for 4.2.6; it only occurs on block domains though.

Comment 8 Gianluca Cecchi 2018-07-17 12:56:34 UTC
In my opinion, the scenario where block domains (iSCSI or FC SAN) are already in place as oVirt/RHV storage domains is exactly the one where you typically don't need or want to set up NFS shares "only" to host ISO and possibly export (even if deprecated) domains, and so it is where this new feature would be most beneficial.

Comment 9 Robert McSwain 2018-08-18 16:28:38 UTC
This is keeping a customer from rolling out their RHV environment: it blocks installation of Windows systems, which need .vfd files (these no longer seem to work in a data storage domain the way they historically worked in ISO domains) and the ability to swap the attached ISO from Windows.iso to virtio-win.iso.

The customer would be willing to update to a newer minor release if needed.

Comment 11 Tal Nisan 2018-08-27 15:09:06 UTC
*** Bug 1621946 has been marked as a duplicate of this bug. ***

Comment 12 Robert McSwain 2018-08-28 19:01:14 UTC
Are there any new updates on this bug? It's holding up a rollout, and right now it's not apparent exactly how customers should use a standard storage domain with .vfd files the way we used to with ISO domains.

Comment 14 Nir Soffer 2018-10-17 22:48:56 UTC
This looks like an engine issue, since vdsm supports preparing a cdrom using the PDIV
(pool, domain, image, volume) format:

    def prepareVolumePath(self, drive, vmId=None):
        if type(drive) is dict:
            device = drive['device']
            # PDIV drive format
            # Since version 4.2 cdrom may use a PDIV format
            if device in ("cdrom", "disk") and isVdsmImage(drive):
                res = self.irs.prepareImage(
                    drive['domainID'], drive['poolID'],
                    drive['imageID'], drive['volumeID'])

So probably the engine is not sending the cdrom drive in PDIV format to VM.changeCD.
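
For illustration only, a drive specification in PDIV form, matching the keys that the prepareVolumePath() excerpt above reads, would look roughly like this. The domain, image and volume UUIDs are taken from the log in the next comment; the poolID is a made-up placeholder:

    # Rough sketch of a PDIV-style cdrom drive dict, not an actual engine payload.
    cdrom_drive = {
        "device": "cdrom",
        "domainID": "fa33df49-b09d-4f86-9719-ede649542c21",  # storage domain UUID
        "poolID":   "00000000-0000-0000-0000-000000000000",  # placeholder pool (data center) UUID
        "imageID":  "321d8f35-854a-4415-a78a-a30b197a29a5",  # disk image UUID
        "volumeID": "8df78e5b-e79b-4b29-96ae-330fec2cdc9d",  # volume UUID
    }

With such a dict, vdsm would resolve the device path itself via irs.prepareImage(), which also activates the volume on block storage, instead of relying on a path computed by the engine.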

Comment 15 Nir Soffer 2018-10-17 23:01:48 UTC
The engine sends a path - this is incorrect for two reasons:
1. The engine should not assume any path on the host, since the file system layout
   is not part of the vdsm API.
2. A cdrom on a block device does not exist before the image is prepared.

2018-05-30 11:57:04,692+02 INFO  [org.ovirt.engine.core.bll.storage.disk.ChangeDiskCommand] (default task-15) [7bb55228-dbf3-49a9-aa03-148433cca656] Running command: ChangeDiskCommand internal: false. Entities affected :  ID: 2e571c77-bae1-4c1c-bf98-effaf9fed741 Type: VMAction group CHANGE_VM_CD with role type USER
2018-05-30 11:57:04,706+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ChangeDiskVDSCommand] (default task-15) [7bb55228-dbf3-49a9-aa03-148433cca656] START, ChangeDiskVDSCommand(HostName = ov200, ChangeDiskVDSCommandParameters:{hostId='d16e723c-b44c-4c1c-be76-c67911e47ccd', vmId='2e571c77-bae1-4c1c-bf98-effaf9fed741', iface='ide', index='2', diskPath='/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/321d8f35-854a-4415-a78a-a30b197a29a5/8df78e5b-e79b-4b29-96ae-330fec2cdc9d'}), log id: 62555af

Fix:
- The engine should send the PDIV format in the request
- Update the vdsm schema to document the option to use the PDIV format in VM.changeCD

Comment 16 Tal Nisan 2018-12-30 08:49:33 UTC

*** This bug has been marked as a duplicate of bug 1589763 ***

