Bug 2192607
| Summary: | uploading ISO to data domain FC / NFS fails to correctly upload the disk. | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Ulhas Surse <usurse> |
| Component: | ovirt-imageio | Assignee: | Arik <ahadas> |
| Status: | CLOSED WORKSFORME | QA Contact: | Shir Fishbain <sfishbai> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.5.3 | CC: | ahadas, michal.skrivanek |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-07-04 22:49:45 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Ulhas Surse
2023-05-02 13:15:26 UTC

Comment
Michal Skrivanek

Arik, any idea? also, logs?

Comment
Arik

(In reply to Michal Skrivanek from comment #2)
> Arik, any idea?

Nope, need to get logs.

Comment
Arik

I didn't manage to reproduce this issue, so I'll describe what the process looked like for me:
1. I've uploaded an ISO of Fedora 38 server to a Fibre Channel data domain
2. After the upload, the link on a different host (not the host that was used for the upload) is broken:
# ls -l /rhev/data-center/mnt/blockSD/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/images/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a
lrwxrwxrwx. 1 vdsm kvm 78 Jul 4 23:54 /rhev/data-center/mnt/blockSD/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/images/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a -> /dev/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a
# file /rhev/data-center/mnt/blockSD/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/images/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a
/rhev/data-center/mnt/blockSD/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/images/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a: broken symbolic link to /dev/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a
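A dangling link on a host that did not perform the upload is expected as long as the LV is inactive there, because the link points at a /dev node that LVM only creates on activation. A minimal sketch to tell an inactive LV apart from a genuinely missing volume (VG/LV names taken from this reproduction):

VG=b3e5bc8b-e23c-47f6-b95c-1211716e4a48
LV=d5f54f7c-2c1f-481f-a04e-ca4f5626e76a

# Is the LV known to LVM at all, and is it active? The 5th lv_attr
# character is 'a' for an active LV; '-' means the LV exists but is
# deactivated, so /dev/$VG/$LV (the link target) is absent.
lvs --noheadings -o lv_name,lv_attr "$VG/$LV"

# Show where the link points without following it
readlink "/rhev/data-center/mnt/blockSD/$VG/images/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4/$LV"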
3. I started a VM with a blank disk and that ISO attached as a bootable CD-ROM; in the VDSM log I see that the volume is activated:
2023-07-05 01:30:30,789+0300 INFO (vm/34dd1fbf) [storage.lvm] Activating lvs: vg=b3e5bc8b-e23c-47f6-b95c-1211716e4a48 lvs=['d5f54f7c-2c1f-481f-a04e-ca4f5626e76a'] (lvm:1839)
2023-07-05 01:30:30,891+0300 INFO (vm/34dd1fbf) [storage.storagedomain] Creating image run directory '/run/vdsm/storage/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4' (blockSD:1349)
2023-07-05 01:30:30,891+0300 INFO (vm/34dd1fbf) [storage.fileutils] Creating directory: /run/vdsm/storage/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4 mode: None (fileUtils:214)
2023-07-05 01:30:30,892+0300 INFO (vm/34dd1fbf) [storage.storagedomain] Creating symlink from /dev/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a to /run/vdsm/storage/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a (blockSD:1354)
2023-07-05 01:30:30,980+0300 DEBUG (vm/34dd1fbf) [storage.misc.exccmd] /usr/bin/taskset --cpu-list 0-7 /usr/bin/dd iflag=direct skip=2192 bs=512 if=/dev/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/metadata count=1 (cwd None) (commands:217)
2023-07-05 01:30:30,992+0300 DEBUG (vm/34dd1fbf) [storage.misc.exccmd] SUCCESS: <err> = b'1+0 records in\n1+0 records out\n512 bytes copied, 0.000297563 s, 1.7 MB/s\n'; <rc> = 0 (commands:230)
2023-07-05 01:30:30,992+0300 DEBUG (vm/34dd1fbf) [storage.misc] err: [b'1+0 records in', b'1+0 records out', b'512 bytes copied, 0.000297563 s, 1.7 MB/s'], size: 512 (misc:95)
2023-07-05 01:30:30,993+0300 DEBUG (vm/34dd1fbf) [storage.misc.exccmd] /usr/bin/taskset --cpu-list 0-7 /usr/bin/dd iflag=direct skip=2192 bs=512 if=/dev/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/metadata count=1 (cwd None) (commands:217)
2023-07-05 01:30:31,003+0300 DEBUG (vm/34dd1fbf) [storage.misc.exccmd] SUCCESS: <err> = b'1+0 records in\n1+0 records out\n512 bytes copied, 0.000217565 s, 2.4 MB/s\n'; <rc> = 0 (commands:230)
2023-07-05 01:30:31,003+0300 DEBUG (vm/34dd1fbf) [storage.misc] err: [b'1+0 records in', b'1+0 records out', b'512 bytes copied, 0.000217565 s, 2.4 MB/s'], size: 512 (misc:95)
2023-07-05 01:30:31,017+0300 INFO (vm/34dd1fbf) [storage.storagedomain] Creating symlink from /run/vdsm/storage/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4 to /rhev/data-center/mnt/blockSD/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/images/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4 (blockSD:1319)
2023-07-05 01:30:31,017+0300 DEBUG (vm/34dd1fbf) [storage.storagedomain] path to image directory already exists: /rhev/data-center/mnt/blockSD/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/images/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4 (blockSD:1325)
2023-07-05 01:30:31,018+0300 INFO (vm/34dd1fbf) [vdsm.api] FINISH prepareImage return={'path': '/rhev/data-center/mnt/blockSD/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/images/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a', 'info': {'type': 'block', 'path': '/rhev/data-center/mnt/blockSD/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/images/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a'}, 'imgVolumesInfo': [{'domainID': 'b3e5bc8b-e23c-47f6-b95c-1211716e4a48', 'imageID': 'ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4', 'volumeID': 'd5f54f7c-2c1f-481f-a04e-ca4f5626e76a', 'path': '/rhev/data-center/mnt/blockSD/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/images/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a', 'leasePath': '/dev/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/leases', 'leaseOffset': 114294784}]} from=internal, task_id=c1ad3634-27b4-4568-b79d-216274017a80 (api:37)
2023-07-05 01:30:31,018+0300 DEBUG (vm/34dd1fbf) [storage.taskmanager.task] (Task='c1ad3634-27b4-4568-b79d-216274017a80') finished: {'path': '/rhev/data-center/mnt/blockSD/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/images/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a', 'info': {'type': 'block', 'path': '/rhev/data-center/mnt/blockSD/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/images/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a'}, 'imgVolumesInfo': [{'domainID': 'b3e5bc8b-e23c-47f6-b95c-1211716e4a48', 'imageID': 'ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4', 'volumeID': 'd5f54f7c-2c1f-481f-a04e-ca4f5626e76a', 'path': '/rhev/data-center/mnt/blockSD/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/images/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a', 'leasePath': '/dev/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/leases', 'leaseOffset': 114294784}]} (task:1185)
2023-07-05 01:30:31,018+0300 DEBUG (vm/34dd1fbf) [storage.taskmanager.task] (Task='c1ad3634-27b4-4568-b79d-216274017a80') moving from state finished -> state finished (task:607)
2023-07-05 01:30:31,018+0300 DEBUG (vm/34dd1fbf) [storage.resourcemanager] Owner.releaseAll resources %s (resourceManager:720)
2023-07-05 01:30:31,018+0300 DEBUG (vm/34dd1fbf) [storage.resourcemanager] Trying to release resource '00_storage.b3e5bc8b-e23c-47f6-b95c-1211716e4a48' (resourceManager:529)
2023-07-05 01:30:31,018+0300 DEBUG (vm/34dd1fbf) [storage.resourcemanager] Released resource '00_storage.b3e5bc8b-e23c-47f6-b95c-1211716e4a48' (0 active users) (resourceManager:547)
2023-07-05 01:30:31,018+0300 DEBUG (vm/34dd1fbf) [storage.resourcemanager] Resource '00_storage.b3e5bc8b-e23c-47f6-b95c-1211716e4a48' is free, finding out if anyone is waiting for it. (resourceManager:553)
2023-07-05 01:30:31,018+0300 DEBUG (vm/34dd1fbf) [storage.resourcemanager] No one is waiting for resource '00_storage.b3e5bc8b-e23c-47f6-b95c-1211716e4a48', Clearing records. (resourceManager:561)
2023-07-05 01:30:31,018+0300 DEBUG (vm/34dd1fbf) [storage.taskmanager.task] (Task='c1ad3634-27b4-4568-b79d-216274017a80') ref 0 aborting False (task:983)
2023-07-05 01:30:31,018+0300 INFO (vm/34dd1fbf) [vds] prepared volume path: /rhev/data-center/mnt/blockSD/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/images/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a (clientIF:506)
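In effect, the activation and link steps logged above boil down to roughly the following (a simplified sketch of the prepareImage flow for illustration only, not VDSM's actual code; names are taken from the log):

VG=b3e5bc8b-e23c-47f6-b95c-1211716e4a48
IMG=ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4
VOL=d5f54f7c-2c1f-481f-a04e-ca4f5626e76a

# "Activating lvs" (lvm:1839): this is what makes /dev/$VG/$VOL appear
lvchange -ay "$VG/$VOL"

# Run directory and volume link (blockSD:1349, blockSD:1354)
mkdir -p "/run/vdsm/storage/$VG/$IMG"
ln -s "/dev/$VG/$VOL" "/run/vdsm/storage/$VG/$IMG/$VOL"

# Image link under /rhev (blockSD:1319); here it already existed, which is
# why the log reports "path to image directory already exists" -- but its
# target under /dev now resolves, so the link is no longer broken
ln -s "/run/vdsm/storage/$VG/$IMG" "/rhev/data-center/mnt/blockSD/$VG/images/$IMG"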
4. The VM starts with:
<disk device="cdrom" snapshot="no" type="block">
<driver error_policy="report" name="qemu" type="raw"/>
<source dev="/rhev/data-center/mnt/blockSD/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/images/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a">
<seclabel model="dac" relabel="no" type="none"/>
</source>
<target bus="sata" dev="sdc"/>
<readonly/>
<alias name="ua-e27778eb-586e-403d-b7cc-841a5ea5618d"/>
<address bus="0" controller="0" target="0" type="drive" unit="2"/>
<boot order="1"/>
</disk>
<disk device="disk" snapshot="no" type="file">
<target bus="scsi" dev="sda"/>
<source file="/rhev/data-center/mnt/mantis-nfs-lif2.lab.eng.tlv2.redhat.com:_nas01_ge__12__nfs__1/01264f1b-be43-456b-ac48-e2be7273a132/images/4d6c4eba-b124-42c8-9b01-cc28f75908cd/e71cb33e-ba2c-4710-a6f9-84fb244d0c87">
<seclabel model="dac" relabel="no" type="none"/>
</source>
<driver cache="none" error_policy="stop" io="threads" name="qemu" type="qcow2"/>
<alias name="ua-4d6c4eba-b124-42c8-9b01-cc28f75908cd"/>
<address bus="0" controller="0" target="0" type="drive" unit="0"/>
<serial>4d6c4eba-b124-42c8-9b01-cc28f75908cd</serial>
</disk>
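The cdrom source above is the block-domain path that prepareImage returned, while the OS disk lives on an NFS domain. To double-check what the running VM was actually given, the active definition can be read back from libvirt on the host (read-only connection, safe to run alongside VDSM; the VM name is a placeholder):

virsh -r list
virsh -r dumpxml <vm-name> | grep -A 8 'device="cdrom"'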
5. When I open a console to the VM, the installation of Fedora 38 appears
6. Checking the volume again shows the link is no longer broken:
# file /rhev/data-center/mnt/blockSD/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/images/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a
/rhev/data-center/mnt/blockSD/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/images/ef8aa3a4-43aa-4c1a-b188-5b60a85cbbc4/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a: symbolic link to /dev/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a
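The same check can be run over every volume link of the domain at once; a small sketch (domain UUID as above) that prints only dangling links:

# On hosts that never prepared the image, hits are expected until the
# corresponding LV is activated; on the host running the VM, this should
# print nothing.
find /rhev/data-center/mnt/blockSD/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/images \
     -type l ! -exec test -e {} \; -print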
I can't tell what led to the "Boot failed: Could not read from CDROM (code 0005)" error mentioned in the description. Unfortunately, I didn't find a VM in the log that started with the volume 3f4db0a4-6ceb-4735-8a21-b27892a3dce2, so I couldn't confirm it was activated the same way, but I suppose the ISO was activated just fine (and in any case, the description says it also happens on NFS, where there are no LVs to activate). So I'd check the ISO itself and the uploaded volume (with qemu-img) to rule out an issue like the one discussed in https://lists.ovirt.org/archives/list/users@ovirt.org/thread/GBCBZ2EP7HSNWGR5OJXBSGWGYANO3X2D/
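For that verification, something along these lines should do (a sketch: the device path is the activated LV from this reproduction, and the ISO path is a hypothetical local copy of the file that was uploaded):

# The volume should be raw; its virtual size is the LV size, typically
# rounded up from the ISO size to the domain's extent boundary
qemu-img info /dev/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a

# Byte-compare the uploaded volume with the source ISO. In non-strict mode
# qemu-img tolerates the size difference as long as the area past the end
# of the ISO reads as zeroes, so "Images are identical." rules out a
# corrupted upload.
qemu-img compare -f raw -F raw \
    /dev/b3e5bc8b-e23c-47f6-b95c-1211716e4a48/d5f54f7c-2c1f-481f-a04e-ca4f5626e76a \
    /var/tmp/Fedora-Server-38.iso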