Bug 1570368

| Field | Value |
|---|---|
| Summary | GET diskattachments for a vm is missing a logical_name of snapshot disks (that are attached to another vm) |
| Product | [oVirt] ovirt-engine |
| Component | RestAPI |
| Status | CLOSED CURRENTRELEASE |
| Severity | medium |
| Priority | unspecified |
| Version | 4.2.2.6 |
| Target Milestone | ovirt-4.4.8 |
| Target Release | --- |
| Hardware | Unspecified |
| OS | Unspecified |
| Whiteboard | |
| Fixed In Version | |
| Doc Type | If docs needed, set a value |
| Doc Text | |
| Story Points | --- |
| Clone Of | |
| Environment | |
| Last Closed | 2021-09-12 15:43:36 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| CRM | |
| Verified Versions | |
| Category | --- |
| oVirt Team | Virt |
| RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- |
| Target Upstream Version | |
| Embargoed | |
| Reporter | Natalie Gavrielov <ngavrilo> |
| Assignee | Liran Rotenberg <lrotenbe> |
| QA Contact | sshmulev |
| Docs Contact | |
| CC | aefrat, ahadas, bugs, frolland, michal.skrivanek, ngavrilo, sshmulev, tnisan |
| Keywords | Automation |
| Flags | pm-rhel: ovirt-4.5? |
| Attachments | ovirt-guest-agent.log (attachment 1426707) |
Description (Natalie Gavrielov, 2018-04-22 11:51:36 UTC)
Michal, why are logical name issues a storage bug? Natalie, do you have guest tools installed on the guest? Did you let the guest run for a couple of minutes before checking?

(In reply to Tal Nisan from comment #1)
> Michal, why are logical name issues a storage bug?

Well, https://ovirt.org/develop/release-management/features/storage/reportguestdiskslogicaldevicename/ is primarily a storage feature. If you see anything to fix in the guest agent, virt can help there, but it doesn't look like that.

(In reply to Tal Nisan from comment #1)
> Natalie, do you have guest tools installed on the guest? Did you let the guest run for a couple of minutes before checking?

Yes, it's installed (and running). We wait until the guest reports an IP address and only then continue with the test. I think it was enough time, because it did report the other two disk attachments' logical names, just not this one.

Natalie, if the guest agent reports the logical name it should appear in the database; can you see it in the vm_device table?

(In reply to Tal Nisan from comment #4)
> Natalie, if the guest agent reports the logical name it should appear in the database; can you see it in the vm_device table?

From the db:

engine=# select * from vm_device where vm_id='84128620-a285-4ee0-8cac-dc757ef7eda1' and type='disk' and device='disk';
-[ RECORD 1 ]-----+-------------------------------------------------------------
device_id         | c91c8391-ca4d-4fc9-8790-cd2918326d06
vm_id             | 84128620-a285-4ee0-8cac-dc757ef7eda1
type              | disk
device            | disk
address           | {type=pci, slot=0x09, bus=0x00, domain=0x0000, function=0x0}
spec_params       | { }
is_managed        | t
is_plugged        | t
is_readonly       | f
_create_date      | 2018-04-22 10:50:32.074316+03
_update_date      | 2018-04-22 10:53:04.423711+03
alias             | ua-c91c8391-ca4d-4fc9-8790-cd2918326d06
custom_properties | { }
snapshot_id       | 964fd208-c591-457b-b5a4-720e09915f8b
logical_name      |
host_device       |
-[ RECORD 2 ]-----+-------------------------------------------------------------
device_id         | 11c7e386-2e41-40f2-9239-c59f8f03ebf4
vm_id             | 84128620-a285-4ee0-8cac-dc757ef7eda1
type              | disk
device            | disk
address           | {type=pci, slot=0x08, bus=0x00, domain=0x0000, function=0x0}
spec_params       | { }
is_managed        | t
is_plugged        | t
is_readonly       | f
_create_date      | 2018-04-22 10:50:33.650143+03
_update_date      | 2018-04-22 10:53:04.423711+03
alias             | ua-11c7e386-2e41-40f2-9239-c59f8f03ebf4
custom_properties | { }
snapshot_id       |
logical_name      | /dev/vdb
host_device       |
-[ RECORD 3 ]-----+-------------------------------------------------------------
device_id         | 2c94094c-a650-416c-86c8-e4f079ad0946
vm_id             | 84128620-a285-4ee0-8cac-dc757ef7eda1
type              | disk
device            | disk
address           | {type=pci, slot=0x07, bus=0x00, domain=0x0000, function=0x0}
spec_params       | { }
is_managed        | t
is_plugged        | t
is_readonly       | f
_create_date      | 2018-04-22 10:48:38.577049+03
_update_date      | 2018-04-22 10:53:04.423711+03
alias             | ua-2c94094c-a650-416c-86c8-e4f079ad0946
custom_properties | { }
snapshot_id       |
logical_name      | /dev/vda
host_device       |

From rest:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<disk_attachments>
  <disk_attachment href="/ovirt-engine/api/vms/84128620-a285-4ee0-8cac-dc757ef7eda1/diskattachments/2c94094c-a650-416c-86c8-e4f079ad0946" id="2c94094c-a650-416c-86c8-e4f079ad0946">
    <active>true</active>
    <bootable>true</bootable>
    <interface>virtio</interface>
    <logical_name>/dev/vda</logical_name>
    <pass_discard>false</pass_discard>
    <read_only>false</read_only>
    <uses_scsi_reservation>false</uses_scsi_reservation>
    <disk href="/ovirt-engine/api/disks/2c94094c-a650-416c-86c8-e4f079ad0946" id="2c94094c-a650-416c-86c8-e4f079ad0946" />
    <vm href="/ovirt-engine/api/vms/84128620-a285-4ee0-8cac-dc757ef7eda1" id="84128620-a285-4ee0-8cac-dc757ef7eda1" />
  </disk_attachment>
  <disk_attachment href="/ovirt-engine/api/vms/84128620-a285-4ee0-8cac-dc757ef7eda1/diskattachments/c91c8391-ca4d-4fc9-8790-cd2918326d06" id="c91c8391-ca4d-4fc9-8790-cd2918326d06">
    <active>true</active>
    <bootable>false</bootable>
    <interface>virtio</interface>
    <pass_discard>false</pass_discard>
    <read_only>false</read_only>
    <uses_scsi_reservation>false</uses_scsi_reservation>
    <disk href="/ovirt-engine/api/disks/c91c8391-ca4d-4fc9-8790-cd2918326d06" id="c91c8391-ca4d-4fc9-8790-cd2918326d06" />
    <vm href="/ovirt-engine/api/vms/84128620-a285-4ee0-8cac-dc757ef7eda1" id="84128620-a285-4ee0-8cac-dc757ef7eda1" />
  </disk_attachment>
  <disk_attachment href="/ovirt-engine/api/vms/84128620-a285-4ee0-8cac-dc757ef7eda1/diskattachments/11c7e386-2e41-40f2-9239-c59f8f03ebf4" id="11c7e386-2e41-40f2-9239-c59f8f03ebf4">
    <active>true</active>
    <bootable>false</bootable>
    <interface>virtio</interface>
    <logical_name>/dev/vdb</logical_name>
    <pass_discard>false</pass_discard>
    <read_only>false</read_only>
    <uses_scsi_reservation>false</uses_scsi_reservation>
    <disk href="/ovirt-engine/api/disks/11c7e386-2e41-40f2-9239-c59f8f03ebf4" id="11c7e386-2e41-40f2-9239-c59f8f03ebf4" />
    <vm href="/ovirt-engine/api/vms/84128620-a285-4ee0-8cac-dc757ef7eda1" id="84128620-a285-4ee0-8cac-dc757ef7eda1" />
  </disk_attachment>
</disk_attachments>

The logical name for disk attachment c91c8391-ca4d-4fc9-8790-cd2918326d06 (snapshot id 964fd208-c591-457b-b5a4-720e09915f8b) is missing from both rest and the db.

Michal, the logical name is not updated in the database, so it doesn't reach the engine at all; in that case I reckon it is guest tools related?

(In reply to Tal Nisan from comment #6)
> Michal, the logical name is not updated in the database, so it doesn't reach the engine at all; in that case I reckon it is guest tools related?

The field is generic, so indeed you can assume it was not reported in the first place. That's easy to check in the guest agent logs - please attach them. Also run /usr/share/ovirt-guest-agent/diskmapper and paste the output. But I can't tell if it is an issue or not. Is it supposed to be reported? E.g. the mapping doesn't work for LUNs because they do not have a serial number.
Created attachment 1426707 [details]
ovirt-guest-agent.log
[root@vm-83-58 ~]# /usr/share/ovirt-guest-agent/diskmapper
/dev/sr0|QEMU_DVD-ROM_QM00003
/dev/vda|2c94094c-a650-416c-8
/dev/vdb|11c7e386-2e41-40f2-9
/dev/vdc|c91c8391-ca4d-4fc9-8
[root@vm-83-58 ~]#
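The diskmapper output pairs each guest device with a serial that is a 20-character prefix of the oVirt disk ID (virtio-blk serials are limited to 20 bytes, hence the truncation). A rough illustration of how a mapping of this shape can be built inside the guest, assuming the virtio-blk sysfs serial attribute is present; this is a sketch, not the actual ovirt-guest-agent code:

```python
import os

def guest_disk_mapping():
    """Map guest block devices to their reported serials (device -> serial)."""
    mapping = {}
    for dev in sorted(os.listdir('/sys/block')):
        serial_path = os.path.join('/sys/block', dev, 'serial')
        if not os.path.isfile(serial_path):
            continue  # devices without a serial attribute cannot be mapped
        with open(serial_path) as f:
            serial = f.read().strip()
        if serial:
            mapping['/dev/' + dev] = serial
    return mapping

if __name__ == '__main__':
    for device, serial in sorted(guest_disk_mapping().items()):
        print('%s|%s' % (device, serial))
```

As noted above, a direct LUN exposes no such serial, so it cannot appear in a mapping built this way.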
Hm, the log is not much help, but the diskmapper output is! It seems it's reported fine. The actual mapping happens in vdsm, but there doesn't seem to be anything in the original logs. Natalie, can you please give access to the guest and host to confirm it's really mapped correctly in vdsm?
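If guest access is available, one quick check from inside the guest (short of inspecting vdsm itself) is to verify that the device's virtio serial is a prefix of the snapshot disk's ID. A hedged sketch, assuming the same sysfs attribute as above:

```python
def device_matches_disk(device_name, disk_id):
    """Return True if the guest device's virtio serial is a prefix of the given disk ID."""
    with open('/sys/block/%s/serial' % device_name) as f:
        serial = f.read().strip()
    return bool(serial) and disk_id.startswith(serial)

# Per the diskmapper output above, /dev/vdc should correspond to the snapshot disk.
print(device_matches_disk('vdc', 'c91c8391-ca4d-4fc9-8790-cd2918326d06'))
```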
Direct LUN's logical name is also missing from the VM's diskattachments collection.

https://storage-ge-02.scl.lab.tlv.redhat.com/ovirt-engine/api/vms/2302a8a2-5736-4734-b99b-49e1e7c6ac9b/diskattachments

<disk_attachments>
  <disk_attachment href="/ovirt-engine/api/vms/2302a8a2-5736-4734-b99b-49e1e7c6ac9b/diskattachments/3ee682b8-f89d-496c-b46b-03127e067e76" id="3ee682b8-f89d-496c-b46b-03127e067e76">
    <active>true</active>
    <bootable>true</bootable>
    <interface>virtio</interface>
    <logical_name>/dev/vda</logical_name>
    <pass_discard>false</pass_discard>
    <read_only>false</read_only>
    <uses_scsi_reservation>false</uses_scsi_reservation>
    <disk href="/ovirt-engine/api/disks/3ee682b8-f89d-496c-b46b-03127e067e76" id="3ee682b8-f89d-496c-b46b-03127e067e76"/>
    <vm href="/ovirt-engine/api/vms/2302a8a2-5736-4734-b99b-49e1e7c6ac9b" id="2302a8a2-5736-4734-b99b-49e1e7c6ac9b"/>
  </disk_attachment>
  <disk_attachment href="/ovirt-engine/api/vms/2302a8a2-5736-4734-b99b-49e1e7c6ac9b/diskattachments/23a8e6b6-b661-463d-8c20-7fbb9280bd67" id="23a8e6b6-b661-463d-8c20-7fbb9280bd67">
    <active>true</active>
    <bootable>false</bootable>
    <interface>virtio</interface>
    <pass_discard>false</pass_discard>
    <read_only>false</read_only>
    <uses_scsi_reservation>false</uses_scsi_reservation>
    <disk href="/ovirt-engine/api/disks/23a8e6b6-b661-463d-8c20-7fbb9280bd67" id="23a8e6b6-b661-463d-8c20-7fbb9280bd67"/>
    <vm href="/ovirt-engine/api/vms/2302a8a2-5736-4734-b99b-49e1e7c6ac9b" id="2302a8a2-5736-4734-b99b-49e1e7c6ac9b"/>
  </disk_attachment>
</disk_attachments>

/dev/vdb is missing from the vm_device table for that VM:

engine=# select * from vm_device where vm_id='2302a8a2-5736-4734-b99b-49e1e7c6ac9b' and type='disk' and device='disk';
-[ RECORD 1 ]-----+-------------------------------------------------------------
device_id         | 23a8e6b6-b661-463d-8c20-7fbb9280bd67
vm_id             | 2302a8a2-5736-4734-b99b-49e1e7c6ac9b
type              | disk
device            | disk
address           | {type=pci, slot=0x0a, bus=0x00, domain=0x0000, function=0x0}
spec_params       | { }
is_managed        | t
is_plugged        | t
is_readonly       | f
_create_date      | 2018-05-03 11:59:59.653187+03
_update_date      | 2018-05-03 12:07:26.109184+03
alias             | virtio-disk1
custom_properties | { }
snapshot_id       |
logical_name      |
host_device       |
-[ RECORD 2 ]-----+-------------------------------------------------------------
device_id         | 3ee682b8-f89d-496c-b46b-03127e067e76
vm_id             | 2302a8a2-5736-4734-b99b-49e1e7c6ac9b
type              | disk
device            | disk
address           | {type=pci, slot=0x07, bus=0x00, domain=0x0000, function=0x0}
spec_params       | { }
is_managed        | t
is_plugged        | t
is_readonly       | f
_create_date      | 2018-05-03 11:59:35.383759+03
_update_date      | 2018-05-03 12:07:26.109184+03
alias             | ua-3ee682b8-f89d-496c-b46b-03127e067e76
custom_properties | { }
snapshot_id       |
logical_name      | /dev/vda
host_device       |

But it seems that the guest agent is not aware of this disk at all. Notice vdb in the lsblk output:

[root@localhost ~]# lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                  11:0    1 1024M  0 rom
vda                 252:0    0   10G  0 disk
├─vda1              252:1    0  700M  0 part /boot
├─vda2              252:2    0    1G  0 part [SWAP]
└─vda3              252:3    0  8.3G  0 part
  └─VolGroup01-root 253:0    0  8.3G  0 lvm  /
vdb                 252:16   0   50G  0 disk

And it is missing from the diskmapper output:

[root@localhost ~]# /usr/share/ovirt-guest-agent/diskmapper
/dev/sr0|QEMU_DVD-ROM_QM00003
/dev/vda|3ee682b8-f89d-496c-b

Michal, does this mean we have a bug in the guest agent? This prevents us from testing the full backup/restore API flow, therefore I'm raising the severity and marking this as AutomationBlocker.

Direct LUNs do not have a serial number and we do not map them in the guest agent. It's a limitation of that feature (though not described on the upstream feature page, it seems). We would need to come up with a different solution for disk mapping of direct LUN disks, if it is important. Tal, moving back to you for your consideration.

This requires design and all; we'll need some way to correlate (LUN ID parsed somehow?). Yaniv, how important do you think it is, as it is an RFE?

(In reply to Tal Nisan from comment #14)
> This requires design and all; we'll need some way to correlate (LUN ID parsed somehow?).
> Yaniv, how important do you think it is, as it is an RFE?

The conversation shifted from one bug to another. Was the original regression on the snapshot disk resolved?

No. We've worked around this to fetch the disk logical name from the guest instead of from the API.

Is mapping direct LUNs similar to how we would need it with Cinder volumes? If so, this is an RFE that should be scoped as part of that effort.

(In reply to Yaniv Lavi from comment #17)
> Is mapping direct LUNs similar to how we would need it with Cinder volumes?
> If so, this is an RFE that should be scoped as part of that effort.

Freddy, that's a part of your POC, can you answer please?

(In reply to Yaniv Lavi from comment #17)
> Is mapping direct LUNs similar to how we would need it with Cinder volumes?
> If so, this is an RFE that should be scoped as part of that effort.

I still don't know how the mapping will work with the Cinder volumes, so I cannot be sure about the answer to that.

This bug report has Keywords: Regression or TestBlocker. Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Removing AutomationBlocker as we don't rely on logical_name from the API anymore.

Since there's a workaround for this, and it's no longer blocking backup and restore testing, I'm removing blocker+ and deferring.

This bug report has Keywords: Regression or TestBlocker. Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Re-targeting, because these bugs either do not have blocker+, or do not have a patch posted.

Dropping the regression and blocker flags; LUNs never had a mapping since they lack a serial number. It is fixed for LUNs (bz 1859092). Need to check if it still happens for snapshot disks.

Avihai, can you please check if it still happens with snapshot disks (we suspect that the changes we've made for LUNs may have fixed it already)?
(In reply to Arik from comment #28)
> Avihai, can you please check if it still happens with snapshot disks (we suspect that the changes we've made for LUNs may have fixed it already)?

Sophie, please check the TestCase6169 results in the latest rhv-4.4.8 (and a few runs back as well) to see if this issue still reproduces, and reply here. Thank you!

(In reply to Avihai from comment #29)
> Sophie, please check the TestCase6169 results in the latest rhv-4.4.8 (and a few runs back as well) to see if this issue still reproduces, and reply here.
> Thank you!

Tested on versions:
engine-4.4.8.5-0.4.el8ev
vdsm-4.40.80.6-1.el8ev

This issue doesn't reproduce here; it was tested 5 times and passed successfully in all of them.

Thanks Sophie
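Regarding the workaround mentioned earlier in the thread (fetching the disk logical name from the guest instead of from the API), the inverse lookup can be sketched roughly as follows; this is an illustration of the approach under the same sysfs assumption as above, not the team's actual automation code:

```python
import os

def logical_name_from_guest(disk_id):
    """Find the guest device whose virtio serial is a prefix of the given oVirt disk ID."""
    for dev in os.listdir('/sys/block'):
        serial_path = os.path.join('/sys/block', dev, 'serial')
        if not os.path.isfile(serial_path):
            continue
        with open(serial_path) as f:
            serial = f.read().strip()
        if serial and disk_id.startswith(serial):
            return '/dev/' + dev
    return None  # e.g. a direct LUN, which exposes no serial to match against

# For the snapshot disk from the original report, this is expected to return /dev/vdc.
print(logical_name_from_guest('c91c8391-ca4d-4fc9-8790-cd2918326d06'))
```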