A means to easily identify which virtual disk of a VM within RHEV-M corresponds to which device on the VM itself. For example, when you look at a virtual machine and click on the Virtual Disks tab, it lists Disk1, Disk2, etc. We would like something that ties this to what the VM itself sees, i.e. Disk1 = vda. That way, if we have to remove disks from a VM and then drop them from RHEV, we can do so with more confidence that the correct disk is being removed.
I'm not sure why we are not setting a serial number on the disks by default. That would make the identification easy.
It can be an option indeed. I saw that libvirt supports it by using the <serial> tag within the <disk> tag, so all we probably have to do is generate a serial for each disk or disk-VM relation and send it when running the VM.
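For reference, a per-disk serial in libvirt domain XML looks like this. This is an illustrative fragment, not what the engine actually generates; the serial value here simply reuses the imageID quoted later in this thread as an example:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <!-- path and serial are example values, not engine output -->
  <source file='/var/lib/libvirt/images/vm-disk1.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <serial>1e07b659-2a5f-4a46-a6d6-09374192f076</serial>
</disk>
```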
*** Bug 1276189 has been marked as a duplicate of this bug. ***
*** Bug 1360977 has been marked as a duplicate of this bug. ***
Currently we send the serial number in the libvirt XML, which allows correlating the disk to the device by using /dev/disk/by-id. For showing that info in the UI we can use the logical name collected in the disk stats, but that method is not 100% accurate: the info is reported only once every few minutes, it might change on the next boot of the guest, and it of course requires the guest tools to be installed on the guest. Does this option make sense to you?
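To illustrate the /dev/disk/by-id correlation from inside the guest: for a virtio disk with a serial set, udev creates a `virtio-<serial>` symlink pointing at the device node, so the serial can be resolved to the kernel device name. A minimal sketch (the helper name and the `by_id_dir` parameter are mine, for illustration; note also that virtio-blk exposes at most 20 bytes of the serial to the guest, so long serials may appear truncated):

```python
import os

def device_for_serial(serial, by_id_dir="/dev/disk/by-id"):
    """Resolve a disk serial (as sent in the libvirt <serial> tag) to the
    guest device name (e.g. 'vda') via the udev-created virtio-<serial>
    symlink. Returns None if no such symlink exists."""
    link = os.path.join(by_id_dir, "virtio-" + serial)
    if not os.path.islink(link):
        return None
    # The symlink points at the real device node; its basename is the
    # kernel name the guest sees.
    return os.path.basename(os.path.realpath(link))
```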
As per comment 1, the expectation is that the name the disk has in the guest OS is exposed in the webadmin portal, for example so the end user knows with certainty which disk is being removed.

(In reply to Josep 'Pep' Turro Mauri from comment #1)
> > 5. How would the customer like to achieve this? (List the functional
> > requirements here)
>
> It could appear in an additional column in the VM's "disks" sub-tab, in the
> API's vm/ID/disks section, or ideally in both.
>
> Right now the information seems to be accessible to vdsm, at least for
> VirtIO devices. E.g. a "vdsClient list" for a RHEL6 VM includes this:
>
> {
>   "address": {
>     "slot": "0x05",
>     ...
>   },
>   ...
>   "alias": "virtio-disk0",
>   "imageID": "1e07b659-2a5f-4a46-a6d6-09374192f076",
>   ...
>   "name": "vda",
>   ...
>
> so the request would be to visualize this in the webadmin portal / API.
This info is only accessible from within the guest, and can be reported to the engine only with the guest tools installed, as I stated in comment 14, with the limitations I mentioned there.
As per the above comment, this info is available at least from the host:

> Right now the information seems to be accessible to vdsm, at least for
> VirtIO devices. E.g. a "vdsClient list" for a RHEL6 VM includes this:
>
> {
>   "address": {
>     "slot": "0x05",
>     ...
>   },
>   ...
>   "alias": "virtio-disk0",
>   "imageID": "1e07b659-2a5f-4a46-a6d6-09374192f076",
>   ...
>   "name": "vda",
    ^^^^^

It may indeed only be available when the guest tools are installed. The customer is requesting this to be displayed by the engine.
This information is already collected by oVirt (via guest tools); it's just not shown in the GUI. For example, via the API:

/ovirt-engine/api/vms/[id]/diskattachments
[...]
<logical_name>/dev/vda</logical_name>
[...]
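On the client side, pulling the logical names out of that diskattachments response is straightforward. A hedged sketch (the function name is mine; the XML shape follows the API output quoted in this thread):

```python
import xml.etree.ElementTree as ET

def logical_names(diskattachments_xml):
    """Map each disk_attachment id to its <logical_name> (None when the
    guest agent has not reported one), given the body of a
    /ovirt-engine/api/vms/{id}/diskattachments response."""
    root = ET.fromstring(diskattachments_xml)
    return {att.get("id"): att.findtext("logical_name")
            for att in root.findall("disk_attachment")}
```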
(In reply to Klaas Demter from comment #21)
> This information is already collected by ovirt (via guest tools), its just
> not shown in gui.
> for example here via api:
> /ovirt-engine/api/vms/[id]/diskattachments
> [...]
> <logical_name>/dev/vda</logical_name>
> [...]

- Need to check it's not the libvirt default naming scheme.
- Need to check the behavior without an agent.
- Need to check the behavior with direct LUN and virtio-SCSI.
- Need to check the behavior with hotplugged/unplugged disks.
- Need to check the behavior with non-regular disk names (LVs, etc.), Windows, etc.
We agreed to remove the RFEs component from Bugzilla; if you feel the component has been renamed incorrectly, please reach out.
@Shani Do you need any UX help with this RFE?
Verified with:
- ovirt-engine-4.3.0-0.8.rc2.el7.noarch
- vdsm-4.30.6-1.el7ev.x86_64

The UI now presents the correlation between the virtual disks of a VM and what the VM itself sees (Logical name: /dev/vda).

API:

<disk_attachment href="/ovirt-engine/api/vms/cfa6a716-4dfc-49f6-bced-5de082919d8f/diskattachments/cc9d4cbb-b67a-4c8d-84b2-5ddf91fdb943" id="cc9d4cbb-b67a-4c8d-84b2-5ddf91fdb943">
  <active>true</active>
  <bootable>true</bootable>
  <interface>virtio</interface>
  <logical_name>/dev/vda</logical_name>
  <pass_discard>false</pass_discard>
  <read_only>false</read_only>
  <uses_scsi_reservation>false</uses_scsi_reservation>
  <disk href="/ovirt-engine/api/disks/cc9d4cbb-b67a-4c8d-84b2-5ddf91fdb943" id="cc9d4cbb-b67a-4c8d-84b2-5ddf91fdb943"/>
  <vm href="/ovirt-engine/api/vms/cfa6a716-4dfc-49f6-bced-5de082919d8f" id="cfa6a716-4dfc-49f6-bced-5de082919d8f"/>
</disk_attachment>
</disk_attachments>
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2019:1085