Description of problem:

The logical name of a disk is empty after detaching and re-attaching it to a VM. In my opinion this happens when the disk operations (detach and attach) both fall within a single DiskMapping notification cycle of the guest agent. As a result the DiskMapping hash stays the same and the logical name is not updated (a simplified sketch of the suspected logic is included under Additional info below).

- ref link: https://github.com/oVirt/vdsm/blob/ovirt-4.3/lib/vdsm/virt/vm.py#L2020

Guest information is as follows:
- OS: RHEL 7.6
- guest agent:
  - ovirt-guest-agent-common-1.0.16-1.el7ev
  - qemu-guest-agent-2.12.0

However, the issue appears to be reproducible regardless of the guest OS and agent versions.

Version-Release number of selected component (if applicable):
RHV 4.3.11.4-0.1.el7
vdsm-4.30.51-1.el7ev

How reproducible:
80%

Steps to Reproduce:
1. Hot-unplug a disk from a VM.
2. Hot-plug the same disk to the VM again "quickly".

Actual results:
In the Administration Portal, the logical name of the disk is empty. The related DB value is also NULL (logical_name column in the vm_device table).

Expected results:
The logical name of the disk is displayed.

Additional info:
Alternatively, is there any workaround, such as restarting a daemon, that would make the logical name show up again without affecting the VM?
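To illustrate what I mean, here is a minimal Python sketch of the suspected behaviour. All names (mapping_hash, VmStub, on_guest_disk_mapping) are made up for the example and are not the actual vdsm identifiers; the real code is the hash-based check around the vm.py link above:

import hashlib
import json


def mapping_hash(disk_mapping):
    # Hash the guest-reported disk mapping so changes can be detected cheaply.
    return hashlib.md5(
        json.dumps(disk_mapping, sort_keys=True).encode()
    ).hexdigest()


class VmStub:
    """Hypothetical stand-in for the vdsm Vm object, for illustration only."""

    def __init__(self):
        self._last_mapping_hash = None
        self.logical_names = {}

    def on_guest_disk_mapping(self, disk_mapping):
        # Refresh the logical names only when the reported mapping changed.
        new_hash = mapping_hash(disk_mapping)
        if new_hash == self._last_mapping_hash:
            return  # detach + quick re-attach ends up here: no update
        self._last_mapping_hash = new_hash
        self.logical_names = {
            serial: info["name"] for serial, info in disk_mapping.items()
        }


vm = VmStub()
report = {"disk-serial-1": {"name": "/dev/vda"}}
vm.on_guest_disk_mapping(report)   # initial report: logical name populated
vm.logical_names.clear()           # detach clears the stored logical name
vm.on_guest_disk_mapping(report)   # fast re-attach: same hash, update skipped
print(vm.logical_names)            # {} -> the logical name stays empty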
Please upgrade to the latest RHV 4.4 SP1 and check again; if it is a timing issue, you may just need to wait a bit longer before re-plugging.
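For reference, if the re-plug is scripted, inserting a delay between the detach and the attach (long enough for at least one guest agent report cycle) should avoid the race. A rough sketch with the oVirt Python SDK (ovirt-engine-sdk4) follows; the engine URL, credentials, VM name, disk interface and the 60-second delay are placeholders to adapt, not tested values:

import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details -- adjust for the actual environment.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
attachments_service = vms_service.vm_service(vm.id).disk_attachments_service()

# Hot-unplug: deactivate the disk, then remove the attachment.
# Removing the attachment detaches the disk but keeps it on storage.
attachment = attachments_service.list()[0]
disk_id = attachment.disk.id
attachment_service = attachments_service.attachment_service(attachment.id)
attachment_service.update(types.DiskAttachment(active=False))
attachment_service.remove()

# Wait long enough for the guest agent to report the changed disk mapping
# before plugging the disk back in (60 seconds is an arbitrary guess).
time.sleep(60)

# Hot-plug the same disk again.
attachments_service.add(
    types.DiskAttachment(
        disk=types.Disk(id=disk_id),
        interface=types.DiskInterface.VIRTIO,
        active=True,
    ),
)

connection.close()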
(In reply to Michal Skrivanek from comment #2)
> Please upgrade to latest RHV 4.4SP1 and check again, if it is a timing issue
> you may just need to wait a bit longer to re-plug.

Is the latest RHV version similar to the latest oVirt? The latest vdsm from oVirt seems to show the same results.
OK, it's likely the same behavior. It is indeed probably a timing issue between guest agent updates.
Note that it's unlikely anyone will look into this further for RHV, as the product has entered its maintenance phase. You may want to open an issue upstream and contribute a patch there.
(In reply to Michal Skrivanek from comment #5) > ok, it's likely the same behavior. It's indeed probably timing between guest > agent updates. > Note it's unlikely anyone would be looking further into it for RHV as the > product entered maintenance phase. You may want to open an issue upstream > and contribute a patch there okay, I will try. Thanks for your help~!
It doesn't sound serious, it's not a frequent use case, and a simple workaround exists -> not worth tracking for RHV in its maintenance phase.