Bug 1919857 - Consume disk logical names from Libvirt (RHEL 8.5)
Summary: Consume disk logical names from Libvirt (RHEL 8.5)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: vdsm
Classification: oVirt
Component: Core
Version: 4.40.50.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.5.0
Target Release: 4.50.0.10
Assignee: Tomáš Golembiovský
QA Contact: Qin Yuan
URL:
Whiteboard:
Depends On: 1899527 1949486
Blocks:
 
Reported: 2021-01-25 10:02 UTC by Arik
Modified: 2022-04-20 06:33 UTC (History)
3 users

Fixed In Version: vdsm-4.50.0.10
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-20 06:33:59 UTC
oVirt Team: Virt
Embargoed:
pm-rhel: ovirt-4.5?


Attachments


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 114086 0 master MERGED qga: use libvirt API to gather disk information 2021-12-01 09:33:57 UTC
oVirt gerrit 114087 0 master ABANDONED spec: bump libvirt version to 7.0.0 2021-09-26 01:58:40 UTC

Description Arik 2021-01-25 10:02:18 UTC
We currently consume disk logical names directly from qemu-guest-agent; with bz 1899527 resolved, we can obtain them through libvirt instead.
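The merged gerrit patch switches VDSM to libvirt's guest-info API (virDomainGetGuestInfo with VIR_DOMAIN_GUEST_INFO_DISKS), which returns a flat key/value dict such as `disk.count`, `disk.0.name`, `disk.0.alias`, and so on. A minimal sketch of regrouping that flat dict into one mapping per disk; the helper name and the sample dict below are illustrative, not taken from VDSM or from this bug:

```python
# Sketch: regroup the flat dict shape returned by libvirt's
# virDomainGetGuestInfo(VIR_DOMAIN_GUEST_INFO_DISKS) into a list of
# per-disk dicts. On a live host this dict would come from
# dom.guestInfo(libvirt.VIR_DOMAIN_GUEST_INFO_DISKS, 0); the sample
# data here is made up for illustration.

def group_disk_info(info):
    """Turn {'disk.count': 2, 'disk.0.name': ...} into per-disk dicts."""
    disks = []
    for i in range(int(info.get("disk.count", 0))):
        prefix = "disk.%d." % i
        # Keep every key for disk i, stripping the "disk.N." prefix.
        disks.append({k[len(prefix):]: v for k, v in info.items()
                      if k.startswith(prefix)})
    return disks

sample = {
    "disk.count": 2,
    "disk.0.name": "/dev/vda",
    "disk.0.alias": "virtio-disk0",
    "disk.1.name": "/dev/sda",
    "disk.1.alias": "scsi0-0-0-0",
}
print([d["name"] for d in group_disk_info(sample)])  # ['/dev/vda', '/dev/sda']
```

The guest-info call needs libvirt >= 7.0 on the host, which is why the companion spec patch (gerrit 114087) proposed bumping the libvirt requirement.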

Comment 1 Qin Yuan 2022-04-18 05:11:11 UTC
Verified with:
ovirt-engine-4.5.0.2-0.7.el8ev.noarch
vdsm-4.50.0.12-1.el8ev.x86_64
libvirt-8.0.0-5.module+el8.6.0+14480+c0a3aa0f.x86_64
qemu-guest-agent-6.2.0-10.module+el8.6.0+14540+5dcf03db.x86_64

Steps:
1. Create a VM from a template (latest-rhel-guest-image-8.6-infra was used; it has RHEL 8.6 installed and one disk)
2. Create a new disk without file system
3. Attach the new disk to the VM created in step1
4. Start the VM
5. Check the VM disks' logical names in the engine UI and API

Results:
Both the OS disk and the newly attached disk without a mounted file system get logical names in the engine UI and API.

# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0    5G  0 disk 
sr0     11:0    1 1024M  0 rom  
vda    252:0    0   10G  0 disk 
├─vda1 252:1    0    1M  0 part 
├─vda2 252:2    0  100M  0 part /boot/efi
└─vda3 252:3    0  9.9G  0 part /


<disk_attachments>
    <disk_attachment href="/ovirt-engine/api/vms/f49c2fd0-f992-48e3-ae05-7a96316caf77/diskattachments/ca566ff4-faad-44cc-8338-aba9989b3a35" id="ca566ff4-faad-44cc-8338-aba9989b3a35">
        <active>true</active>
        <bootable>true</bootable>
        <interface>virtio</interface>
        <logical_name>/dev/vda</logical_name>
        <pass_discard>false</pass_discard>
        <read_only>false</read_only>
        <uses_scsi_reservation>false</uses_scsi_reservation>
        <disk href="/ovirt-engine/api/disks/ca566ff4-faad-44cc-8338-aba9989b3a35" id="ca566ff4-faad-44cc-8338-aba9989b3a35"/>
        <vm href="/ovirt-engine/api/vms/f49c2fd0-f992-48e3-ae05-7a96316caf77" id="f49c2fd0-f992-48e3-ae05-7a96316caf77"/>
    </disk_attachment>
    <disk_attachment href="/ovirt-engine/api/vms/f49c2fd0-f992-48e3-ae05-7a96316caf77/diskattachments/36580235-db02-4802-b46a-77759ebdc98b" id="36580235-db02-4802-b46a-77759ebdc98b">
        <active>true</active>
        <bootable>false</bootable>
        <interface>virtio_scsi</interface>
        <logical_name>/dev/sda</logical_name>
        <pass_discard>false</pass_discard>
        <read_only>false</read_only>
        <uses_scsi_reservation>false</uses_scsi_reservation>
        <disk href="/ovirt-engine/api/disks/36580235-db02-4802-b46a-77759ebdc98b" id="36580235-db02-4802-b46a-77759ebdc98b"/>
        <vm href="/ovirt-engine/api/vms/f49c2fd0-f992-48e3-ae05-7a96316caf77" id="f49c2fd0-f992-48e3-ae05-7a96316caf77"/>
    </disk_attachment>
</disk_attachments>
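The logical names in the API response above can be pulled out with a few lines of standard-library Python. A sketch, using a trimmed copy of the response (only the `id` attribute and the elements needed here are kept):

```python
# Sketch: extract disk-attachment logical names from the oVirt
# REST API XML shown above, using only the standard library.
import xml.etree.ElementTree as ET

# Trimmed version of the response from the verification steps.
XML = """<disk_attachments>
    <disk_attachment id="ca566ff4-faad-44cc-8338-aba9989b3a35">
        <interface>virtio</interface>
        <logical_name>/dev/vda</logical_name>
    </disk_attachment>
    <disk_attachment id="36580235-db02-4802-b46a-77759ebdc98b">
        <interface>virtio_scsi</interface>
        <logical_name>/dev/sda</logical_name>
    </disk_attachment>
</disk_attachments>"""

def logical_names(xml_text):
    """Map attachment id -> logical_name (None if not reported)."""
    root = ET.fromstring(xml_text)
    return {att.get("id"): att.findtext("logical_name")
            for att in root.findall("disk_attachment")}

print(logical_names(XML))
```

A missing `<logical_name>` element (e.g. before the guest agent reports it) simply yields `None` for that attachment, which makes the helper safe to run while the VM is still booting.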

Comment 2 Sandro Bonazzola 2022-04-20 06:33:59 UTC
This bugzilla is included in the oVirt 4.5.0 release, published on April 20th 2022.

Since the problem described in this bug report should be resolved in the oVirt 4.5.0 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

