Bug 1836661

Summary: [RFE] GET diskattachments for a VM using qemu-guest-agent is missing a logical_name for disks without a mounted file-system
Product: Red Hat Enterprise Virtualization Manager Reporter: Jeongtae Kim <jeokim>
Component: vdsm Assignee: Tomáš Golembiovský <tgolembi>
Status: CLOSED ERRATA QA Contact: Qin Yuan <qiyuan>
Severity: high Docs Contact:
Priority: high    
Version: 4.3.9 CC: aefrat, ahadas, bugs, dfodor, emarcus, gveitmic, jko, lsurette, marcandre.lureau, mkalinin, mtessun, pelauter, srevivo, tamir, ycui
Target Milestone: ovirt-4.4.5 Keywords: FutureFeature, ZStream
Target Release: --- Flags: jeokim: needinfo+
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: vdsm-4.40.50.4 Doc Type: Enhancement
Doc Text:
Previously, the logical names for disks without a mounted filesystem were not displayed in the Red Hat Virtualization Manager. In this release, logical names for such disks are properly reported, provided the version of QEMU Guest Agent in the virtual machine is 5.2 or higher.
Story Points: ---
Clone Of:
: 1916159 (view as bug list) Environment:
Last Closed: 2021-04-14 11:38:43 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: Virt RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1877675, 1913818    
Bug Blocks: 1916159    

Description Jeongtae Kim 2020-05-17 17:02:11 UTC
Description of problem:
The REST API for "diskattachments" should include, for each disk, its logical name as it appears in the VM, but this value is missing from the diskattachments response.
This issue appears on a VM using "qemu-guest-agent"; when using "ovirt-guest-agent" the logical name is displayed.
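
A minimal way to inspect this, reusing the same REST call shown in the Additional info below and filtering with jq (a sketch: jq is assumed to be installed, and the host name, password, and VM_ID variable are placeholders):

[root@rhvm ~]# curl -s --request GET --header 'Version: 4' --header "Accept: application/json" \
      --cacert rhvm_ca.pem --user "admin@internal:PASSWORD" \
      "https://rhvm.example.com/ovirt-engine/api/vms/${VM_ID}/diskattachments" \
      | jq '.disk_attachment[] | {id, logical_name}'
# attachments affected by this issue are printed with "logical_name": null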


Version-Release number of selected component (if applicable):
* RHVM:
 - rhvm-4.3.9.4-11.el7.noarch
 - ovirt-engine-4.3.9.4-11.el7.noarch
* RHVHs:
 - vdsm-4.30.44-1.el7ev.x86_64
 - qemu-kvm-rhev-2.12.0-44.el7_8.2.x86_64

* Guest VMs:
 - Test VM1 (e.g. 91860544-b90e-4b0d-a4e3-20f0d7c72acf)
  * RHEL 7.8 with ovirt-guest-agent
  * with two additional unmounted disks attached
 - Test VM2 (e.g. 50016c46-c626-4dd0-aba4-d80f27005357)
  * RHEL 8.2 (or RHEL 7.8) with qemu-guest-agent-2.12.0-99.module+el8.2.0+5827+8c39933c.x86_64
  * with two additional unmounted disks attached


How reproducible:
100%


Steps to Reproduce:
1. Create a RHEL 8.x VM (with qemu-guest-agent installed)
2. Create and attach a new disk, and do not mount it in the guest (see the lsblk check below)
3. Check the logical name of the disks via the UI and the REST API
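
For step 2, a quick in-guest sanity check (a sketch; sdb is only an example device name) confirms the new disk is visible but has no mounted filesystem:

lsblk -o NAME,SIZE,MOUNTPOINT   # run inside the guest; the new disk (e.g. sdb) should show an empty MOUNTPOINT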


Actual results:
The newly attached, unmounted disk of the VM using "qemu-guest-agent" is missing its logical name.


Expected results:
A VM using ovirt-guest-agent reports the logical name of a new unmounted disk.
The disk's logical name should be present for each disk attachment in the REST API and UI even when using qemu-guest-agent.
This should be included in RHV 4.3.z as well as the master version.


Additional info:
VM1 (using ovirt-guest-agent) and VM2 (using qemu-guest-agent) are compared below.

* VM1
===============================================================
- Disk information on DB ————————————
[root@rhvm ~]# /usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "select device_id,logical_name from vm_device_view where device='disk' and vm_id = '91860544-b90e-4b0d-a4e3-20f0d7c72acf'"
              device_id               | logical_name
--------------------------------------+--------------
 d3cf2b14-0bcf-44b5-86fd-30d3d2373d27 | /dev/sda
 5eee50c0-23e9-4053-a44b-101c924bf149 | /dev/sdb
 8290b2a2-a7d9-4f2f-a23e-1110c66dd3da | /dev/sdc
 
- Diskattachments REST api ———————————
[root@rhvm ~]# curl --request GET --header 'Version: 4' --header "Accept: application/json" --cacert rhvm_ca.pem --user "admin@internal:redhat" https://rhvm.jtredhat.com/ovirt-engine/api/vms/91860544-b90e-4b0d-a4e3-20f0d7c72acf/diskattachments
{
  "disk_attachment" : [ {
    "active" : "true",
    "bootable" : "true",
    "interface" : "virtio_scsi",
    "logical_name" : "/dev/sda",
    "pass_discard" : "false",
    "read_only" : "false",
    "uses_scsi_reservation" : "false",
    "disk" : {
      "href" : "/ovirt-engine/api/disks/d3cf2b14-0bcf-44b5-86fd-30d3d2373d27",
      "id" : "d3cf2b14-0bcf-44b5-86fd-30d3d2373d27"
    },
    "vm" : {
      "href" : "/ovirt-engine/api/vms/91860544-b90e-4b0d-a4e3-20f0d7c72acf",
      "id" : "91860544-b90e-4b0d-a4e3-20f0d7c72acf"
    },
    "href" : "/ovirt-engine/api/vms/91860544-b90e-4b0d-a4e3-20f0d7c72acf/diskattachments/d3cf2b14-0bcf-44b5-86fd-30d3d2373d27",
    "id" : "d3cf2b14-0bcf-44b5-86fd-30d3d2373d27"
  }, {
    "active" : "true",
    "bootable" : "false",
    "interface" : "virtio_scsi",
    "logical_name" : "/dev/sdb",
    "pass_discard" : "false",
    "read_only" : "false",
    "uses_scsi_reservation" : "false",
    "disk" : {
      "href" : "/ovirt-engine/api/disks/5eee50c0-23e9-4053-a44b-101c924bf149",
      "id" : "5eee50c0-23e9-4053-a44b-101c924bf149"
    },
    "vm" : {
      "href" : "/ovirt-engine/api/vms/91860544-b90e-4b0d-a4e3-20f0d7c72acf",
      "id" : "91860544-b90e-4b0d-a4e3-20f0d7c72acf"
    },
    "href" : "/ovirt-engine/api/vms/91860544-b90e-4b0d-a4e3-20f0d7c72acf/diskattachments/5eee50c0-23e9-4053-a44b-101c924bf149",
    "id" : "5eee50c0-23e9-4053-a44b-101c924bf149"
  }, {
    "active" : "true",
    "bootable" : "false",
    "interface" : "virtio_scsi",
    "logical_name" : "/dev/sdc",
    "pass_discard" : "false",
    "read_only" : "false",
    "uses_scsi_reservation" : "false",
    "disk" : {
      "href" : "/ovirt-engine/api/disks/8290b2a2-a7d9-4f2f-a23e-1110c66dd3da",
      "id" : "8290b2a2-a7d9-4f2f-a23e-1110c66dd3da"
    },
    "vm" : {
      "href" : "/ovirt-engine/api/vms/91860544-b90e-4b0d-a4e3-20f0d7c72acf",
      "id" : "91860544-b90e-4b0d-a4e3-20f0d7c72acf"
    },
    "href" : "/ovirt-engine/api/vms/91860544-b90e-4b0d-a4e3-20f0d7c72acf/diskattachments/8290b2a2-a7d9-4f2f-a23e-1110c66dd3da",
    "id" : "8290b2a2-a7d9-4f2f-a23e-1110c66dd3da"
  } ]
}

* VM2
===============================================================
- Disk information on DB ————————————
[root@rhvm ~]# /usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "select device_id,logical_name from vm_device_view where device='disk' and vm_id = '50016c46-c626-4dd0-aba4-d80f27005357'"
              device_id               | logical_name
--------------------------------------+--------------
 d8dce279-94f9-452e-bfa3-b3a291cc3f7e |
 f93ac8ba-00c3-4167-99c3-8e3ca5f2513b | /dev/sda2
 63141989-2ee5-4f06-ae1f-cfea33475727 |

- Diskattachments REST api ———————————
[root@rhvm ~]# curl --request GET --header 'Version: 4' --header "Accept: application/json" --cacert rhvm_ca.pem --user "admin@internal:redhat" https://rhvm.jtredhat.com/ovirt-engine/api/vms/50016c46-c626-4dd0-aba4-d80f27005357/diskattachments
{
  "disk_attachment" : [ {
    "active" : "true",
    "bootable" : "false",
    "interface" : "virtio_scsi",
    "pass_discard" : "false",
    "read_only" : "false",
    "uses_scsi_reservation" : "false",
    "disk" : {
      "href" : "/ovirt-engine/api/disks/d8dce279-94f9-452e-bfa3-b3a291cc3f7e",
      "id" : "d8dce279-94f9-452e-bfa3-b3a291cc3f7e"
    },
    "vm" : {
      "href" : "/ovirt-engine/api/vms/50016c46-c626-4dd0-aba4-d80f27005357",
      "id" : "50016c46-c626-4dd0-aba4-d80f27005357"
    },
    "href" : "/ovirt-engine/api/vms/50016c46-c626-4dd0-aba4-d80f27005357/diskattachments/d8dce279-94f9-452e-bfa3-b3a291cc3f7e",
    "id" : "d8dce279-94f9-452e-bfa3-b3a291cc3f7e"
  }, {
    "active" : "true",
    "bootable" : "true",
    "interface" : "virtio_scsi",
    "logical_name" : "/dev/sda2",
    "pass_discard" : "false",
    "read_only" : "false",
    "uses_scsi_reservation" : "false",
    "disk" : {
      "href" : "/ovirt-engine/api/disks/f93ac8ba-00c3-4167-99c3-8e3ca5f2513b",
      "id" : "f93ac8ba-00c3-4167-99c3-8e3ca5f2513b"
    },
    "vm" : {
      "href" : "/ovirt-engine/api/vms/50016c46-c626-4dd0-aba4-d80f27005357",
      "id" : "50016c46-c626-4dd0-aba4-d80f27005357"
    },
    "href" : "/ovirt-engine/api/vms/50016c46-c626-4dd0-aba4-d80f27005357/diskattachments/f93ac8ba-00c3-4167-99c3-8e3ca5f2513b",
    "id" : "f93ac8ba-00c3-4167-99c3-8e3ca5f2513b"
  }, {
    "active" : "true",
    "bootable" : "false",
    "interface" : "virtio_scsi",
    "pass_discard" : "false",
    "read_only" : "false",
    "uses_scsi_reservation" : "false",
    "disk" : {
      "href" : "/ovirt-engine/api/disks/63141989-2ee5-4f06-ae1f-cfea33475727",
      "id" : "63141989-2ee5-4f06-ae1f-cfea33475727"
    },
    "vm" : {
      "href" : "/ovirt-engine/api/vms/50016c46-c626-4dd0-aba4-d80f27005357",
      "id" : "50016c46-c626-4dd0-aba4-d80f27005357"
    },
    "href" : "/ovirt-engine/api/vms/50016c46-c626-4dd0-aba4-d80f27005357/diskattachments/63141989-2ee5-4f06-ae1f-cfea33475727",
    "id" : "63141989-2ee5-4f06-ae1f-cfea33475727"
  } ]
}


This problem seems to be a bug because the logical name was provided by ovirt-guest-agent, but if not, please treat it as an RFE.

Comment 2 Ryan Barry 2020-05-18 00:24:15 UTC
Not a regression. It's a documented change in functionality. This is a clear duplicate of the RFE. Any reason to keep it open?

Comment 3 Germano Veit Michel 2020-05-18 00:26:21 UTC
(In reply to Ryan Barry from comment #2)
> Not a regression. It's a documented change in functionality. This is a clear
> duplicate of the RFE. Any reason to keep it open?

Do you mean BZ1793290? That is a bug for incorrect parsing of logical_name, not the RFE.
Or is there another RFE for this somewhere else?

Comment 4 Tomáš Golembiovský 2020-05-18 14:11:53 UTC
(In reply to Germano Veit Michel from comment #3)
> (In reply to Ryan Barry from comment #2)
> > Not a regression. It's a documented change in functionality. This is a clear
> > duplicate of the RFE. Any reason to keep it open?
> 
> Do you mean BZ1793290? That is a bug for incorrect parsing of logical_name,
> not the RFE.
> Or is there another RFE for this somewhere else?

I don't think anyone has opened the RFE bug yet, so let's use this one for it.

Note that this requires a fix in QEMU Guest Agent.
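
For context, the difference can be observed directly from the guest agent (a sketch using virsh on the host; the DOMAIN variable is a placeholder): guest-get-fsinfo only describes disks backing a mounted filesystem, while guest-get-disks, exposed by newer agents (in line with the 5.2 requirement noted in the Doc Text), reports all disks:

virsh qemu-agent-command "$DOMAIN" '{"execute": "guest-get-fsinfo"}' --pretty
virsh qemu-agent-command "$DOMAIN" '{"execute": "guest-get-disks"}' --pretty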

Comment 19 Sandro Bonazzola 2020-12-18 15:34:34 UTC
This bug is in POST status for oVirt 4.4.4. We are now in the blocker-only phase, so please either mark this as a blocker or re-target it.

Comment 25 Arik 2021-01-15 10:41:12 UTC
Cannot be tested without the fix for bz 1913818

Comment 30 Tamir 2021-02-14 08:48:34 UTC
Verified on RHV 4.4.5-5. All looks good to me.

Env:
  - Engine instance with RHV 4.4.5-5 (ovirt-engine-4.4.5.5-0.13.el8ev) and RHEL 8.3 installed.
  - 3 hosts with RHV 4.4.5-5 and RHEL 8.3 installed, running vdsm-4.40.50.5-1.el8ev and qemu-kvm.x86_64 15:5.1.0-19.module+el8.3.1+9795+4ce2a535

Steps:
1. Create a 4.5 data center and a 4.5 cluster.
2. Install the host and create a new NFS storage domain.
3. Import a RHEL 8.3 template from glance.
4. Create a VM from the RHEL 8.3 template with another disk.
5. Install qemu-guest-agent-4.2.0-34.module+el8.3.0+9828+7aab3355.3.x86_64.rpm in the VM.
6. Reboot the VM.
7. Check that the logical name of the created disk is visible both in the GUI and in the DB (see the query sketch below).
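
For step 7, the DB side can be checked with the same query used in the original report (the VM_ID variable is a placeholder for the VM's UUID):

[root@rhvm ~]# /usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "select device_id,logical_name from vm_device_view where device='disk' and vm_id = '${VM_ID}'"
# the logical_name column should now be populated for the unmounted disk as well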


Results (as expected):
1. The 4.5 data center and the 4.5 cluster were created.
2. The host was installed and the NFS storage domain was created.
3. The template was imported.
4. The VM was created.
5. qemu-guest-agent-4.2.0-34.module+el8.3.0+9828+7aab3355.3.x86_64 was installed in the VM.
6. The VM was shut down and came back up.
7. The logical name of the created disk is visible both in the GUI and in the DB.

Comment 36 errata-xmlrpc 2021-04-14 11:38:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: RHV RHEL Host (ovirt-host) 4.4.z [ovirt-4.4.5] security, bug fix, enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:1184