Description of problem:

In Compute -> Virtual Machines -> RHEL8 -> Disks -> Logical Name, the user expects to see the path to the disk device inside the VM (i.e. /dev/sda). This comes from VDSM guestDiskMapping. When ovirt-guest-agent is not present (RHEL8), this information comes from qemu-guest-agent. However, VDSM seems to be using the information from "guest-get-fsinfo", which returns the mounted filesystems in the guest, which can be problematic.

1. See this RHEL8 guest, with 2 disks and only 1 disk with a filesystem mounted; the entry for the second disk is missing:

"guestDiskMapping": {
    "0QEMU_QEMU_HARDDISK_af3c6af3-0154-4319-b": {
        "name": "/dev/sda2"
    }
},

The user sees this; there is no mapping for the second disk:

engine=# select device_id,logical_name from vm_device_view where device='disk' and vm_id = '7606d359-b199-483c-8c05-92a4392b970a';
              device_id               | logical_name
--------------------------------------+--------------
 8abc438a-46a4-4961-bd78-ad6ab376e526 |
 af3c6af3-0154-4319-b368-0f64d24d7d72 | /dev/sda2

2. Now I create 4 partitions on the second disk (sdb), a filesystem on sdb4, and mount it.

VDSM reports:

"guestDiskMapping": {
    "0QEMU_QEMU_HARDDISK_af3c6af3-0154-4319-b": {
        "name": "/dev/sda2"
    },
    "0QEMU_QEMU_HARDDISK_8abc438a-46a4-4961-b": {
        "name": "/dev/sdb4"
    }
},

The user sees:

engine=# select device_id,logical_name from vm_device_view where device='disk' and vm_id = '7606d359-b199-483c-8c05-92a4392b970a';
              device_id               | logical_name
--------------------------------------+--------------
 8abc438a-46a4-4961-bd78-ad6ab376e526 | /dev/sdb4
 af3c6af3-0154-4319-b368-0f64d24d7d72 | /dev/sda2

Both are wrong; they should be /dev/sdb and /dev/sda.

3. Add a third disk, with LVM; the mapping is missing again:

engine=# select device_id,logical_name from vm_device_view where device='disk' and vm_id = '7606d359-b199-483c-8c05-92a4392b970a';
              device_id               | logical_name
--------------------------------------+--------------
 8abc438a-46a4-4961-bd78-ad6ab376e526 | /dev/sdb4
 af3c6af3-0154-4319-b368-0f64d24d7d72 | /dev/sda2
 b603caa2-6b39-4e52-9182-a915e1764186 |

4. Create a filesystem on the LVM and mount it. Finally it shows up, and it is the only correct one this time:

engine=# select device_id,logical_name from vm_device_view where device='disk' and vm_id = '7606d359-b199-483c-8c05-92a4392b970a';
              device_id               | logical_name
--------------------------------------+--------------
 8abc438a-46a4-4961-bd78-ad6ab376e526 | /dev/sdb4
 af3c6af3-0154-4319-b368-0f64d24d7d72 | /dev/sda2
 b603caa2-6b39-4e52-9182-a915e1764186 | /dev/sdc

So the problems are:
* No mapping if the filesystem is not mounted
* The mapping can point to a partition instead of the whole device

Version-Release number of selected component (if applicable):
vdsm-4.30.38-1.el7ev.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Install a RHEL8 guest.
2. Notice that 'Logical Name' in the VM Disks tab comes from mounted filesystem device paths, not the disks.
3. If a filesystem is not mounted, no Logical Name is reported.

Actual results:
* Missing or wrong Logical Names reported

Expected results:
* Correct Logical Names for disks

Additional info:
For reference, for the example I gave above:
A) Inside the Guest:

/dev/mapper/rhel-root on / type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/sda1 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/sdb4 on /mnt type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/germano_vg-lvol0 on /mnt2 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

B) vdsm-client VM getInfo vmID=7606d359-b199-483c-8c05-92a4392b970a

"guestDiskMapping": {
    "0QEMU_QEMU_HARDDISK_af3c6af3-0154-4319-b": {
        "name": "/dev/sda2"
    },
    "0QEMU_QEMU_HARDDISK_b603caa2-6b39-4e52-9": {
        "name": "/dev/sdc"
    },
    "0QEMU_QEMU_HARDDISK_8abc438a-46a4-4961-b": {
        "name": "/dev/sdb4"
    }
},

C) virsh qemu-agent-command RHEL8 '{"execute":"guest-get-fsinfo"}'

{
  "return": [
    {
      "disk": [
        {
          "bus": 0,
          "bus-type": "scsi",
          "dev": "/dev/sdc",
          "pci-controller": { "bus": 0, "domain": 0, "function": 0, "slot": 5 },
          "serial": "0QEMU_QEMU_HARDDISK_b603caa2-6b39-4e52-9",
          "target": 0,
          "unit": 3
        }
      ],
      "mountpoint": "/mnt2",
      "name": "dm-2",
      "type": "xfs"
    },
    {
      "disk": [
        {
          "bus": 0,
          "bus-type": "scsi",
          "dev": "/dev/sdb4",
          "pci-controller": { "bus": 0, "domain": 0, "function": 0, "slot": 5 },
          "serial": "0QEMU_QEMU_HARDDISK_8abc438a-46a4-4961-b",
          "target": 0,
          "unit": 2
        }
      ],
      "mountpoint": "/mnt",
      "name": "sdb4",
      "type": "xfs"
    },
    {
      "disk": [
        {
          "bus": 0,
          "bus-type": "scsi",
          "dev": "/dev/sda1",
          "pci-controller": { "bus": 0, "domain": 0, "function": 0, "slot": 5 },
          "serial": "0QEMU_QEMU_HARDDISK_af3c6af3-0154-4319-b",
          "target": 0,
          "unit": 0
        }
      ],
      "mountpoint": "/boot",
      "name": "sda1",
      "type": "xfs"
    },
    {
      "disk": [
        {
          "bus": 0,
          "bus-type": "scsi",
          "dev": "/dev/sda2",
          "pci-controller": { "bus": 0, "domain": 0, "function": 0, "slot": 5 },
          "serial": "0QEMU_QEMU_HARDDISK_af3c6af3-0154-4319-b",
          "target": 0,
          "unit": 0
        }
      ],
      "mountpoint": "/",
      "name": "dm-0",
      "type": "xfs"
    }
  ]
}
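To illustrate the mechanism, here is a minimal Python sketch (hypothetical, not the actual VDSM code) of how a serial-to-device mapping built naively from the guest-get-fsinfo reply above produces exactly the bad results seen in section B: whatever path qga reports per mounted filesystem (partition or whole disk) becomes the logical name, later entries with the same serial overwrite earlier ones, and disks with no mounted filesystem never appear at all.

```python
import json

# Trimmed guest-get-fsinfo reply from section C above; only the
# fields relevant to the mapping are kept.
fsinfo = json.loads("""
{"return": [
  {"disk": [{"dev": "/dev/sdc",
             "serial": "0QEMU_QEMU_HARDDISK_b603caa2-6b39-4e52-9"}],
   "mountpoint": "/mnt2", "name": "dm-2"},
  {"disk": [{"dev": "/dev/sdb4",
             "serial": "0QEMU_QEMU_HARDDISK_8abc438a-46a4-4961-b"}],
   "mountpoint": "/mnt", "name": "sdb4"},
  {"disk": [{"dev": "/dev/sda1",
             "serial": "0QEMU_QEMU_HARDDISK_af3c6af3-0154-4319-b"}],
   "mountpoint": "/boot", "name": "sda1"},
  {"disk": [{"dev": "/dev/sda2",
             "serial": "0QEMU_QEMU_HARDDISK_af3c6af3-0154-4319-b"}],
   "mountpoint": "/", "name": "dm-0"}
]}
""")

def disk_mapping(reply):
    # Naive serial -> device mapping: the fsinfo "dev" path is used as-is,
    # so partition paths leak through and unmounted disks are absent.
    mapping = {}
    for fs in reply["return"]:
        for disk in fs["disk"]:
            mapping[disk["serial"]] = {"name": disk["dev"]}
    return mapping

print(disk_mapping(fsinfo))
# The af3c6af3 serial ends up mapped to the partition /dev/sda2 (the "/"
# entry overwrites the "/boot" one), and sdb is reported as /dev/sdb4 --
# matching the guestDiskMapping shown in section B.
```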
Note that disks with LVM also get a wrong mapping when the LVM PV sits on a partition instead of the whole disk.
This seems to be as good as it can get with the current qga in RHEL8. But it's certainly a step back compared to RHEL7 and ovirt-guest-agent: I get correct device names (and not partitions) on RHEL7 guests, even with no filesystems mounted.
Sorry for the delay. Basically, there are two issues that you are reporting:

1) The path refers to the partition (/dev/sda3) instead of the device (/dev/sda) -- this is possibly a bug in VDSM and can be fixed.

2) A disk is not reported if there is no filesystem on it -- this is a known limitation; if the feature is needed, an RFE bug has to be opened.
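For issue 1, the normalization could look something like the following sketch (hypothetical illustration, not the actual VDSM patch): strip a trailing partition number from the path reported by guest-get-fsinfo before storing it in guestDiskMapping.

```python
import re

def disk_device(path):
    """Strip a trailing partition number from a device path.

    Hypothetical sketch of the normalization, not the actual VDSM fix:
    /dev/sda2 -> /dev/sda, /dev/vdb1 -> /dev/vdb. NVMe-style names
    (/dev/nvme0n1p2) use a 'p<N>' suffix and need a separate rule;
    device-mapper paths are left untouched.
    """
    m = re.match(r"^(/dev/(?:sd|vd|hd)[a-z]+)\d+$", path)
    if m:
        return m.group(1)
    m = re.match(r"^(/dev/nvme\d+n\d+)p\d+$", path)
    if m:
        return m.group(1)
    return path  # already a whole disk, or an unrecognized naming scheme

print(disk_device("/dev/sda2"))  # -> /dev/sda
print(disk_device("/dev/sdb4"))  # -> /dev/sdb
print(disk_device("/dev/sdc"))   # -> /dev/sdc (unchanged)
```

This only addresses the partition-vs-device problem; issue 2 (unmounted disks missing entirely) cannot be fixed by post-processing, since guest-get-fsinfo simply never reports them.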
*** Bug 1819382 has been marked as a duplicate of this bug. ***
Any more patches? If not, please move to MODIFIED.
Was this included in this week's build?
Yes it was.
$ git tag --contains 66dd1d84ddb918f869409e3dc417ea1a77b61df3
v4.40.14
v4.40.15
v4.40.16
v4.40.17
v4.40.18
v4.40.19
v4.40.20
v4.40.21
too late for 4.4.1, so what's the current state?
(In reply to Michal Skrivanek from comment #15)
> too late for 4.4.1, so what's the current state?

Waiting for review.
(In reply to Tomáš Golembiovský from comment #17)
> (In reply to Michal Skrivanek from comment #15)
> > too late for 4.4.1, so what's the current state?
>
> Waiting for review.

What I mean is: what's the current behavior with 107170 merged and 110212 not? Because that's what's going out...
Devices with a partition number are still being reported. Patch 107170 was for the qemu-ga polling with direct commands, and that code is not used anymore. When we switched to querying using the libvirt API, this fix was missed, which created a regression. Patch 110212 fixes the libvirt API querying code.
Verified with:
vdsm-4.40.24-1.el8ev.x86_64
ovirt-engine-4.4.2.1-0.15.el8ev.noarch

Steps:
1. Create a VM from a template (latest-rhel-guest-image-8.2-infra)
2. Create a new disk
3. Run the VM
4. Partition the new disk and mount a fs inside the VM
5. Check whether the disks' logical names are correct

Results:
1. The logical names are /dev/vda, /dev/sda, which are correct.

Disks info in the VM:

[root@vm-30-57 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   10G  0 disk
├─sda1   8:1    0    2G  0 part /root/test1
├─sda2   8:2    0    3G  0 part
└─sda3   8:3    0    5G  0 part
sr0     11:0    1 1024M  0 rom
vda    253:0    0   10G  0 disk
├─vda1 253:1    0    1M  0 part
├─vda2 253:2    0  100M  0 part /boot/efi
└─vda3 253:3    0  9.9G  0 part /

Logical names in engine:

engine=# select device_id,logical_name from vm_device_view where device='disk' and vm_id = '9038088b-316c-41b5-abf7-7adf463293e1';
              device_id               | logical_name
--------------------------------------+--------------
 b93bbe68-4b17-4f69-8e0c-af8be29717e1 | /dev/vda
 5a2a0e79-832e-47ca-abeb-e4067d87b37d | /dev/sda
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (RHV RHEL Host (ovirt-host) 4.4.z [ovirt-4.4.2]), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:3822