Bug 1859494

Summary: Report logical_name for disks without mounted file-system
Product: Red Hat Enterprise Linux 8
Reporter: Arik <ahadas>
Component: qemu-kvm
Assignee: Marc-Andre Lureau <marcandre.lureau>
qemu-kvm sub component: Guest Agent
QA Contact: dehanmeng <demeng>
Status: CLOSED ERRATA
Docs Contact:
Severity: urgent
Priority: urgent
CC: coli, gveitmic, jeokim, jferlan, jinzhao, jsuchane, juzhang, lijin, marcandre.lureau, mkalinin, mtessun, toneata, virt-maint, xiagao, zhguo
Version: 8.2
Keywords: Triaged, ZStream
Target Milestone: rc
Target Release: 8.0
Hardware: All
OS: All
Whiteboard:
Fixed In Version: qemu-kvm-4.2.0-39.module+el8.4.0+9248+2cae4f71
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1899527 1913818 (view as bug list)
Environment:
Last Closed: 2021-05-18 15:21:15 UTC
Type: Feature Request
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1877675, 1899527, 1913818

Description Arik 2020-07-22 09:40:25 UTC
In RHEL 7 and earlier, ovirt-guest-agent reported logical names even for disks with no mounted file-system. We need qemu-guest-agent to do the same on RHEL 8; otherwise, RHV users who upgrade their guests to RHEL 8 would miss that information.

Comment 1 Tomáš Golembiovský 2020-08-06 10:02:35 UTC
patch posted:

https://lists.nongnu.org/archive/html/qemu-devel/2020-08/msg00775.html

Comment 2 dehanmeng 2020-09-28 13:30:12 UTC
(In reply to Arik from comment #0)
> In RHEL 7 and below, ovirt-guest-agent reported logical names also of disks
> with no file-system mounted. We need qemu-guest-agent to do the same on RHEL
> 8, otherwise RHV users that upgrade their guests to RHEL 8 would miss that
> information.

Based on the description you updated, I have the following question to confirm with you.
1. What is the exact meaning of "logical names of disks with no file-system mounted"?

My understanding is that it means a new disk that is not formatted and has no mounted file-system, e.g. '/dev/sdb' or '/dev/vdb' on Linux and '\\?\Volume{52bf9a9a-da80-11ea-9a9d-806e6f6e6963}' on Windows, right?

Comment 3 Arik 2020-09-28 20:35:39 UTC
Yes, the logical names on Linux are like you've mentioned (e.g., /dev/vda). I'm not sure whether logical names can look like that on Windows; I saw things like \\.\PHYSICALDRIVE0 - maybe it depends on the Windows version.
The idea is to have those logical names reported also for disks with no file-system mounted in the guest OS.
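On the Linux side, this is straightforward to illustrate: the kernel assigns every block device a name under /sys/block whether or not any file system on it is mounted. A minimal sketch (assuming a Linux guest; the /sys/block path is the standard sysfs location, not anything specific to qemu-ga):

```python
import os

# Every Linux block device gets a kernel name under /sys/block, mounted
# or not, so the logical names discussed here (e.g. /dev/vda, /dev/sdb)
# can be listed without consulting mount tables at all.
sys_block = "/sys/block"
names = os.listdir(sys_block) if os.path.isdir(sys_block) else []
logical_names = sorted("/dev/" + n for n in names)
print(logical_names)
```

On a guest like the one in this bug, this would print entries such as /dev/sda and /dev/vda even when /dev/vda carries no file system.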

Comment 11 dehanmeng 2020-10-12 12:35:38 UTC
(In reply to Arik from comment #3)
> Yes, the logical names on linux are like you've mentioned (e.g., /dev/vda).
> Not sure whether logical names can be as you've mentioned on windows, I saw
> things like \\.\PHYSICALDRIVE0 - maybe it depends on the Windows version
> The idea is to have those logical names reported also for disks with no
> file-system mounted on the guest OS

Hi Arik, following your explanation I tried to reproduce this with RHEL 7.3 and RHEL 7.4 guests, but I didn't find a 'guest-get-disks' command in the supported qemu-guest-agent command set.

Comment 12 dehanmeng 2020-10-12 12:45:52 UTC
(In reply to dehanmeng from comment #11)
> (In reply to Arik from comment #3)
> > Yes, the logical names on linux are like you've mentioned (e.g., /dev/vda).
> > Not sure whether logical names can be as you've mentioned on windows, I saw
> > things like \\.\PHYSICALDRIVE0 - maybe it depends on the Windows version
> > The idea is to have those logical names reported also for disks with no
> > file-system mounted on the guest OS
> 
> Hi Arik, with your explanation and I tried to reproduce it with
> RHEL7.3,RHEL7.4 guest,But I didn't found 'guest-get-disk' command that
> qemu-guest-agent command set supported.

@ahadas

Comment 13 Tomáš Golembiovský 2020-10-12 13:10:39 UTC
(In reply to dehanmeng from comment #12)
> (In reply to dehanmeng from comment #11)
> > (In reply to Arik from comment #3)
> > > Yes, the logical names on linux are like you've mentioned (e.g., /dev/vda).
> > > Not sure whether logical names can be as you've mentioned on windows, I saw
> > > things like \\.\PHYSICALDRIVE0 - maybe it depends on the Windows version
> > > The idea is to have those logical names reported also for disks with no
> > > file-system mounted on the guest OS
> > 
> > Hi Arik, with your explanation and I tried to reproduce it with
> > RHEL7.3,RHEL7.4 guest,But I didn't found 'guest-get-disk' command that
> > qemu-guest-agent command set supported.
> 
> @ahadas

The described behavior refers to ovirt-guest-agent, which is a different agent. guest-get-disks is a new command that we plan to introduce into qemu-ga.

Comment 14 dehanmeng 2020-10-13 00:19:20 UTC
(In reply to Tomáš Golembiovský from comment #13)
> (In reply to dehanmeng from comment #12)
> > (In reply to dehanmeng from comment #11)
> > > (In reply to Arik from comment #3)
> > > > Yes, the logical names on linux are like you've mentioned (e.g., /dev/vda).
> > > > Not sure whether logical names can be as you've mentioned on windows, I saw
> > > > things like \\.\PHYSICALDRIVE0 - maybe it depends on the Windows version
> > > > The idea is to have those logical names reported also for disks with no
> > > > file-system mounted on the guest OS
> > > 
> > > Hi Arik, with your explanation and I tried to reproduce it with
> > > RHEL7.3,RHEL7.4 guest,But I didn't found 'guest-get-disk' command that
> > > qemu-guest-agent command set supported.
> > 
> > @ahadas
> 
> The described behavior is talking about ovirt-guest-agent which is a
> different agent. The guest-get-disks is a new command that we plan to
> introduce into qemu-ga.

Okay, then I'll keep following this bug and wait for the new qemu-ga command, thanks.

Comment 15 Tomáš Golembiovský 2020-10-15 15:28:48 UTC
The last patch version v4 is here: https://lists.nongnu.org/archive/html/qemu-devel/2020-10/msg03071.html
It is mostly reviewed (thanks Marc-Andre!), but I think we still need somebody to go through the windows part of the code.

If I read the dates correctly, the soft freeze for 5.2 is on Oct 27th. We cannot push the maintainer, but it would still be helpful to at least get the review process finished.

Comment 16 Tomáš Golembiovský 2020-11-09 12:17:52 UTC
The patches were merged to QEMU and will be in 5.2. John, what will be the next step here? Will this be backported downstream or will RHEL wait for rebase?

Comment 17 John Ferlan 2020-11-10 16:49:19 UTC
The qemu-5.2 release will be rebased for RHEL-AV 8.4.0, so unless you need RHEL-AV 8.3.1 support, then there's nothing to do for RHEL-AV.

In order to get the patches into a RHEL release, one would need to backport them to a specific RHEL release. The current RHEL release under development is 8.4.0. I can ask Marc-Andre to pick that up unless you feel comfortable posting downstream patches for qemu.

Will there need to be libvirt patches? It seems as if direct guest agent calls are being made, so perhaps not, but I'm not clear on how this is all implemented, so I figured I'd ask.

Comment 18 Tomáš Golembiovský 2020-11-10 22:15:18 UTC
(In reply to John Ferlan from comment #17)
> The qemu-5.2 release will be rebased for RHEL-AV 8.4.0, so unless you need
> RHEL-AV 8.3.1 support, then there's nothing to do for RHEL-AV.
> 
> In order to get patches into a RHEL release, one would need to backport the
> patches to a specific RHEL release.

Martin, which RHEL version did we want to target? I assume we want to squeeze it into RHEL 8.4.0?

> The current RHEL release under
> development is 8.4.0. I can ask Marc-Andre to pick that up unless you feel
> comfortable with posting downstream patches to/for qemu.

John, rather not. Last time I tried it ended up as a disaster. ;)

> 
> Will there need to be libvirt patches? It seems as if there's direct guest
> agent calls being made, so perhaps not, but I'm not clear on how this is all
> implemented so I figured I'd ask.

In VDSM we use the libvirt API to talk to the agent when it is available. If there is no API, we run the commands directly. As for this feature, there are no libvirt patches yet (and it is not on my list).

Comment 19 John Ferlan 2020-11-16 13:39:45 UTC
Let's target RHEL 8.4.0 for starters - if zstreams are desired, then those can be requested.

Comment 20 Marc-Andre Lureau 2020-11-19 13:34:30 UTC
I have a backport handy for 8.4 (qemu-kvm-4.2.0).

I suppose we also need libvirt support, which apparently nobody worked on yet upstream.

Comment 21 Martin Tessun 2020-11-26 08:57:26 UTC
(In reply to John Ferlan from comment #19)
> Let's target RHEL 8.4.0 for starters - if zstreams are desired, then those
> can be requested.

I am fine with 8.4
@ahadas Any concerns from your side?

Comment 22 Martin Tessun 2020-11-26 08:58:37 UTC
(In reply to Marc-Andre Lureau from comment #20)
> I have a backport handy for 8.4 (qemu-kvm-4.2.0).
> 
> I suppose we also need libvirt support, which apparently nobody worked on
> yet upstream.

@jsuchane Can you check if we need libvirt support for this? If so, could you please create the needed BZ?

Comment 23 Arik 2020-11-26 10:42:57 UTC
(In reply to Martin Tessun from comment #21)
> (In reply to John Ferlan from comment #19)
> > Let's target RHEL 8.4.0 for starters - if zstreams are desired, then those
> > can be requested.
> 
> I am fine with 8.4
> @ahadas Any concerns from your side?

We've been told this blocks upgrades to RHV 4.4, and we have it all set to be consumed in RHV 4.4.3 already - so I think this one is worth a backport to RHEL 8.3.z (without the libvirt support, which we'll switch to later; no need for AV)

Comment 24 Jaroslav Suchanek 2020-12-01 13:32:32 UTC
(In reply to Martin Tessun from comment #22)
> (In reply to Marc-Andre Lureau from comment #20)
> > I have a backport handy for 8.4 (qemu-kvm-4.2.0).
> > 
> > I suppose we also need libvirt support, which apparently nobody worked on
> > yet upstream.
> 
> @jsuchane Can you check if we need libvirt support for this? If
> so, could you please create the needed BZ?

Libvirt support is tracked in bug 1899527.

Marc-Andre posted initial patches here:
https://patchew.org/Libvirt/20201120180948.203254-1-marcandre.lureau@redhat.com/

Libvirt patches were pushed today and will be part of libvirt release 7.0.0.

Comment 34 dehanmeng 2020-12-25 03:07:49 UTC
Verified with version qemu-kvm-4.2.0-39.module+el8.4.0+9248+2cae4f71
Steps to Reproduce:

1. Start a guest with a virtio-serial device, start the guest agent inside the guest, and add two disks.
2. Connect to the chardev socket on the host side and send the following command to the guest:
# nc -U /tmp/qga.sock
{"execute":"guest-info"}
{"return": {"version": "4.2.0", "supported_commands": [{"enabled": true, "name": "guest-get-osinfo", "success-response": true}, {"enabled": true, "name": "guest-get-timezone", "success-xxxxxx
3. Execute guest-get-disks over the unix socket; the following test scenarios were covered:
1) Just get the disks info:
{"execute":"guest-get-disks"}
{"return": [{"name": "/dev/vda", "dependencies": [], "partition": false, "address": {"bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 7, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/vda", "target": 0}}, {"name": "/dev/sda1", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda2", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda", "dependencies": [], "partition": false, "address": {"bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": 5, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/sda", "target": 0}}, {"name": "/dev/dm-0", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-root"}, {"name": "/dev/dm-1", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-swap"}]}
Check on guest:
# lsblk
NAME                      MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda                         8:0    0  20G  0 disk
├─sda1                      8:1    0   1G  0 part /boot
└─sda2                      8:2    0  19G  0 part
  ├─rhel_vm--74--148-root 253:0    0  17G  0 lvm  /
  └─rhel_vm--74--148-swap 253:1    0   2G  0 lvm  [SWAP]
vda                       252:0    0  20G  0 disk

2) Power off the guest, add a new data.qcow2 disk, then reboot the guest and check the disk info:
{"execute":"guest-get-disks"}
{"return": [{"name": "/dev/vda", "dependencies": [], "partition": false, "address": {"bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 7, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/vda", "target": 0}}, {"name": "/dev/sda1", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda2", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda", "dependencies": [], "partition": false, "address": {"bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": 5, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/sda", "target": 0}}, {"name": "/dev/dm-0", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-root"}, {"name": "/dev/vdb", "dependencies": [], "partition": false, "address": {"bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 8, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/vdb", "target": 0}}, {"name": "/dev/dm-1", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-swap"}]}
Check on guest:
# lsblk
NAME                      MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda                         8:0    0  20G  0 disk 
├─sda1                      8:1    0   1G  0 part /boot
└─sda2                      8:2    0  19G  0 part 
  ├─rhel_vm--74--148-root 253:0    0  17G  0 lvm  /
  └─rhel_vm--74--148-swap 253:1    0   2G  0 lvm  [SWAP]
vda                       252:0    0  20G  0 disk 
vdb                       252:16   0  20G  0 disk

3) Partition, format and mount one of the disks, then check the info:
{"execute":"guest-get-disks"}
{"return": [{"name": "/dev/vda1", "dependencies": ["/dev/vda"], "partition": true}, {"name": "/dev/vda", "dependencies": [], "partition": false, "address": {"bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 7, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/vda", "target": 0}}, {"name": "/dev/sda1", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda2", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda", "dependencies": [], "partition": false, "address": {"bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": 5, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/sda", "target": 0}}, {"name": "/dev/dm-0", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-root"}, {"name": "/dev/vdb", "dependencies": [], "partition": false, "address": {"bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 8, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/vdb", "target": 0}}, {"name": "/dev/dm-1", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-swap"}]}
Check on guest:
# lsblk
NAME                      MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda                         8:0    0  20G  0 disk 
├─sda1                      8:1    0   1G  0 part /boot
└─sda2                      8:2    0  19G  0 part 
  ├─rhel_vm--74--148-root 253:0    0  17G  0 lvm  /
  └─rhel_vm--74--148-swap 253:1    0   2G  0 lvm  [SWAP]
vda                       252:0    0  20G  0 disk 
└─vda1                    252:1    0   1M  0 part /mnt/vda1-test
vdb                       252:16   0  20G  0 disk

4) Unmount and delete the partition in the guest, then check the disks info:
{"execute":"guest-get-disks"}
{"return": [{"name": "/dev/vda", "dependencies": [], "partition": false, "address": {"bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 7, "slot": 0, "domain": 0, "function": 0}, "dev": 
"/dev/vda", "target": 0}}, {"name": "/dev/sda1", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda2", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda", "dependencies": [], "partition": false, "address": {"bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": 5, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/sda", "target": 0}}, {"name": "/dev/dm-0", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-root"}, {"name": "/dev/vdb", "dependencies": [], "partition": false, "address": {"bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 8, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/vdb", "target": 0}}, {"name": "/dev/dm-1", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-swap"}]}
Check on guest:
# lsblk
NAME                      MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda                         8:0    0  20G  0 disk 
├─sda1                      8:1    0   1G  0 part /boot
└─sda2                      8:2    0  19G  0 part 
  ├─rhel_vm--74--148-root 253:0    0  17G  0 lvm  /
  └─rhel_vm--74--148-swap 253:1    0   2G  0 lvm  [SWAP]
vda                       252:0    0  20G  0 disk 
vdb                       252:16   0  20G  0 disk

5) Remove one disk from the guest and check the disks info:
{"execute":"guest-get-disks"}
{"return": [{"name": "/dev/vda", "dependencies": [], "partition": false, "address": {"bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 7, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/vda", "target": 0}}, {"name": "/dev/sda1", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda2", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda", "dependencies": [], "partition": false, "address": {"bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": 5, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/sda", "target": 0}}, {"name": "/dev/dm-0", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-root"}, {"name": "/dev/dm-1", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-swap"}]} 
Check on guest:
# lsblk
NAME                      MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda                         8:0    0  20G  0 disk 
├─sda1                      8:1    0   1G  0 part /boot
└─sda2                      8:2    0  19G  0 part 
  ├─rhel_vm--74--148-root 253:0    0  17G  0 lvm  /
  └─rhel_vm--74--148-swap 253:1    0   2G  0 lvm  [SWAP]
vda                       252:0    0  20G  0 disk
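The replies above can be post-processed to recover the view this bug is about: the top-level disks that are reported whether or not any file system is mounted on them. A minimal sketch, using an abridged copy of the scenario-1 reply (the "address" fields are omitted for brevity):

```python
import json

# Abridged guest-get-disks reply from scenario 1 above.
reply = json.loads("""
{"return": [
  {"name": "/dev/vda",  "dependencies": [],            "partition": false},
  {"name": "/dev/sda1", "dependencies": ["/dev/sda"],  "partition": true},
  {"name": "/dev/sda2", "dependencies": ["/dev/sda"],  "partition": true},
  {"name": "/dev/sda",  "dependencies": [],            "partition": false},
  {"name": "/dev/dm-0", "dependencies": ["/dev/sda2"], "partition": false,
   "alias": "rhel_vm--74--148-root"},
  {"name": "/dev/dm-1", "dependencies": ["/dev/sda2"], "partition": false,
   "alias": "rhel_vm--74--148-swap"}
]}
""")

# Top-level disks: not partitions, and not stacked on another block device.
top_level = [d["name"] for d in reply["return"]
             if not d["partition"] and not d["dependencies"]]
print(top_level)  # ['/dev/vda', '/dev/sda']
```

Note that /dev/vda makes the list even though, per the lsblk output, it carries no mounted file system - which is exactly the behavior this bug requested.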

Comment 43 Oneata Mircea Teodor 2021-03-25 12:52:08 UTC
zstream approval and the Feature keyword were removed; the BZ was not approved during the blocker meeting call. Feel free to close this BZ.

Comment 45 errata-xmlrpc 2021-05-18 15:21:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:1762