In RHEL 7 and below, ovirt-guest-agent reported the logical names of disks even when no file system was mounted on them. We need qemu-guest-agent to do the same on RHEL 8; otherwise, RHV users who upgrade their guests to RHEL 8 will lose that information.
patch posted: https://lists.nongnu.org/archive/html/qemu-devel/2020-08/msg00775.html
(In reply to Arik from comment #0)
> In RHEL 7 and below, ovirt-guest-agent reported logical names also of disks
> with no file-system mounted. We need qemu-guest-agent to do the same on RHEL
> 8, otherwise RHV users that upgrade their guests to RHEL 8 would miss that
> information.

Based on the description, I have the following question to confirm with you:

1. What exactly is meant by the logical names of disks with no file system mounted? My understanding is that this refers to a new disk that is unformatted and has no mounted file system, e.g. /dev/sdb or /dev/vdb on Linux and \\?\Volume{52bf9a9a-da80-11ea-9a9d-806e6f6e6963} on Windows. Is that right?
Yes, the logical names on Linux are like you mentioned (e.g., /dev/vda). I am not sure whether logical names on Windows can look like the one you mentioned; I have seen things like \\.\PHYSICALDRIVE0, so maybe it depends on the Windows version. The idea is to have those logical names reported also for disks with no file system mounted on the guest OS.
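To make the requirement concrete: a consumer such as oVirt could combine the planned guest-get-disks output with guest-get-fsinfo (which only lists mounted file systems) to find exactly these unmounted disks. A minimal sketch, with sample responses invented for illustration but modeled on the JSON shapes shown later in this bug:

```python
# Sketch (not part of the bug): given sample qemu-ga responses, list whole
# disks that have no file system mounted on them or on any of their children.
import json

# Hypothetical guest-get-disks reply: one system disk with a partition,
# plus one brand-new, unformatted disk (/dev/vdb).
get_disks = json.loads('''{"return": [
  {"name": "/dev/sda",  "dependencies": [],           "partition": false},
  {"name": "/dev/sda1", "dependencies": ["/dev/sda"], "partition": true},
  {"name": "/dev/vdb",  "dependencies": [],           "partition": false}]}''')

# Hypothetical guest-get-fsinfo reply: only mounted file systems appear here.
get_fsinfo = json.loads('''{"return": [
  {"name": "sda1", "mountpoint": "/boot", "type": "xfs",
   "disk": [{"dev": "/dev/sda1"}]}]}''')

# Device nodes that back some mounted file system.
mounted = {d["dev"] for fs in get_fsinfo["return"] for d in fs.get("disk", [])}

def unmounted_disks(disks, mounted):
    """Whole disks (not partitions) with nothing mounted in their subtree."""
    children = {}
    for d in disks:
        for parent in d.get("dependencies", []):
            children.setdefault(parent, []).append(d["name"])
    def subtree(name):
        yield name
        for child in children.get(name, []):
            yield from subtree(child)
    return [d["name"] for d in disks
            if not d["partition"] and not d.get("dependencies")
            and mounted.isdisjoint(subtree(d["name"]))]

print(unmounted_disks(get_disks["return"], mounted))  # ['/dev/vdb']
```

This is exactly the information ovirt-guest-agent used to report on RHEL 7 and that guest-get-fsinfo alone cannot provide.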
(In reply to Arik from comment #3)
> Yes, the logical names on linux are like you've mentioned (e.g., /dev/vda).
> Not sure whether logical names can be as you've mentioned on windows, I saw
> things like \\.\PHYSICALDRIVE0 - maybe it depends on the Windows version
> The idea is to have those logical names reported also for disks with no
> file-system mounted on the guest OS

Hi Arik, with your explanation I tried to reproduce this with RHEL 7.3 and RHEL 7.4 guests, but I did not find a 'guest-get-disks' command in the supported qemu-guest-agent command set.
(In reply to dehanmeng from comment #11)
> Hi Arik, with your explanation and I tried to reproduce it with
> RHEL7.3,RHEL7.4 guest,But I didn't found 'guest-get-disk' command that
> qemu-guest-agent command set supported.

@ahadas
(In reply to dehanmeng from comment #12)
> > Hi Arik, with your explanation and I tried to reproduce it with
> > RHEL7.3,RHEL7.4 guest,But I didn't found 'guest-get-disk' command that
> > qemu-guest-agent command set supported.
>
> @ahadas

The described behavior refers to ovirt-guest-agent, which is a different agent. guest-get-disks is a new command that we plan to introduce into qemu-ga.
(In reply to Tomáš Golembiovský from comment #13)
> The described behavior is talking about ovirt-guest-agent which is a
> different agent. The guest-get-disks is a new command that we plan to
> introduce into qemu-ga.

Okay, then I'll keep following this bug and wait for the new qemu-ga command. Thanks.
The latest patch version, v4, is here:

https://lists.nongnu.org/archive/html/qemu-devel/2020-10/msg03071.html

It is mostly reviewed (thanks, Marc-Andre!), but I think we still need somebody to go through the Windows part of the code. If I read the dates correctly, the soft freeze for 5.2 is on Oct 27th. We cannot push the maintainer, but it would still be helpful to at least get the review process finished.
The patches were merged into QEMU and will be in 5.2. John, what will be the next step here? Will this be backported downstream, or will RHEL wait for a rebase?
The qemu-5.2 release will be rebased for RHEL-AV 8.4.0, so unless you need RHEL-AV 8.3.1 support, there's nothing to do for RHEL-AV.

In order to get patches into a RHEL release, one would need to backport the patches to a specific RHEL release. The current RHEL release under development is 8.4.0. I can ask Marc-Andre to pick that up unless you feel comfortable with posting downstream patches to/for qemu.

Will there need to be libvirt patches? It seems as if there are direct guest-agent calls being made, so perhaps not, but I'm not clear on how this is all implemented, so I figured I'd ask.
(In reply to John Ferlan from comment #17)
> The qemu-5.2 release will be rebased for RHEL-AV 8.4.0, so unless you need
> RHEL-AV 8.3.1 support, then there's nothing to do for RHEL-AV.
>
> In order to get patches into a RHEL release, one would need to backport the
> patches to a specific RHEL release.

Martin, which RHEL version do we want to target? I assume we want to squeeze it into RHEL 8.4.0?

> The current RHEL release under
> development is 8.4.0. I can ask Marc-Andre to pick that up unless you feel
> comfortable with posting downstream patches to/for qemu.

John, rather not. The last time I tried, it ended up as a disaster. ;)

> Will there need to be libvirt patches? It seems as if there's direct guest
> agent calls being made, so perhaps not, but I'm not clear on how this is all
> implemented so I figured I'd ask.

In VDSM we use the libvirt API to the agent when it is available. If there is no API, we run the commands directly. As for this feature, there are no libvirt patches yet (and it is not on my list).
Let's target RHEL 8.4.0 for starters - if zstreams are desired, then those can be requested.
I have a backport handy for 8.4 (qemu-kvm-4.2.0).

I suppose we also need libvirt support, which apparently nobody has worked on yet upstream.
(In reply to John Ferlan from comment #19)
> Let's target RHEL 8.4.0 for starters - if zstreams are desired, then those
> can be requested.

I am fine with 8.4.

@ahadas Any concerns from your side?
(In reply to Marc-Andre Lureau from comment #20)
> I have a backport handy for 8.4 (qemu-kvm-4.2.0).
>
> I suppose we also need libvirt support, which apparently nobody worked on
> yet upstream.

@jsuchane Can you check whether we need libvirt support for this? If so, could you please create the needed BZ?
(In reply to Martin Tessun from comment #21)
> I am fine with 8.4
> @ahadas Any concerns from your side?

We've been told this blocks upgrades to RHV 4.4, and we have it all set to be consumed in RHV 4.4.3 already, so I think this one is worth a backport to RHEL 8.3.z (without the libvirt support, which we'll switch to later; no need for AV).
(In reply to Martin Tessun from comment #22)
> @jsuchane Can you check if we need libvirt support for this? If
> so, could you please create the needed BZ?

Libvirt support is tracked in bug 1899527. Marc-Andre posted initial patches here:

https://patchew.org/Libvirt/20201120180948.203254-1-marcandre.lureau@redhat.com/

The libvirt patches were pushed today and will be part of the libvirt 7.0.0 release.
Verified with version qemu-kvm-4.2.0-39.module+el8.4.0+9248+2cae4f71.

Steps to Reproduce:
1. Start a guest with a virtio serial channel, start the guest agent inside the guest, and add two disks.
2. Connect to the chardev socket on the host side to send commands to the guest:

# nc -U /tmp/qga.sock
{"execute":"guest-info"}
{"return": {"version": "4.2.0", "supported_commands": [{"enabled": true, "name": "guest-get-osinfo", "success-response": true}, {"enabled": true, "name": "guest-get-timezone", "success-xxxxxx

3. Execute guest-get-disks on the unix socket in the following test situations:

1) Just get the disks info:

{"execute":"guest-get-disks"}
{"return": [{"name": "/dev/vda", "dependencies": [], "partition": false, "address": {"bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 7, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/vda", "target": 0}}, {"name": "/dev/sda1", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda2", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda", "dependencies": [], "partition": false, "address": {"bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": 5, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/sda", "target": 0}}, {"name": "/dev/dm-0", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-root"}, {"name": "/dev/dm-1", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-swap"}]}

Check on guest:

# lsblk
NAME                      MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda                         8:0    0  20G  0 disk
├─sda1                      8:1    0   1G  0 part /boot
└─sda2                      8:2    0  19G  0 part
  ├─rhel_vm--74--148-root 253:0    0  17G  0 lvm  /
  └─rhel_vm--74--148-swap 253:1    0   2G  0 lvm  [SWAP]
vda                       252:0    0  20G  0 disk

2) Power off the guest, add a new data.qcow2 disk, then reboot the guest and check the disk info:

{"execute":"guest-get-disks"}
{"return": [{"name": "/dev/vda", "dependencies": [], "partition": false, "address": {"bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 7, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/vda", "target": 0}}, {"name": "/dev/sda1", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda2", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda", "dependencies": [], "partition": false, "address": {"bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": 5, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/sda", "target": 0}}, {"name": "/dev/dm-0", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-root"}, {"name": "/dev/vdb", "dependencies": [], "partition": false, "address": {"bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 8, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/vdb", "target": 0}}, {"name": "/dev/dm-1", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-swap"}]}

Check on guest:

# lsblk
NAME                      MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda                         8:0    0  20G  0 disk
├─sda1                      8:1    0   1G  0 part /boot
└─sda2                      8:2    0  19G  0 part
  ├─rhel_vm--74--148-root 253:0    0  17G  0 lvm  /
  └─rhel_vm--74--148-swap 253:1    0   2G  0 lvm  [SWAP]
vda                       252:0    0  20G  0 disk
vdb                       252:16   0  20G  0 disk

3) Partition, format, and mount one of the disks, then check the info:

{"execute":"guest-get-disks"}
{"return": [{"name": "/dev/vda1", "dependencies": ["/dev/vda"], "partition": true}, {"name": "/dev/vda", "dependencies": [], "partition": false, "address": {"bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 7, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/vda", "target": 0}}, {"name": "/dev/sda1", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda2", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda", "dependencies": [], "partition": false, "address": {"bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": 5, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/sda", "target": 0}}, {"name": "/dev/dm-0", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-root"}, {"name": "/dev/vdb", "dependencies": [], "partition": false, "address": {"bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 8, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/vdb", "target": 0}}, {"name": "/dev/dm-1", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-swap"}]}

Check on guest:

# lsblk
NAME                      MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda                         8:0    0  20G  0 disk
├─sda1                      8:1    0   1G  0 part /boot
└─sda2                      8:2    0  19G  0 part
  ├─rhel_vm--74--148-root 253:0    0  17G  0 lvm  /
  └─rhel_vm--74--148-swap 253:1    0   2G  0 lvm  [SWAP]
vda                       252:0    0  20G  0 disk
└─vda1                    252:1    0   1M  0 part /mnt/vda1-test
vdb                       252:16   0  20G  0 disk

4) Unmount and delete the partition in the guest and check the disks info:

{"execute":"guest-get-disks"}
{"return": [{"name": "/dev/vda", "dependencies": [], "partition": false, "address": {"bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 7, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/vda", "target": 0}}, {"name": "/dev/sda1", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda2", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda", "dependencies": [], "partition": false, "address": {"bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": 5, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/sda", "target": 0}}, {"name": "/dev/dm-0", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-root"}, {"name": "/dev/vdb", "dependencies": [], "partition": false, "address": {"bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 8, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/vdb", "target": 0}}, {"name": "/dev/dm-1", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-swap"}]}

Check on guest:

# lsblk
NAME                      MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda                         8:0    0  20G  0 disk
├─sda1                      8:1    0   1G  0 part /boot
└─sda2                      8:2    0  19G  0 part
  ├─rhel_vm--74--148-root 253:0    0  17G  0 lvm  /
  └─rhel_vm--74--148-swap 253:1    0   2G  0 lvm  [SWAP]
vda                       252:0    0  20G  0 disk
vdb                       252:16   0  20G  0 disk

5) Delete one disk from the guest and check the disks info:

{"execute":"guest-get-disks"}
{"return": [{"name": "/dev/vda", "dependencies": [], "partition": false, "address": {"bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 7, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/vda", "target": 0}}, {"name": "/dev/sda1", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda2", "dependencies": ["/dev/sda"], "partition": true}, {"name": "/dev/sda", "dependencies": [], "partition": false, "address": {"bus-type": "scsi", "bus": 0, "unit": 0, "pci-controller": {"bus": 5, "slot": 0, "domain": 0, "function": 0}, "dev": "/dev/sda", "target": 0}}, {"name": "/dev/dm-0", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-root"}, {"name": "/dev/dm-1", "dependencies": ["/dev/sda2"], "partition": false, "alias": "rhel_vm--74--148-swap"}]}

Check on guest:

# lsblk
NAME                      MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda                         8:0    0  20G  0 disk
├─sda1                      8:1    0   1G  0 part /boot
└─sda2                      8:2    0  19G  0 part
  ├─rhel_vm--74--148-root 253:0    0  17G  0 lvm  /
  └─rhel_vm--74--148-swap 253:1    0   2G  0 lvm  [SWAP]
vda                       252:0    0  20G  0 disk
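The nc-based steps above can also be scripted. A minimal sketch that speaks the same line-based JSON protocol over the /tmp/qga.sock path used in the steps; the helper names are ours, and actually running qga_execute requires the guest set up as described above:

```python
# Sketch: issue a qemu-guest-agent command over its UNIX chardev socket,
# as done with `nc -U /tmp/qga.sock` in the verification steps.
import json
import socket

def qga_execute(sock_path, command, timeout=5.0):
    """Send one qemu-ga command and return the parsed JSON reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        s.connect(sock_path)
        s.sendall(json.dumps({"execute": command}).encode() + b"\n")
        buf = b""
        while True:
            buf += s.recv(4096)
            try:
                return json.loads(buf)  # reply complete once it parses
            except ValueError:
                continue                # keep reading

def disk_names(reply):
    """Names of whole disks (partitions excluded) in a guest-get-disks reply."""
    return sorted(d["name"] for d in reply["return"] if not d["partition"])

# Usage (requires the running guest from the steps above):
#   reply = qga_execute("/tmp/qga.sock", "guest-get-disks")
#   print(disk_names(reply))
```

Comparing disk_names() against `lsblk` in the guest automates the "Check on guest" step for each scenario.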
Zstream approval and the Feature flag were removed; the BZ was not approved during the blocker meeting call. Feel free to close this BZ.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:1762