Bug 1635571

Summary: [RFE] Report disk device name and serial number (qemu-guest-agent on Linux)
Product: Red Hat Enterprise Linux 7 Reporter: Tomáš Golembiovský <tgolembi>
Component: qemu-guest-agent    Assignee: Marc-Andre Lureau <marcandre.lureau>
Status: CLOSED ERRATA QA Contact: FuXiangChun <xfu>
Severity: high Docs Contact:
Priority: high    
Version: 7.5    CC: chayang, jen, juzhang, knoel, lvrabec, marcandre.lureau, mmalik, mrezanin, mtessun, plautrba, ssekidde, tgolembi, vmojzis, xfu, zpytela
Target Milestone: rc    Keywords: FutureFeature
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: qemu-guest-agent-2.12.0-3.el7 Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1636178 1636185 1663092 (view as bug list)    Environment:
Last Closed: 2019-08-06 12:51:26 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1663092, 1687721    
Bug Blocks: 1636178, 1636186, 1651787    

Description Tomáš Golembiovský 2018-10-03 09:38:54 UTC
In order to sunset the use of the oVirt Guest Agent in RHV, we are transitioning its features into the QEMU Guest Agent.

One of the required features is the ability to map disks to devices in the guest. Internally this is done by assigning a unique serial number to each disk; the agent then reports the available disks and their assigned serial numbers.

The initial draft of the feature for QEMU-GA is here:

https://lists.nongnu.org/archive/html/qemu-devel/2018-09/msg00782.html

Comment 3 Tomáš Golembiovský 2018-10-03 09:44:25 UTC
The new code depends on part of the Windows code that is currently disabled in the
guest agent. There is no clear way to deal with the disabled code.


Detailed description:

Part of the Windows code in the guest agent is broken (the PCI controller info). This
broken code is part of a larger block of code that is disabled in the agent.
Originally it was disabled as a result of a bug, but it is now kept disabled in
order not to expose the broken part. Unfortunately nobody has a clear idea how to
fix the broken PCI controller info. New features depend on the disabled code
but do not require the broken part.


There are three ways out of this:

1) enable everything including the broken code

   So far the maintainer is against this solution.

2) enable the disabled code and keep disabled only the broken part

   From our perspective this is OK as we don't need the broken part (PCI
   controller info). The rest of the disabled code is either OK or
   fixable.

3) fix the PCI controller info code and enable everything

   This would require assistance from somebody skilled in Windows API
   who knows the dark arts of querying device objects.

Comment 4 Jeff Nelson 2018-10-03 18:48:37 UTC
Does this change need to be made to the Windows guest agent, the Linux guest agent, or both?

Comment 5 Tomáš Golembiovský 2018-10-04 09:32:23 UTC
Both. And the Linux part is fairly easy.

Comment 6 Tomáš Golembiovský 2018-10-04 11:54:09 UTC
Updated patches:

https://lists.nongnu.org/archive/html/qemu-devel/2018-10/msg00685.html

Comment 7 Jeff Nelson 2018-10-04 16:19:54 UTC
> Both. And the Linux part is fairly easy.

Thanks. This BZ will track the Linux changes. I created BZ 1636178 for the Windows guest agent.

Comment 8 Jeff Nelson 2018-10-04 16:35:38 UTC
Cloned for RHEL-8: BZ 1636185.

Comment 9 Tomáš Golembiovský 2018-10-31 09:01:27 UTC
The relevant patches are on their way upstream:

https://lists.nongnu.org/archive/html/qemu-devel/2018-10/msg06761.html

Comment 10 Tomáš Golembiovský 2018-10-31 17:53:11 UTC
The changes are non-intrusive and fit seamlessly into the present qemu-ga code.
The only new requirement is that qemu-ga is linked with libudev.

The relevant patches are:

  * configure: add test for libudev
  * qga: linux: report disk serial number
  * qga: linux: return disk device in guest-get-fsinfo  


To test the functionality one has to assign a serial number (string) to the disks of the VM, e.g. like this:

qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=fedora-28.img,id=drive0,if=none \
    -device virtio-blk-pci,drive=drive0,serial=MYDISK-1 \
    -drive file=empty.qcow2,id=drive1,if=none \
    -device virtio-blk-pci,drive=drive1,serial=MYDISK-2 \
    -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x6 \
    -chardev socket,id=charchannel1,server,nowait,path=qga.sock \
    -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 


The changes can be observed in the output of "guest-get-fsinfo" command of the
agent.

$ socat readline ./qga.sock
{"execute":"guest-get-fsinfo"}
{"return": [{"name": "vda2", "total-bytes": 952840192, "mountpoint": "/boot", "disk": [{"serial": "MYDISK-1", "bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 0, "slot": 4, "domain": 0, "function": 0}, "dev": "/dev/vda2", "target": 0}], "used-bytes": 122990592, "type": "ext4"}, {"name": "vdb1", "total-bytes": 2005381120, "mountpoint": "/mnt", "disk": [{"serial": "MYDISK-2", "bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 0, "slot": 5, "domain": 0, "function": 0}, "dev": "/dev/vdb1", "target": 0}], "used-bytes": 3145728, "type": "ext2"}, {"name": "vda4", "total-bytes": 4710203392, "mountpoint": "/", "disk": [{"serial": "MYDISK-1", "bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 0, "slot": 4, "domain": 0, "function": 0}, "dev": "/dev/vda4", "target": 0}], "used-bytes": 3075346432, "type": "xfs"}]}

For each "disk" entry in the output there are two new fields "dev" and
"serial".
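For scripts consuming the agent, the new fields give exactly the disk-to-device mapping this RFE asked for. A minimal sketch, assuming a captured "guest-get-fsinfo" reply (a shortened version of the JSON above; the `devices_by_serial` helper name is illustrative, not part of the agent):

```python
import json
from collections import defaultdict

# A shortened guest-get-fsinfo reply, as captured above.
reply = json.loads("""
{"return": [
  {"name": "vda2", "mountpoint": "/boot", "type": "ext4",
   "disk": [{"serial": "MYDISK-1", "bus-type": "virtio", "dev": "/dev/vda2"}]},
  {"name": "vdb1", "mountpoint": "/mnt", "type": "ext2",
   "disk": [{"serial": "MYDISK-2", "bus-type": "virtio", "dev": "/dev/vdb1"}]},
  {"name": "vda4", "mountpoint": "/", "type": "xfs",
   "disk": [{"serial": "MYDISK-1", "bus-type": "virtio", "dev": "/dev/vda4"}]}
]}
""")

def devices_by_serial(fsinfo):
    """Group the new "dev" values by the new "serial" value."""
    mapping = defaultdict(list)
    for fs in fsinfo["return"]:
        for disk in fs["disk"]:
            # "serial" is only present when the host assigned one.
            if "serial" in disk:
                mapping[disk["serial"]].append(disk["dev"])
    return dict(mapping)

print(devices_by_serial(reply))
```

Two partitions of the first disk map back to MYDISK-1, which is the mapping RHV needs on the host side.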

Comment 12 Miroslav Rezanina 2018-12-12 16:03:54 UTC
Fix included in qemu-guest-agent-2.12.0-3.el7

Comment 13 FuXiangChun 2018-12-19 05:48:20 UTC
Try to verify this bug with qemu-guest-agent-2.12.0-3.el7 and qemu-kvm-rhev-2.12.0-20.el7.x86_64.

/usr/libexec/qemu-kvm -M pc -cpu Opteron_G4 -nodefaults -smp 4 -m 4G -name rhel7.5-pc \
-drive file=/home/rhel7.6-954-scsi.qcow2,id=drive0,if=none -device virtio-blk-pci,drive=drive0,serial=MYDISK-1 \
-drive file=empty.qcow2,id=drive1,if=none -device virtio-blk-pci,drive=drive1,serial=MYDISK-2 \
-device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x6 \
-chardev socket,id=charchannel1,server,nowait,path=qga.sock \
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 \
-vga qxl -monitor stdio -boot menu=on -vnc :2 -netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=24:be:05:15:d1:90

# nc -U /home/qga.sock
{"execute":"guest-get-fsinfo"}
{"return": [{"name": "vda1", "mountpoint": "/boot", "disk": [{"bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 0, "slot": 3, "domain": 0, "function": 0}, "dev": "/dev/vda1", "target": 0}], "type": "xfs"}, {"name": "dm-0", "mountpoint": "/", "disk": [{"bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 0, "slot": 3, "domain": 0, "function": 0}, "dev": "/dev/vda2", "target": 0}], "type": "xfs"}]}

The "serial" field was not found; only the new "dev" field was present. It seems this bug isn't fixed.

Comment 14 Marc-Andre Lureau 2018-12-19 12:53:37 UTC
Most likely the device doesn't have ID_SERIAL property.

Can you provide "udevadm info /dev/vda2" output?

thanks

Comment 15 FuXiangChun 2018-12-20 09:59:31 UTC
# udevadm info /dev/vda2
P: /devices/pci0000:00/0000:00:03.0/virtio0/block/vda/vda2
N: vda2
S: disk/by-id/lvm-pv-uuid-PXcvg5-qL0c-WI0b-0tW0-wEvk-obZ1-PoIAs3
S: disk/by-id/virtio-MYDISK-1-part2
S: disk/by-path/pci-0000:00:03.0-part2
S: disk/by-path/virtio-pci-0000:00:03.0-part2
E: DEVLINKS=/dev/disk/by-id/lvm-pv-uuid-PXcvg5-qL0c-WI0b-0tW0-wEvk-obZ1-PoIAs3 /dev/disk/by-id/virtio-MYDISK-1-part2 /dev/disk/by-path/pci-0000:00:03.0-part2 /dev/disk/by-path/virtio-pci-0000:00:03.0-part2
E: DEVNAME=/dev/vda2
E: DEVPATH=/devices/pci0000:00/0000:00:03.0/virtio0/block/vda/vda2
E: DEVTYPE=partition
E: ID_FS_TYPE=LVM2_member
E: ID_FS_USAGE=raid
E: ID_FS_UUID=PXcvg5-qL0c-WI0b-0tW0-wEvk-obZ1-PoIAs3
E: ID_FS_UUID_ENC=PXcvg5-qL0c-WI0b-0tW0-wEvk-obZ1-PoIAs3
E: ID_FS_VERSION=LVM2 001
E: ID_MODEL=LVM PV PXcvg5-qL0c-WI0b-0tW0-wEvk-obZ1-PoIAs3 on /dev/vda2
E: ID_PART_ENTRY_DISK=252:0
E: ID_PART_ENTRY_NUMBER=2
E: ID_PART_ENTRY_OFFSET=2099200
E: ID_PART_ENTRY_SCHEME=dos
E: ID_PART_ENTRY_SIZE=39843840
E: ID_PART_ENTRY_TYPE=0x8e
E: ID_PART_TABLE_TYPE=dos
E: ID_PATH=pci-0000:00:03.0
E: ID_PATH_TAG=pci-0000_00_03_0
E: ID_SERIAL=MYDISK-1
E: MAJOR=252
E: MINOR=2
E: SUBSYSTEM=block
E: SYSTEMD_ALIAS=/dev/block/252:2
E: SYSTEMD_READY=1
E: SYSTEMD_WANTS=lvm2-pvscan@252:2.service
E: TAGS=:systemd:
E: UDISKS_IGNORE=1
E: USEC_INITIALIZED=48614
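The agent obtains this same ID_SERIAL property through libudev. A rough script-level equivalent, assuming one parses the `udevadm info` text output instead of calling libudev (the `udev_properties` helper is illustrative; the sample lines are taken from the output above):

```python
def udev_properties(udevadm_output):
    """Extract the "E: KEY=VALUE" property lines from `udevadm info` output."""
    props = {}
    for line in udevadm_output.splitlines():
        if line.startswith("E: ") and "=" in line:
            key, _, value = line[3:].partition("=")
            props[key] = value
    return props

sample = """\
P: /devices/pci0000:00/0000:00:03.0/virtio0/block/vda/vda2
N: vda2
E: DEVNAME=/dev/vda2
E: DEVTYPE=partition
E: ID_SERIAL=MYDISK-1
E: SUBSYSTEM=block
"""

props = udev_properties(sample)
print(props.get("ID_SERIAL"))  # the value qemu-ga reports as "serial"
```

Since ID_SERIAL=MYDISK-1 is present here, udev itself is not the problem; the failure has to be between qemu-ga and udev.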

Comment 16 Marc-Andre Lureau 2018-12-20 15:17:52 UTC
Weird. On my RHEL 7 VM I can reproduce it, but ID_SERIAL is definitely missing, so that's expected.

I am a bit clueless now.

Comment 17 FuXiangChun 2019-01-02 02:06:17 UTC
(In reply to Marc-Andre Lureau from comment #16)
> weird, on my rhel7 VM, I can reproduce, but ID_SERIAL is definetly missing.
> So that's expected.
> 
> I am a bit clueless now.

According to comment 13 and comment 16, the bug isn't fixed. QE will move the bug to ASSIGNED. Please correct me if I am wrong.

Comment 18 Marc-Andre Lureau 2019-01-02 07:14:55 UTC
As I don't know how to reproduce this, can you provide me access to the failing VM?
thanks

Comment 20 Marc-Andre Lureau 2019-01-02 10:56:25 UTC
# sealert -l 441edee4-6c7b-447b-be45-8b03e932994a
SELinux is preventing /usr/bin/qemu-ga from read access on the file b252:1.

*****  Plugin catchall (100. confidence) suggests   **************************

If you believe that qemu-ga should be allowed read access on the b252:1 file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'qemu-ga' --raw | audit2allow -M my-qemuga
# semodule -i my-qemuga.pp


Additional Information:
Source Context                system_u:system_r:virt_qemu_ga_t:s0
Target Context                system_u:object_r:udev_var_run_t:s0
Target Objects                b252:1 [ file ]
Source                        qemu-ga
Source Path                   /usr/bin/qemu-ga
Port                          <Unknown>
Host                          localhost.localdomain
Source RPM Packages           
Target RPM Packages           
Policy RPM                    selinux-policy-3.13.1-228.el7.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     vm-74-84.lab.eng.pek2.redhat.com
Platform                      Linux vm-74-84.lab.eng.pek2.redhat.com
                              3.10.0-954.el7.x86_64 #1 SMP Mon Sep 24 16:18:49
                              EDT 2018 x86_64 x86_64
Alert Count                   8
First Seen                    2019-01-02 18:18:44 CST
Last Seen                     2019-01-02 18:54:27 CST
Local ID                      441edee4-6c7b-447b-be45-8b03e932994a

Raw Audit Messages
type=AVC msg=audit(1546426467.133:224): avc:  denied  { read } for  pid=2894 comm="qemu-ga" name="b252:1" dev="tmpfs" ino=20899 scontext=system_u:system_r:virt_qemu_ga_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=0


Hash: qemu-ga,virt_qemu_ga_t,udev_var_run_t,file,read

Comment 21 Marc-Andre Lureau 2019-01-02 10:57:26 UTC
reassigning to selinux-policy for help with SELinux issue and fix.

Thanks!

Comment 22 FuXiangChun 2019-01-03 02:19:45 UTC
In my understanding, if this is an SELinux issue, we should file another new bug to track it. The current bug's component should remain "qemu-guest-agent".

Comment 23 FuXiangChun 2019-01-03 08:33:16 UTC
Thanks Marc-Andre, I filed a new bug (bz1663109) to track the SELinux issue. I also tested the scenario with SELinux disabled inside the guest, and I still can't get the right serial field (serial=MYDISK-2). So the current bug status is correct. Maybe the qemu-guest-agent-2.12.0-3.el7 package didn't fix this bug.

Comment 24 FuXiangChun 2019-01-04 02:02:45 UTC
Marc-Andre,

I can't change the component. The current bug's component should be qemu-guest-agent, right? If so, please help change it. I also found that you filed a new bug (bz1663092) for the SELinux issue. You can close bz1663109 as a duplicate of your bz1663092.

Comment 29 FuXiangChun 2019-02-19 10:10:00 UTC
Marc-Andre,

I used the latest qemu-guest-agent (qemu-guest-agent-2.12.0-3.el7) to test this bug, with SELinux disabled inside the guest, but I still get the result below. According to comment 10 this is not the expected result, so I don't think this bug is fixed.


/usr/libexec/qemu-kvm -M pc -cpu Opteron_G4 -nodefaults -smp 4 -m 4G -name rhel7.5-pc -drive file=/home/rhel7.6-954-scsi.qcow2,id=drive0,if=none -device virtio-blk-pci,drive=drive0,serial=MYDISK-1 -drive file=empty.qcow2,id=drive1,if=none -device virtio-blk-pci,drive=drive1,serial=MYDISK-2 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x6 -chardev socket,id=charchannel1,server,nowait,path=qga.sock -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -vga qxl -monitor stdio -boot menu=on -vnc :2 -netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown -device virtio-net-pci,netdev=hostnet0,id=net0,mac=24:be:05:15:d1:90

{"execute":"guest-get-fsinfo"}
{"return": [{"name": "vda1", "mountpoint": "/boot", "disk": [{"serial": "MYDISK-1", "bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 0, "slot": 3, "domain": 0, "function": 0}, "dev": "/dev/vda1", "target": 0}], "type": "xfs"}, {"name": "dm-0", "mountpoint": "/", "disk": [{"serial": "MYDISK-1", "bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 0, "slot": 3, "domain": 0, "function": 0}, "dev": "/dev/vda2", "target": 0}], "type": "xfs"}]}

Comment 30 Tomáš Golembiovský 2019-02-19 13:04:37 UTC
(In reply to FuXiangChun from comment #29)
> {"execute":"guest-get-fsinfo"}
> {"return": [{"name": "vda1", "mountpoint": "/boot", "disk": [{"serial":
> "MYDISK-1", "bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller":
> {"bus": 0, "slot": 3, "domain": 0, "function": 0}, "dev": "/dev/vda1",
> "target": 0}], "type": "xfs"}, {"name": "dm-0", "mountpoint": "/", "disk":
> [{"serial": "MYDISK-1", "bus-type": "virtio", "bus": 0, "unit": 0,
> "pci-controller": {"bus": 0, "slot": 3, "domain": 0, "function": 0}, "dev":
> "/dev/vda2", "target": 0}], "type": "xfs"}]}

This seems correct to me.

The reason you cannot see MYDISK-2 is that it is probably not mounted (it has no filesystem). A limitation of the command is that it lists mounted filesystems, and serial IDs only for the disks backing them.

We may need to address this in the future with a separate RFE request.
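That limitation can be checked mechanically. A small sketch, assuming a shortened version of the comment 29 reply, showing that a serial only surfaces when its disk backs a mounted filesystem:

```python
import json

# Reply from a guest where MYDISK-2 has no mounted filesystem
# (cf. the output in comment 29, shortened).
reply = json.loads("""
{"return": [
  {"name": "vda1", "mountpoint": "/boot", "type": "xfs",
   "disk": [{"serial": "MYDISK-1", "dev": "/dev/vda1"}]},
  {"name": "dm-0", "mountpoint": "/", "type": "xfs",
   "disk": [{"serial": "MYDISK-1", "dev": "/dev/vda2"}]}
]}
""")

# Collect every serial that appears anywhere in the reply.
reported = {d["serial"] for fs in reply["return"] for d in fs["disk"] if "serial" in d}

# Only disks backing a mounted filesystem show up, so the second,
# unmounted disk's serial is absent even though the host assigned it.
print("MYDISK-2" in reported)  # False
```

So the comment 29 output is the documented behavior, not a missing fix.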

Comment 31 FuXiangChun 2019-02-20 01:37:58 UTC
(In reply to Tomáš Golembiovský from comment #30)
> (In reply to FuXiangChun from comment #29)
> > {"execute":"guest-get-fsinfo"}
> > {"return": [{"name": "vda1", "mountpoint": "/boot", "disk": [{"serial":
> > "MYDISK-1", "bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller":
> > {"bus": 0, "slot": 3, "domain": 0, "function": 0}, "dev": "/dev/vda1",
> > "target": 0}], "type": "xfs"}, {"name": "dm-0", "mountpoint": "/", "disk":
> > [{"serial": "MYDISK-1", "bus-type": "virtio", "bus": 0, "unit": 0,
> > "pci-controller": {"bus": 0, "slot": 3, "domain": 0, "function": 0}, "dev":
> > "/dev/vda2", "target": 0}], "type": "xfs"}]}
> 
> This seems correct to me.
> 
> The reason you cannot see MYDISK-2 is that it probably is not mounted (has
> no filesystem). A limitation of the command is that it lists mounted file
> systems and serial IDs only for disks related to them.
> 
> We may need to address this in the future with separate RFE request.

Thanks for your explanation. I can get the expected result when the second disk is mounted, like this:

{"return": [{"name": "vdb", "mountpoint": "/mnt", "disk": [{"serial": "MYDISK-2", "bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 0, "slot": 4, "domain": 0, "function": 0}, "dev": "/dev/vdb", "target": 0}], "type": "xfs"}, {"name": "vda1", "mountpoint": "/boot", "disk": [{"serial": "MYDISK-1", "bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 0, "slot": 3, "domain": 0, "function": 0}, "dev": "/dev/vda1", "target": 0}], "type": "xfs"}, {"name": "dm-0", "mountpoint": "/", "disk": [{"serial": "MYDISK-1", "bus-type": "virtio", "bus": 0, "unit": 0, "pci-controller": {"bus": 0, "slot": 3, "domain": 0, "function": 0}, "dev": "/dev/vda2", "target": 0}], "type": "xfs"}]}

So, moving this bug to VERIFIED.

Comment 33 errata-xmlrpc 2019-08-06 12:51:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2124