Bug 2040272
| Summary: | [RFE] Allow passing file descriptors to qemu for disks on startup and for hotplug |
|---|---|
| Product: | Red Hat Enterprise Linux 9 |
| Component: | libvirt |
| libvirt sub component: | Storage |
| Reporter: | Roman Mohr <rmohr> |
| Assignee: | Peter Krempa <pkrempa> |
| QA Contact: | Han Han <hhan> |
| Status: | CLOSED ERRATA |
| Severity: | high |
| Priority: | medium |
| CC: | chhu, chwen, dzheng, jdenemar, jsuchane, kwolf, lcheng, lmen, pkrempa, sgott, vgoyal, virt-maint, xuzhang |
| Version: | unspecified |
| Keywords: | AutomationTriaged, FutureFeature, Triaged |
| Flags: | pm-rhel: mirror+ |
| Target Milestone: | beta |
| Target Release: | 9.1 |
| Target Upstream Version: | 9.0.0 |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Fixed In Version: | libvirt-9.0.0-3.el9 |
| Type: | Feature Request |
| Last Closed: | 2023-05-09 07:26:11 UTC |
| Bug Blocks: | 2040235, 2040625 |
Description
Roman Mohr
2022-01-13 10:47:19 UTC
---

@rmohr , what is the requested target for this feature to be available for CNV? In other words, by what RHEL 9.x release do you expect to be able to make use of it? Thanks.

---

(In reply to Klaus Heinrich Kiwi from comment #1)
> @rmohr , what is the requested target for this feature to be
> available for CNV? in other words, by what RHEL9.x release you expect to be
> able to make use of it? Thanks

From my perspective: the sooner we have it, the faster we can start the integration work, since we can only start changing CNV once we have this. This is also not a trivial effort. Regarding the exact target release, Stu, can you provide a target?

---

Peter, what are the perspectives for having this as part of the next libvirt, and included in RHEL 9.1?

---

The next libvirt upstream release (8.1.0) is going into freeze this week, so that's not possible. I expect either the release after that or one more, but both are in scope for rhel-9.1.

---

The feature was added upstream by the following commits:

```
d7e9093502 qemu: Fix handling of passed FDs in remoteDispatchDomainFdAssociate
fe6077585e qemuxml2*test: Enable testing of disks with 'fdgroup'
894fe89484 qemu: Enable support for FD passed disk sources
a575aa280d qemu: cgroup: Don't setup cgroups for FD-passed images
dc20b1d774 qemu: driver: Don't allow certain operations with FD-passed disks
7ce63d5a07 qemu: Prepare storage backing chain traversal code for FD passed images
6f3d13bfbd security: selinux: Handle security labelling of FD-passed images
7fceb5e168 secuirity: DAC: Don't relabel FD-passed virStorageSource images
74f3f4b93c qemu: block: Add support for passing FDs of disk images
81cbfc2fc3 qemu: Prepare data for FD-passed disk image sources
47b922f3f8 conf: storage_source: Introduce virStorageSourceIsFD
4c9ce062d3 qemu: domain: Introduce qemuDomainStartupCleanup
98bd201678 conf: Add 'fdgroup' attribute for 'file' disks
0fcdb512d4 qemuxml2argvtest: Add support for populating 'fds' in private data
f762f87534 qemu: Implement qemuDomainFDAssociate
e2670a63d2 conf: storage_source: Introduce type for storing FDs associated for storage
3ea4170551 virsh: Introduce 'dom-fd-associate' for invoking virDomainFDAssociate()
abd9025c2f lib: Introduce virDomainFDAssociate API
608c4b249e qemuxml2xmltest: Remove 'disk-backing-chain' case and output files
e2b36febdf qemuxml2argvtest: Add seclabels in <backingStore> to disk-backing-chains-(no)index
75a7a3b597 virStorageSourceIsSameLocation: Use switch statement for individual storage types
08406591ce remote_driver: Refactor few functions as example of auto-locking
8d7e3a723d remote_driver: Return 'virLockGuard' from 'remoteDriverLock'
1be393d9ad gendispatch: Add 'G_GNUC_WARN_UNUSED_RESULT' to output of 'aclheader'
aa47051bf4 virclosecallbacks: Remove old close callbacks code
38607ea891 qemuMigrationSrcBeginResumePhase: Remove unused 'driver' argument
8187c0ed94 qemuMigrationSrcIsAllowed: Remove unused 'driver' argument
aa8e187fa9 qemu: Use new connection close callbacks API
ba6f53d778 bhyve: Use new connection close callbacks API
e74bb402e4 lxc: Use new connection close callbacks API
cb195c19b7 virclosecallbacks: Add new close callbacks APIs
2cb13113c2 conf: domain: Add helper infrastructure for new connection close callbacks
e88593ba39 conf: virdomainobjlist: Remove return value from virDomainObjListCollect
cd3599c876 conf: virdomainobjlist: Introduce 'virDomainObjListCollectAll'
f52bc2d54a conf: virdomainobjlist: Convert header to contemporary style
0cd318ce16 datatypes: Clean up whitespace in definition of struct _virConnect
3de56902d3 datatypes: Simplify error path of 'virGetDomain'
```

v9.0.0-rc1-4-gd7e9093502

---

Run basic tests on:

```
libvirt-9.0.0-1.el9.x86_64
python3-libvirt-9.0.0-1.el9.x86_64
qemu-kvm-7.2.0-5.el9.x86_64
```
1. Associate the FD of disk to a domain
2. Start the domain with that disk
3. Detach the disk
```python
#!/usr/bin/python3
from os import path
import subprocess
import time

import libvirt

DOM = 'rhel-ovmf-9.2'
FILE = '/tmp/vdb'
FDGROUP = 'test'
DISK_XML_TEMPL = '''<disk type="file" device="disk">
  <driver name="qemu" type="raw"/>
  <source file="{0}" fdgroup="{2}"/>
  <backingStore/>
  <target dev="{1}" bus="virtio"/>
</disk>'''

# Create the backing file for the new disk.
subprocess.run("qemu-img create {0} 100M".format(FILE).split())

with libvirt.open() as conn:
    domain = conn.lookupByName(DOM)
    with open(FILE, "w+b") as f_obj:
        # Hand the open FD to libvirt under the given FD group name.
        fds = [f_obj.fileno()]
        domain.FDAssociate(FDGROUP, fds,
                           libvirt.VIR_DOMAIN_FD_ASSOCIATE_SECLABEL_RESTORE |
                           libvirt.VIR_DOMAIN_FD_ASSOCIATE_SECLABEL_WRITABLE)
        # Add the FD-backed disk to the persistent config, then boot with it.
        disk_xml = DISK_XML_TEMPL.format(FILE, path.basename(FILE), FDGROUP)
        domain.attachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
        domain.create()
        time.sleep(50)
        # Detach from the live domain, then from the persistent config.
        domain.detachDevice(disk_xml)
        domain.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
```
The vdb disk XML of the running VM is:
```xml
<disk type="file" device="disk">
  <driver name="qemu" type="raw"/>
  <source file="/tmp/vdb" fdgroup="test" index="1"/>
  <backingStore/>
  <target dev="vdb" bus="virtio"/>
  <alias name="virtio-disk1"/>
  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
</disk>
```
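As a quick programmatic check (a minimal sketch, not part of the original test), the live XML above can be parsed to confirm that libvirt recorded the `fdgroup` on the `<source>` element:

```python
import xml.etree.ElementTree as ET

# Live disk XML as dumped above (alias/address omitted for brevity).
live_disk_xml = """<disk type="file" device="disk">
  <driver name="qemu" type="raw"/>
  <source file="/tmp/vdb" fdgroup="test" index="1"/>
  <backingStore/>
  <target dev="vdb" bus="virtio"/>
</disk>"""

src = ET.fromstring(live_disk_xml).find("source")
assert src.get("fdgroup") == "test"   # FD group name survives into the live XML
assert src.get("file") == "/tmp/vdb"  # the path is still recorded for reference
```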
Works as expected.
---

Created attachment 1941037: the log, XML and script for hot-plug.
Run the attached script (run-fd_associate.py), which does the following:
1. Open the disk file
2. Create a VM
3. Assign the fd to VM by FDAssociate
4. Attach the disk to VM
The results:
```
Formatting '/tmp/vdb', fmt=raw size=104857600
libvirt: QEMU Driver error : internal error: unable to execute QEMU command 'blockdev-add': Could not dup FD for /dev/fdset/1 flags 2: No such file or directory
Traceback (most recent call last):
  File "/root/./run-fd_associate.py", line 33, in <module>
    domain.attachDeviceFlags(disk_xml, 0)
  File "/usr/lib64/python3.9/site-packages/libvirt.py", line 716, in attachDeviceFlags
    raise libvirtError('virDomainAttachDeviceFlags() failed')
libvirt.libvirtError: internal error: unable to execute QEMU command 'blockdev-add': Could not dup FD for /dev/fdset/1 flags 2: No such file or directory
```
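An aside on reading the error (not from the bug report itself): the "flags 2" QEMU prints is the open(2) access mode it looked for in the fd set, and on Linux that value is `O_RDWR`; no such FD was there because libvirt never transferred it on the hotplug path:

```python
import os

# "Could not dup FD for /dev/fdset/1 flags 2": QEMU searched fdset 1 for an
# FD opened with access mode 2, i.e. O_RDWR (O_RDONLY=0, O_WRONLY=1, O_RDWR=2).
assert os.O_RDWR == 2
```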
---

Peter, please check the results of comment 11.

Version:

```
libvirt-9.0.0-2.el9.x86_64
python3-libvirt-9.0.0-1.el9.x86_64
qemu-kvm-7.2.0-5.el9.x86_64
```

---

Oops, I must have misplaced the hunk which actually passes the FDs on hotplug. Do you want to file another BZ to track this part?

---

(In reply to Peter Krempa from comment #13)
> Oops, I must have misplaced the hunk which actually passes the FDs on
> hotplug. Do you want to file another BZ to track this part?

Not needed. Please just fix it here and update the "Fixed In Version".

---

Fixes for hotplug pushed upstream:

```
3b8d669d55 qemu: block: Properly handle FD-passed disk hot-(un-)plug
f730b1e4f2 qemu: domain: Store fdset ID for disks passed to qemu via FD
5598c10c64 qemu: fd: Add helpers allowing storing FD set data in status XML
3b7b201b95 qemuFDPassTransferCommand: Mark that FD was passed
65f14232fb qemu: command: Handle FD passing commandline via qemuBuildBlockStorageSourceAttachDataCommandline
531adf3274 qemuStorageSourcePrivateDataFormat: Rename 'tmp' to 'objectsChildBuf'
51dc38fe31 qemu_fd: Remove declaration for 'qemuFDPassNewDirect'
```

---

Hot-plug test passes as in comment 11 on libvirt-9.0.0-3.el9.x86_64, qemu-kvm-7.2.0-6.el9.x86_64.

---

Hi Peter, is there any way to test hot-plug/VM create/migrate with dom-fd-associate via virsh?
I tested the following on libvirt-9.0.0-4.el9.x86_64 and qemu-kvm-7.2.0-8.el9.x86_64, but it does not work for hot-plug:
```
➜ ~ cat /tmp/vdb.xml
<disk type="file" device="disk">
  <driver name="qemu" type="raw"/>
  <source file="/tmp/vdb" fdgroup="test"/>
  <backingStore/>
  <target dev="vdb" bus="virtio"/>
</disk>
➜ ~ virsh list
 Id   Name       State
--------------------------
 2    rhel-9.2   running

➜ ~ exec 3<> /tmp/vdb
➜ ~ virsh -k0 -K0 dom-fd-associate rhel-9.2 test 3
➜ ~ virsh -k0 -K0 attach-device rhel-9.2 /tmp/vdb.xml
error: Failed to attach device from /tmp/vdb.xml
error: invalid argument: file descriptor group 'test' was not associated with the domain
➜ ~ lsof /tmp/vdb
COMMAND  PID USER   FD   TYPE DEVICE  SIZE/OFF     NODE NAME
zsh     4078 root    3u   REG  252,4 104857600 25176394 /tmp/vdb
```
I checked the description of virDomainFDAssociate (https://gitlab.com/libvirt/libvirt/-/blob/master/src/libvirt-domain.c#L13985), which says:

> "The FDs are associated as long as the connection used to associated exists and are disposed of afterwards."

So for virsh, is there any way to keep the connection of dom-fd-associate open and use it afterwards?
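The lifetime rule quoted above follows ordinary FD-ownership semantics: while the association lives, the daemon holds its own duplicate of each passed FD, so the client's copy can go away without invalidating it. A stand-alone illustration with plain pipes (no libvirt involved; the analogy to the daemon's internal bookkeeping is an assumption for illustration only):

```python
import os

# A pipe read-end stands in for a disk FD handed to the daemon.
r, w = os.pipe()
os.write(w, b"data")

# The receiving side duplicates the FD for its own use...
daemon_fd = os.dup(r)

# ...so the sender's copy can be closed without invalidating the duplicate.
os.close(r)

result = os.read(daemon_fd, 4)  # still readable through the duplicate
os.close(daemon_fd)
os.close(w)
```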
---

(In reply to Han Han from comment #21)
> Hi Peter, are there anyways to test hot-plug/vm create/migrate with
> dom-fd-associate by virsh?
[...]
> "The FDs are associated as long as the connection used to associated exists
> and are disposed of afterwards."
>
> So for virsh, is there any way to keep the connection of dom-fd-associate
> and use it afterwards?

With virsh you have to use the interactive mode or batch multiple commands at once, e.g.:

```
# virsh "dom-fd-associate --domain cd --name testcd --pass-fds 4 ; start cd" 4<>/tmp/ble
```

For migration you need to remember that the FDs need to be associated with the destination daemon, but virsh initiates the migration from the source side, so you'll need to have another instance of virsh.

---

For disk attaching and creating a VM with fdgroup, tested on libvirt-9.0.0-5.el9.x86_64 and qemu-kvm-7.2.0-8.el9.x86_64: PASS.

For migration, save&restore, managedsave, and VM start, test as follows:

1. Migration

1.0 Open the disk file as an FD:

```
(src)➜ ~ exec 3<> /mnt/vdb
```

1.1 On the src host, start a VM with disks on shared NFS storage, with fdgroup:

```
(src)➜ ~ virsh "dom-fd-associate rhel-9.2 test 3 --seclabel-restore --seclabel-writable; start rhel"
Domain 'rhel' started
```

1.2 Define the same VM on the dst host, associate the FD with the just-defined VM, then keep the connection open:

```
(src)➜ ~ virsh dumpxml rhel > /mnt/rhel.xml
(dst)➜ ~ exec 3<> /mnt/vdb
(dst)➜ ~ virsh define /mnt/rhel.xml
Domain 'rhel' defined from /mnt/rhel.xml
(dst)➜ ~ virsh
virsh # dom-fd-associate rhel test 3
```

1.3 Migrate the VM to the dst host:

```
(src)➜ ~ virsh migrate rhel qemu+ssh://vm-10-0-79-60.hosted.upshift.rdu2.redhat.com/system --live --verbose --p2p
Migration: [100 %]
```

2. Keep the connection of dom-fd-associate.
Test managedsave & start:

```
(dst)➜ ~ virsh managedsave rhel
Domain 'rhel' state saved by libvirt
(dst)➜ ~ virsh start rhel
Domain 'rhel' started
➜ ~ virsh dumpxml rhel --xpath //disk
<disk type="file" device="disk">
  <driver name="qemu" type="qcow2"/>
  <source file="/mnt/rhel.qcow2" index="2"/>
  <backingStore/>
  <target dev="vda" bus="virtio"/>
  <alias name="virtio-disk0"/>
  <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
</disk>
<disk type="file" device="disk">
  <driver name="qemu" type="raw"/>
  <source file="/mnt/vdb" fdgroup="test" index="1"/>
  <backingStore/>
  <target dev="vdb" bus="virtio"/>
  <alias name="virtio-disk1"/>
  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
</disk>
```

3. Keep the fd-associate connection and test save & restore:

```
➜ ~ virsh save rhel /tmp/rhel
Domain 'rhel' saved to /tmp/rhe
➜ ~ virsh restore /tmp/rhel
Domain restored from /tmp/rhel
➜ ~ virsh dumpxml rhel --xpath //disk
<disk type="file" device="disk">
  <driver name="qemu" type="qcow2"/>
  <source file="/mnt/rhel.qcow2" index="2"/>
  <backingStore/>
  <target dev="vda" bus="virtio"/>
  <alias name="virtio-disk0"/>
  <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
</disk>
<disk type="file" device="disk">
  <driver name="qemu" type="raw"/>
  <source file="/mnt/vdb" fdgroup="test" index="1"/>
  <backingStore/>
  <target dev="vdb" bus="virtio"/>
  <alias name="virtio-disk1"/>
  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
</disk>
```

---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (libvirt bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:2171