Created attachment 1591737 [details]
the script, domain xml & cmdline, origin data, leak graph

Description of problem:
qemu-kvm leaks memory when an rbd disk is repeatedly attached and detached.

Version-Release number of selected component (if applicable):
libvirt-5.5.0-1.module+el8.1.0+3580+d7f6488d.x86_64
qemu-kvm-4.0.0-5.module+el8.1.0+3622+5812d9bf.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Start a q35 vm.
2. Attach and detach the rbd disk for thousands of loops:
# ./main.py -e 'virsh attach-device q35 rbd.xml;sleep 1;virsh detach-device q35 rbd.xml' -p "`pidof qemu-kvm`" -c 3000 -i 0.5
3. After the script finishes, check the memory usage graph of qemu-kvm.

Actual results:
The graph trends upward, which indicates a memory leak.

Expected results:
The memory graph keeps a constant trend.

Additional info:
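The monitoring script (main.py) is attached to the bug. As a rough illustration of what such a monitor does, here is a minimal shell sketch that runs a command in a loop while logging the target process's resident set size from /proc. The argument handling, defaults, and log format (one `cycle rss_kb` pair per line) are assumptions for illustration, not the attached script's actual interface.

```shell
#!/bin/sh
# Hedged sketch of an RSS monitor (assumed behavior, not the attached
# main.py): run a command repeatedly and log the watched pid's VmRSS.

sample_rss() {
    # Print VmRSS in kB from /proc/<pid>/status; 0 if unreadable.
    awk '/^VmRSS:/ {print $2; f=1} END {if (!f) print 0}' \
        "/proc/$1/status" 2>/dev/null || echo 0
}

pid=${1:-$$}          # pid to watch; default: this shell (for demo)
cmd=${2:-"sleep 0"}   # hypothetical stand-in for the virsh attach/detach pair
count=${3:-3}

i=0
: > rss.log
while [ "$i" -lt "$count" ]; do
    sh -c "$cmd"
    echo "$i $(sample_rss "$pid")" >> rss.log
    i=$((i + 1))
done
cat rss.log
```

Invoked against the real guest this would look like `./monitor.sh "$(pidof qemu-kvm)" 'virsh attach-device q35 rbd.xml;sleep 1;virsh detach-device q35 rbd.xml' 3000`, producing a log that can be plotted as the leak graph.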
QEMU has recently been split into sub-components, and as a one-time operation to avoid breaking tools, we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks.
Can we retest this with RHEL-AV 8.3.0? We are trying to reduce our backlog of old issues. Thanks.
Tested with qemu-img-5.1.0-13.module+el8.3.0+8382+afc3bbea.x86_64; this issue is still present. The memory leak is up to 53.5 MB every 1000 cycles.

Version:
qemu-img-5.1.0-13.module+el8.3.0+8382+afc3bbea.x86_64
libvirt-6.0.0-28.module+el8.3.0+7827+5e65edd7.x86_64
kernel-4.18.0-240.el8.x86_64

Steps to Reproduce:
1. Start a q35 vm.
2. Attach and detach the rbd disk for thousands of loops:

<disk type="network" device="disk">
  <driver name="qemu" type="raw"/>
  <source protocol="rbd" name="rbd/rhel830-64-virtio-scsi.raw">
    <host name="10.73.114.12" port="6789"/>
  </source>
  <target dev="vdb" bus="virtio"/>
</disk>

# ./main.py -e 'virsh attach-device q35 rbd.xml;sleep 1;virsh detach-device q35 rbd.xml' -p "`pidof qemu-kvm`" -c 3000 -i 0.5

3. After the script finishes, check the memory usage graph of qemu-kvm.

Actual results:
The graph trends upward, indicating a memory leak of up to 50 MB.

Expected results:
The memory graph keeps a constant trend; no memory leak.
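Leak rates like the 53.5 MB per 1000 cycles quoted above can be estimated from the first and last RSS samples of a run. A hypothetical helper, assuming a two-column `cycle rss_kb` log (this format is an assumption for illustration, not the attached script's actual output):

```shell
#!/bin/sh
# Hypothetical helper: estimate leak rate in MB per 1000 cycles from a
# two-column "cycle rss_kb" log (format assumed, not from this bug).
leak_per_1000() {
    LC_ALL=C awk 'NR==1 {c0=$1; r0=$2}
         {c1=$1; r1=$2}
         END {
             cycles = c1 - c0
             if (cycles <= 0) { print "0"; exit }
             # kB -> MB, normalized to 1000 cycles
             printf "%.1f\n", (r1 - r0) / 1024 * 1000 / cycles
         }' "$1"
}

# Demo with a synthetic log: 51200 kB (50 MB) growth over 1000 cycles.
printf '0 100000\n1000 151200\n' > demo.log
leak_per_1000 demo.log    # prints 50.0
```

Using only the endpoints smooths over per-cycle noise; for a real run the slope of a linear fit over all samples would be more robust.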
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.
Tested with qemu-kvm-5.2.0-14.module+el8.4.0+10425+ad586fa5.x86_64; this issue is still present. The memory leak is up to 50.8 MB after 1500 cycles.

Version:
qemu-kvm-5.2.0-14.module+el8.4.0+10425+ad586fa5.x86_64
libvirt-7.0.0-10.module+el8.4.0+10417+37f6984d.x86_64
kernel-4.18.0-298.el8.x86_64

Steps to Reproduce:
1. Start a q35 vm.
2. Attach and detach the rbd disk for thousands of loops:

<disk type="network" device="disk">
  <driver name="qemu" type="raw"/>
  <source protocol="rbd" name="rbd/rhel840-64-virtio-scsi.raw">
    <host name="10.73.114.12" port="6789"/>
  </source>
  <target dev="vdb" bus="virtio"/>
</disk>

# ./main.py -e 'virsh destroy rhel83_sev;sleep 1;virsh start rhel83_sev;sleep 1;virsh attach-device rhel83_sev rbd.xml;sleep 1;virsh detach-device rhel83_sev rbd.xml' -p "`pidof qemu-kvm`" -c 1500 -i 0.5

I would like to reopen this bug, as libvirt can still reproduce the issue.
(In reply to zixchen from comment #10)
> I would like to reopen this bug, as libvirt can still reproduce the issue.

Agreed, I'll take a look in the coming weeks.
Tested with qemu 6.0; the memory leak is reduced to around 10 MB. I will test attaching/detaching a local disk image as a comparison.

Version:
qemu-kvm-6.0.0-19.module+el8.5.0+11385+6e7d542e.x86_64
kernel-4.18.0-314.el8.x86_64
libvirt-7.4.0-1.module+el8.5.0+11218+83343022.x86_64

Steps to Reproduce:
1. Start a q35 vm.
2. Attach and detach the rbd disk for thousands of loops:

<disk type="network" device="disk">
  <driver name="qemu" type="raw"/>
  <source protocol="rbd" name="rbd/rhel840-64-virtio-scsi.raw">
    <host name="$ip" port="6789"/>
  </source>
  <target dev="vdb" bus="virtio"/>
</disk>

# ./main.py -e 'virsh attach-device rhel85 file.xml;virsh detach-device rhel85 file.xml;sleep 2' -p "`pidof qemu-kvm`" -c 1000 -i 0.5

Actual result:
From the memory leak graph, there is around a 10 MB leak in the first part of the test, but the memory usage is steady in the last part. Please see attachment qemu6.0_rbd_leak.svg.

Expected result:
The memory graph keeps a constant trend.
Created attachment 1792961 [details]
rbd leak graph with qemu6.0
I did more tests on the memory leak with rbd attach/detach; the issue has improved a lot. From the graph (attachment qemu-kvm-ceph-qemu6.0.svg), repeatedly attaching/detaching the rbd disk does not show a clear memory leak, but a sudden jump of 10 MB is observed, which could be a separate issue. In addition, I tested 1000 attach/detach cycles with a local disk image: that graph is very steady, but the same sudden jump is observed. Please check attachment qemu-kvm-local-image.svg.

Therefore, I am OK with closing this bug, and we can track the sudden-change issue in another bug. Thanks.

Version:
qemu-kvm-6.0.0-19.module+el8.5.0+11385+6e7d542e.x86_64
kernel-4.18.0-314.el8.x86_64
libvirt-7.4.0-1.module+el8.5.0+11218+83343022.x86_64

Steps:
1. Prepare the local image xml:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/home/data.raw'/>
  <target dev='vdb' bus='scsi'/>
</disk>

2. Attach/detach 1000 times:
# ./main.py -e 'virsh attach-device rhel85 file.xml;virsh detach-device rhel85 file.xml;sleep 2' -p "`pidof qemu-kvm`" -c 1000 -i 0.5

Actual result:
Overall, the memory graph is steady, but a sudden jump is observed.

Expected result:
There should be no sudden jump in memory usage.
Created attachment 1793322 [details]
qemu-kvm-local-image.svg
Created attachment 1793323 [details]
qemu-kvm-ceph-qemu6.0.svg
Created another bug to track the sudden memory change issue:
Bug 1975640 - Memory sudden change when attaching/detaching disk image repeatedly
As the issue has already been fixed in qemu-kvm-6.0.0-19.module+el8.5.0+11385+6e7d542e.x86_64, closing as fixed in the current release.