Bug 1731078 - Memory leak when attach/detach rbd disk
Summary: Memory leak when attach/detach rbd disk
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.5
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: rc
Assignee: Stefano Garzarella
QA Contact: zixchen
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2019-07-18 09:48 UTC by Han Han
Modified: 2021-06-24 07:04 UTC
CC List: 11 users

Fixed In Version: qemu-6.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-06-24 07:04:12 UTC
Type: Bug
Target Upstream Version:
Embargoed:
Flags: pm-rhel: mirror+


Attachments
the script, domain xml & cmdline, original data, leak graph (303.15 KB, application/gzip)
2019-07-18 09:48 UTC, Han Han
rbd leak graph with qemu6.0 (264.55 KB, image/svg+xml)
2021-06-22 10:05 UTC, zixchen
qemu-kvm-local-image.svg (126.70 KB, image/svg+xml)
2021-06-23 05:49 UTC, zixchen
qemu-kvm-ceph-qemu6.0.svg (242.56 KB, image/svg+xml)
2021-06-23 05:50 UTC, zixchen

Description Han Han 2019-07-18 09:48:31 UTC
Created attachment 1591737 [details]
the script, domain xml & cmdline, original data, leak graph

Description of problem:
As described in the summary: repeatedly attaching and detaching an rbd disk leaks memory in the qemu-kvm process.

Version-Release number of selected component (if applicable):
libvirt-5.5.0-1.module+el8.1.0+3580+d7f6488d.x86_64
qemu-kvm-4.0.0-5.module+el8.1.0+3622+5812d9bf.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Start a q35 VM
2. Attach and detach the rbd disk thousands of times in a loop (a sketch of such a sampling loop follows these steps):
# ./main.py -e 'virsh attach-device q35 rbd.xml;sleep 1;virsh detach-device q35 rbd.xml' -p "`pidof qemu-kvm`" -c 3000 -i 0.5

3. After the script finishes, check the memory usage graph of qemu-kvm
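A rough illustration of what such a sampling loop could look like (the attached main.py is not reproduced in this bug, so the names, options, and output format below are assumptions, not the real script): run the given shell command for a number of cycles and record the qemu-kvm RSS from /proc/<pid>/status after each one.

#!/usr/bin/env python3
# Hypothetical stand-in for the attached main.py (not shown in this bug):
# run a shell command N times and log the target process's RSS after each cycle.
import subprocess
import sys
import time

def rss_kb(pid: int) -> int:
    """Return the VmRSS value (kB) of a process from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

def main() -> None:
    cmd, pid = sys.argv[1], int(sys.argv[2])
    cycles, interval = int(sys.argv[3]), float(sys.argv[4])
    for cycle in range(cycles):
        subprocess.run(cmd, shell=True)          # e.g. the virsh attach/detach pair
        time.sleep(interval)                     # let memory usage settle
        print(cycle, rss_kb(pid), flush=True)    # one "cycle rss_kb" sample per line

if __name__ == "__main__":
    main()

A rising RSS column over thousands of cycles is what the attached leak graphs visualize.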

Actual results:
The trend of the graph is rising, which indicates a memory leak.

Expected results:
The memory usage graph keeps a constant trend.

Additional info:

Comment 4 Ademar Reis 2020-02-05 23:01:03 UTC
QEMU has recently been split into sub-components, and as a one-time operation to avoid breakage of tools, we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks.

Comment 6 John Ferlan 2020-10-02 20:55:13 UTC
Can we retest this with RHEL-AV 8.3.0? We are trying to reduce our backlog of old issues. Thanks.

Comment 7 zixchen 2020-10-13 14:28:36 UTC
Tested with qemu-img-5.1.0-13.module+el8.3.0+8382+afc3bbea.x86_64; this issue is still present. The memory leak is up to 53.5 MB every 1000 cycles.

Version:
qemu-img-5.1.0-13.module+el8.3.0+8382+afc3bbea.x86_64
libvirt-6.0.0-28.module+el8.3.0+7827+5e65edd7.x86_64
kernel-4.18.0-240.el8.x86_64

Steps to Reproduce:
1. Start a q35 VM
2. Attach and detach the rbd disk thousands of times in a loop (a libvirt-python equivalent of this cycle is sketched after these steps):
<disk type="network" device="disk">
  <driver name="qemu" type="raw"/>
  <source protocol="rbd" name="rbd/rhel830-64-virtio-scsi.raw">
    <host name="10.73.114.12" port="6789"/>
  </source>
  <target dev="vdb" bus="virtio"/>
</disk>
# ./main.py -e 'virsh attach-device q35 rbd.xml;sleep 1;virsh detach-device q35 rbd.xml' -p "`pidof qemu-kvm`" -c 3000 -i 0.5

3. After the script finishes, check the memory usage graph of qemu-kvm
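For reference, the same attach/detach cycle can be driven through the libvirt Python bindings instead of virsh. A minimal sketch, reusing rbd.xml, the q35 domain, and the 3000-cycle count from the reproducer (illustrative only, not the attached script):

# Minimal sketch: the attach/detach cycle from step 2 via the libvirt Python
# bindings (libvirt-python) instead of virsh. Illustrative only.
import time
import libvirt

with open("rbd.xml") as f:    # the <disk> definition shown above
    disk_xml = f.read()

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("q35")
for _ in range(3000):         # matches -c 3000 in the reproducer
    dom.attachDevice(disk_xml)
    time.sleep(1)
    dom.detachDevice(disk_xml)
    time.sleep(1)
conn.close()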

Actual results:
The trend of the graph is rising, which indicates a memory leak of up to 50 MB.

Expected results:
The memory usage graph keeps a constant trend, with no memory leak.

Comment 9 RHEL Program Management 2021-03-15 07:37:38 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 10 zixchen 2021-03-25 06:41:12 UTC
Tested with qemu-kvm-5.2.0-14.module+el8.4.0+10425+ad586fa5.x86_64; this issue is still present. The memory leak is up to 50.8 MB after 1500 cycles.

Version:
qemu-kvm-5.2.0-14.module+el8.4.0+10425+ad586fa5.x86_64
libvirt-7.0.0-10.module+el8.4.0+10417+37f6984d.x86_64
kernel-4.18.0-298.el8.x86_64

Steps to Reproduce:
1. Start a q35 VM
2. Attach and detach the rbd disk thousands of times in a loop:
<disk type="network" device="disk">
  <driver name="qemu" type="raw"/>
  <source protocol="rbd" name="rbd/rhel840-64-virtio-scsi.raw">
    <host name="10.73.114.12" port="6789"/>
  </source>
  <target dev="vdb" bus="virtio"/>
</disk>
# ./main.py -e 'virsh destroy rhel83_sev;sleep 1;virsh start rhel83_sev;sleep 1;virsh attach-device rhel83_sev rbd.xml;sleep 1;virsh detach-device rhel83_sev rbd.xml' -p "`pidof qemu-kvm`" -c 1500 -i 0.5


I would like to reopen this bug, as libvirt can still reproduce the issue.

Comment 11 Stefano Garzarella 2021-03-25 08:33:23 UTC
(In reply to zixchen from comment #10)
> 
> I would like to reopen this bug, as libvirt can still reproduce the issue.

Agreed, I'll take a look in the coming weeks.

Comment 19 zixchen 2021-06-22 10:00:43 UTC
Tested with qemu 6.0; the memory leak is reduced to around 10 MB. I will test the memory leak when attaching/detaching a local disk image as a comparison.

Version:
qemu-kvm-6.0.0-19.module+el8.5.0+11385+6e7d542e.x86_64
kernel-4.18.0-314.el8.x86_64
libvirt-7.4.0-1.module+el8.5.0+11218+83343022.x86_64


Steps to Reproduce:
1. Start a q35 VM
2. Attach and detach the rbd disk thousands of times in a loop:
<disk type="network" device="disk">
  <driver name="qemu" type="raw"/>
  <source protocol="rbd" name="rbd/rhel840-64-virtio-scsi.raw">
    <host name="$ip" port="6789"/>
  </source>
  <target dev="vdb" bus="virtio"/>
</disk>
# ./main.py -e 'virsh attach-device rhel85 file.xml;virsh detach-device rhel85 file.xml;sleep 2' -p "`pidof qemu-kvm`" -c 1000 -i 0.5

Actual result:
From the memory leak graph, there is around a 10 MB leak in the first part of the test, but memory usage is steady in the last part of the graph. Please see attachment qemu6.0_rbd_leak.svg (a sketch for plotting such a graph follows below).

Expected result:
The memory usage graph keeps a constant trend.
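For anyone reproducing this, a minimal sketch of turning the sampled data into a graph like the attached SVGs. The input format (one "cycle rss_kb" pair per line) and the file names are assumptions, not taken from the attached script:

# Minimal sketch: plot sampled RSS data as an SVG memory graph.
# Input format ("cycle rss_kb" per line) and file names are assumed.
import matplotlib.pyplot as plt

cycles, rss_mb = [], []
with open("rbd_rss.dat") as f:          # hypothetical sampling output
    for line in f:
        cycle, kb = line.split()
        cycles.append(int(cycle))
        rss_mb.append(int(kb) / 1024)   # kB -> MB

plt.plot(cycles, rss_mb)
plt.xlabel("attach/detach cycle")
plt.ylabel("qemu-kvm RSS (MB)")
plt.savefig("rbd_leak_graph.svg")       # flat line: no leak; rising trend: leak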

Comment 20 zixchen 2021-06-22 10:05:02 UTC
Created attachment 1792961 [details]
rbd leak graph with qemu6.0

Comment 21 zixchen 2021-06-23 05:48:03 UTC
I did more tests on the memory leak of rbd attach/detach, and the issue has improved a lot. From the graph (attachment qemu-kvm-ceph-qemu6.0.svg), repeatedly attaching/detaching the rbd disk no longer shows a clear memory leak, but a sudden change of 10 MB of memory is observed, which could be another issue. Besides, I tested 1000 attach/detach iterations with a local disk image; that graph is very steady, but the same sudden change is observed (see attachment qemu-kvm-local-image.svg). Therefore, I am OK to close this bug, and we can track the sudden-change issue in another bug. Thanks.

Version:
qemu-kvm-6.0.0-19.module+el8.5.0+11385+6e7d542e.x86_64
kernel-4.18.0-314.el8.x86_64
libvirt-7.4.0-1.module+el8.5.0+11218+83343022.x86_64

Steps:
1. Prepare the local image XML:
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/home/data.raw'/>
  <target dev='vdb' bus='scsi'/>
</disk>
2. Attach/detach 1000 times:
# ./main.py -e 'virsh attach-device rhel85 file.xml;virsh detach-device rhel85 file.xml;sleep 2' -p "`pidof qemu-kvm`" -c 1000 -i 0.5

Actual result:
Overall, the memory graph is steady, but a sudden change is observed.

Expected result:
The memory graph should stay steady, with no sudden change (a sketch for flagging such jumps follows below).
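Since the remaining symptom is a one-off jump rather than a steady rise, a minimal sketch for flagging it: diff consecutive RSS samples and report any single step above a threshold. The input file name and the 5 MB threshold are illustrative assumptions:

# Minimal sketch: flag a "sudden change" by diffing consecutive RSS samples.
# Input file and the 5 MB threshold are illustrative assumptions.
THRESHOLD_MB = 5

samples = []
with open("local_rss.dat") as f:        # hypothetical sampling output
    for line in f:
        cycle, kb = line.split()
        samples.append((int(cycle), int(kb) / 1024))

for (c0, r0), (c1, r1) in zip(samples, samples[1:]):
    if abs(r1 - r0) >= THRESHOLD_MB:
        print(f"sudden change of {r1 - r0:+.1f} MB between cycles {c0} and {c1}")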

Comment 22 zixchen 2021-06-23 05:49:10 UTC
Created attachment 1793322 [details]
qemu-kvm-local-image.svg

Comment 23 zixchen 2021-06-23 05:50:01 UTC
Created attachment 1793323 [details]
qemu-kvm-ceph-qemu6.0.svg

Comment 25 zixchen 2021-06-24 07:00:31 UTC
Created another bug to track the sudden memory change issue: Bug 1975640 - Memory sudden change when attaching/detaching disk image repeatedly

Comment 26 zixchen 2021-06-24 07:04:12 UTC
As the issue has already been fixed in qemu-kvm-6.0.0-19.module+el8.5.0+11385+6e7d542e.x86_64, closing it as CURRENTRELEASE.

