Description of problem:
While testing glusterfs performance with fuse bypass (qemu accessing the image directly via a gluster:// URI), there is a memory leak on the host. The same workload works fine with a fuse mount. The issue occurs with both the virtio-blk and virtio-scsi drivers.
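For reference, a minimal sketch of the two access modes being compared; the /mnt/gv1 mount point below is illustrative only, not taken from this setup:
- fuse bypass (libgfapi): qemu opens the image directly over the network, e.g.
  -drive file=gluster://192.168.0.17:24007/gv1/storage2.raw,format=raw,cache=none
- fuse mount: the volume is first mounted on the host and qemu opens an ordinary file path, e.g.
  # mount -t glusterfs 192.168.0.17:/gv1 /mnt/gv1
  -drive file=/mnt/gv1/storage2.raw,format=raw,cache=none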
Version-Release number of selected component (if applicable):
qemu-kvm-0.12.1.2-2.401.el6.x86_64
kernel-2.6.32-418.el6.x86_64
glusterfs-3.4.0.24rhs-1.el6rhs.x86_64
How reproducible:
1/4
Steps to Reproduce:
1. Testbed:
- Hardware: 1 client (4 CPUs * 8 GB); 2 servers (8 CPUs * 16 GB);
private network is 1-Gbit
- Setup: 1 Gluster volume made up of 1 brick (on SSD) from each server;
single replication enabled (see the volume-setup sketch after this list)
- Client KVM image: 2 VCPUs * 4 GB RAM; cache=none; aio=threads
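The gluster commands used to build the volume are not recorded in this report; a plausible setup on the servers, assuming volume name gv1, a second server at 192.168.0.18, and brick path /bricks/ssd1, would be:
# gluster volume create gv1 replica 2 192.168.0.17:/bricks/ssd1 192.168.0.18:/bricks/ssd1
# gluster volume set gv1 server.allow-insecure on
# gluster volume start gv1
(server.allow-insecure lets qemu/libgfapi connect from a non-privileged port; whether it was set in this testbed is not known from the report.)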
2. Create the data image on the gluster volume (fuse bypass) from the client:
# /usr/bin/qemu-img create -f raw gluster://192.168.0.17:24007/gv1/storage2.raw 40G
3. Boot guest with data disk.
# /usr/libexec/qemu-kvm \
-drive file='/home/RHEL-Server-6.5-64.raw',if=none,id=virtio-scsi0-id0,media=disk,cache=none,snapshot=off,format=raw,aio=threads \
-device scsi-hd,drive=virtio-scsi0-id0 \
-drive file='gluster://192.168.0.17:24007/gv1/storage2.raw',if=none,id=virtio-scsi2-id1,media=disk,cache=none,snapshot=off,format=raw,aio=threads \
-m 4096 \
-smp 2,maxcpus=2,cores=1,threads=1,sockets=2 \
...
4. In the guest, format and mount the data disk, then run fio (the %s fields are placeholders substituted by the test script):
# i=`/bin/ls /dev/[vs]db` && mkfs.ext4 $i -F > /dev/null; partprobe; umount /mnt; mount $i /mnt && echo 3 > /proc/sys/vm/drop_caches && sleep 3
# fio --rw=%s --bs=%s --iodepth=%s --runtime=1m --direct=1 --filename=/mnt/%s --name=job1 --ioengine=libaio --thread --group_reporting --numjobs=16 --size=512MB --time_based --ioscheduler=deadline
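For illustration, one concrete invocation with the %s placeholders filled in; the values below are assumptions, not the exact parameters of the failing run:
# fio --rw=randwrite --bs=4k --iodepth=32 --runtime=1m --direct=1 --filename=/mnt/testfile --name=job1 --ioengine=libaio --thread --group_reporting --numjobs=16 --size=512MB --time_based --ioscheduler=deadline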
Actual results:
- Before running the job, host memory shows:
# free -m
             total       used       free     shared    buffers     cached
Mem:          7615        177       7437          0          8         34
-/+ buffers/cache:         134       7480
Swap:         2047          0       2047
- The whole job runs for about one hour; after roughly 30 minutes the free memory on the host drops to ~200 MB, and eventually the host hangs.
- Please refer to the log:
http://kvm-perf.englab.nay.redhat.com/results/3510-autotest/dell-op780-06.qe.lab.eng.nay.redhat.com/debug/client.0.log
Expected results:
There is no memory leak.
Additional info:
(In reply to Asias He from comment #2)
> Xiaomei, I cannot reproduce this with gluster 3.4.0.34 on my test machine.
> Could you test against the latest gluster package?
I can still reproduce the issue on the latest version.
- Host version
kernel-2.6.32-431.el6.x86_64
qemu-kvm-0.12.1.2-2.415.el6_5.3.x86_64
glusterfs-libs-3.4.0.36rhs-1.el6.x86_64
glusterfs-api-3.4.0.36rhs-1.el6.x86_64
glusterfs-3.4.0.36rhs-1.el6.x86_64
- Guest version
kernel-2.6.32-431.el6.x86_64
- Before running fio test
[root@dell-op780-06 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          7615        424       7190          0          6         44
-/+ buffers/cache:         372       7242
Swap:         2047          0       2047
- After running fio test
[root@dell-op780-06 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          7615       7514        100          0          0         17
-/+ buffers/cache:        7496        119
Swap:         2047        528       1519
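In other words, used memory excluding buffers/cache grew from roughly 372 MB to 7496 MB over the run and 528 MB of swap was consumed, while the guest is only allocated 4096 MB; growth that far beyond the guest allocation points to a leak rather than normal caching.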
This is almost certainly an issue in libglusterfs, rather than qemu itself (both the leak, and comment #5).
For the issue described in comment #5, that sounds like bug #1010638.
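One way to narrow down where the memory goes (qemu process vs. kernel) would be to sample the qemu-kvm RSS alongside /proc/meminfo during the fio run; a rough sketch, assuming standard procps tools:
# while true; do ps -o pid=,rss=,vsz= -C qemu-kvm; grep -E 'MemFree|Slab' /proc/meminfo; sleep 60; done
If the qemu-kvm RSS grows by gigabytes while the guest stays at 4096 MB, the leak is inside the qemu process (and hence in the linked libgfapi/libglusterfs code); if not, the kernel side would need a closer look.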
Red Hat Enterprise Linux 6 is in the Production 3 Phase. During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available.
The official life cycle policy can be reviewed here:
http://redhat.com/rhel/lifecycle
This issue does not meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX. If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Note that a strong business justification will be required for re-evaluation. Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL:
https://access.redhat.com/