Bug 1400059
Summary: block/gluster: use one glfs instance per volume

| Field | Value | Field | Value |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Prasanna Kumar Kalever <prasanna.kalever> |
| Component: | qemu-kvm-rhev | Assignee: | Jeff Cody <jcody> |
| Status: | CLOSED ERRATA | QA Contact: | Suqin Huang <shuang> |
| Severity: | unspecified | Priority: | high |
| Version: | 7.4 | CC: | aliang, bmcclain, chayang, coli, hachen, juzhang, knoel, meyang, michen, mrezanin, ngu, pingl, prasanna.kalever, sabose, snagar, virt-maint, xuwei |
| Target Milestone: | rc | Keywords: | ZStream |
| Target Release: | --- | Hardware: | Unspecified |
| OS: | Unspecified | Fixed In Version: | qemu 2.8.0 |
| Doc Type: | If docs needed, set a value | Type: | Bug |
| Last Closed: | 2017-08-01 23:39:45 UTC | Cloned as: | 1413044 (view as bug list) |
| Bug Blocks: | 1413044 | | |
Description
Prasanna Kumar Kalever
2016-11-30 11:36:35 UTC
Needs a backport [1] of:

    commit 6349c15410361d3fe52c9beee309954d606f8ccd
    Refs: v2.7.0-1742-g6349c15
    Author:     Prasanna Kumar Kalever <prasanna.kalever>
    AuthorDate: Thu Oct 27 20:54:50 2016 +0530
    Commit:     Jeff Cody <jcody>
    CommitDate: Tue Nov 1 07:55:57 2016 -0400

    block/gluster: memory usage: use one glfs instance per volume

    Currently, for every drive accessed via gfapi we create a new glfs
    instance (a glfs_new() call followed by glfs_init()), which can consume
    memory in the few hundred MBs. From the table below, each instance
    consumed roughly 300 MB of VSZ.

    Before:
    -------
    Disks   VSZ      RSS
    1       1098728  187756
    2       1430808  198656
    3       1764932  199704
    4       2084728  202684

    This patch maintains a list of pre-opened glfs objects. When a new
    drive belonging to the same gluster volume is added, the existing glfs
    object is reused and its refcount updated. With this approach we shrink
    the unwanted memory consumption and avoid the extra glfs_new()/
    glfs_init() calls when the disk (file) being accessed belongs to a
    volume that is already open. The table below shows that the memory
    usage after adding a disk (which reuses the existing glfs object) is
    negligible compared to before.

    After:
    ------
    Disks   VSZ      RSS
    1       1101964  185768
    2       1109604  194920
    3       1114012  196036
    4       1114496  199868

    Disks: number of -drive
    VSZ: virtual memory size of the process, in KiB
    RSS: resident set size, the non-swapped physical memory, in KiB

    VSZ and RSS were measured with the 'ps aux' utility.

    Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever>
    Reviewed-by: Jeff Cody <jcody>
    Message-id: 1477581890-4811-1-git-send-email-prasanna.kalever
    Signed-off-by: Jeff Cody <jcody>

[1] https://lists.gnu.org/archive/html/qemu-devel/2016-10/msg07087.html

Patches should be in QEMU 2.8.0.

Hi Prasanna, I am QE from the KVM team. I want to try to reproduce this issue on RHEL 7.3.z; could you show me more detailed steps? Thanks.

Hi Yang, I don't have the commands at hand currently, but I can help you with the steps.

Without this patch:
1. Create a gluster volume.
2. Copy the VM image and additional disk images to the volume.
3. Run QEMU with the VM image and additional drives (let's say 4 disks) over the gluster libgfapi interface. The URI looks like gluster://hostname/volname/image.qcow2.
4. Note down the memory usage of the qemu process.

Now upgrade to a version which has the patch and repeat steps 1-4. You will certainly notice the difference in memory usage.

Note: the VM image and the extra disk files should reside on the same gluster volume. Make sure you access the VM image and the other disk files only via the gfapi interface (not via a fuse/local mount).

-- Prasanna

Hi Prasanna, I didn't see a big difference between qemu-kvm-rhev-2.6.0-28.el7_3.3.x86_64 and qemu-kvm-rhev-2.8.0-1.el7.x86_64 according to Comment 9; is that acceptable? Thanks, Suqin Huang

                      VSZ      RSS
    os disk + 1 disk  9336152  1520476
    os disk + 2 disk  9447136  1519720
    os disk + 3 disk  9566716  1578592
    os disk + 4 disk  9684400  1654564

    For each instance ~200 MB VSZ was consumed.

The issue is ok for me now, thanks.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:2392