Bug 1319332

Summary: Memory overcommitted
Product: Red Hat Gluster Storage
Reporter: Fred Yang <fyang13>
Component: core
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED CANTFIX
QA Contact: Anoop <annair>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: rhs-bugs
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-02-07 04:20:02 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Fred Yang 2016-03-18 21:09:03 UTC
Description of problem:
Glusterfsd appears to overcommit memory.

Version-Release number of selected component (if applicable):
v3.7.6 / RHEL v6.6

How reproducible:
After glusterfsd has run for ~10 days, memory shows as overcommitted.

Steps to Reproduce:
Nothing special; just run glusterfsd for ~10 days. The cluster node has 24 GB RAM and runs 20 glusterfsd processes.

Actual results:
/proc/meminfo:
 MemTotal:       24414040 kB
 MemFree:         1158296 kB
 Buffers:          405876 kB 
 Cached:         18039396 kB 
 ...
 CommitLimit:    20497320 kB
 Committed_AS:   44178840 kB
 VmallocTotal:   34359738367 kB 
 VmallocUsed:      630888 kB 
 VmallocChunk:   34344910868 kB 
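
The key symptom above is Committed_AS (44178840 kB) exceeding CommitLimit (20497320 kB) by roughly 2x. As a minimal sketch of how to check this on a node, the script below reads the two fields from /proc/meminfo and prints their ratio; the meminfo_kb helper is a hypothetical name, but Committed_AS and CommitLimit are standard Linux /proc/meminfo fields:

```shell
#!/bin/sh
# Sketch: compare committed address space against the commit limit.
# meminfo_kb extracts a single numeric field (in kB) from /proc/meminfo.
meminfo_kb() {
    awk -v key="$1:" '$1 == key { print $2 }' /proc/meminfo
}

committed=$(meminfo_kb Committed_AS)
limit=$(meminfo_kb CommitLimit)

# A ratio above 100% means more address space has been promised to
# processes than strict accounting (vm.overcommit_memory=2) would allow.
awk -v c="$committed" -v l="$limit" \
    'BEGIN { printf "Committed_AS/CommitLimit = %.0f%%\n", 100 * c / l }'
```

With the values reported in this bug, the ratio would come out around 216%.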

Expected results:


Additional info:
Ran 'echo 2 > /proc/sys/vm/drop_caches' to drop the caches; MemFree increased:
 MemTotal:       24414040 kB
 MemFree:        18648448 kB
 Buffers:          436724 kB
 Cached:           673008 kB
But Committed_AS remains high:
 CommitLimit:    20497320 kB
 Committed_AS:   44178840 kB
 VmallocTotal:   34359738367 kB
 VmallocUsed:      630900 kB
 VmallocChunk:   34344910868 kB
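
This is consistent with how the kernel accounts these numbers: drop_caches releases page cache and reclaimable slab, which raises MemFree, but Committed_AS counts virtual address space committed to running processes, so it only falls when processes unmap memory or exit. As a rough, hypothetical way to see how much of that commitment the glusterfsd processes hold, one can sum VmSize from /proc/<pid>/status (an approximation only; VmSize counts all virtual address space, not just charged commitments):

```shell
#!/bin/sh
# Rough attribution sketch (not from the original report): sum VmSize
# across all glusterfsd processes. VmSize is the per-process virtual
# address space in kB; it approximates, but does not equal, each
# process's contribution to Committed_AS.
total_kb=0
for pid in $(pgrep glusterfsd); do
    kb=$(awk '/^VmSize:/ { print $2 }' "/proc/$pid/status" 2>/dev/null)
    total_kb=$((total_kb + ${kb:-0}))
done
echo "glusterfsd total VmSize: ${total_kb} kB"
```

If this total stays near Committed_AS while the page cache fluctuates, the commitment is held by the glusterfsd processes themselves rather than by reclaimable caches.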

Comment 2 Amar Tumballi 2018-02-07 04:20:02 UTC
Thank you for your report. This bug is filed against a component for which no further new development is being undertaken.

If you find the bug relevant in later versions too, please feel free to reopen it.