Bug 1370654

Summary: Auto Tune Cassandra Memory Usage Based on Limit
Product: OpenShift Container Platform
Reporter: Dan McPherson <dmcphers>
Component: Hawkular
Assignee: Matt Wringe <mwringe>
Status: CLOSED ERRATA
QA Contact: chunchen <chunchen>
Severity: medium
Docs Contact:
Priority: medium
Version: 3.3.0
CC: aos-bugs, penli, tdawson, twiest, whearn, wsun
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
The Cassandra instance now automatically configures its memory settings based on the pod's memory and CPU limits. Previously this was configured via environment variables; those variables can still be used to override the memory configuration if desired.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-09-27 09:46:28 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Dan McPherson 2016-08-26 22:37:11 UTC
You can accomplish this in a couple of ways:

1) Read from the cgroup value directly

CONTAINER_MEMORY_IN_BYTES=`cat /sys/fs/cgroup/memory/memory.limit_in_bytes`


or 2) Use the downward API

env:
  - name: MEM_LIMIT
    valueFrom:
      resourceFieldRef:
        resource: limits.memory
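
If a value in more convenient units is wanted, the resourceFieldRef also accepts a divisor (1Mi here is just an example):

env:
  - name: MEM_LIMIT
    valueFrom:
      resourceFieldRef:
        resource: limits.memory
        divisor: 1Mi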

A typical approach is to tune the heap size to 60-70% of the limit value to allow room for native memory from the JVM.  You may want to leave even more space for the Cassandra file cache.
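
For illustration only, a minimal shell sketch of that approach, reading the limit from the cgroup file shown above (the 65% ratio and the variable names are assumptions, not what the image actually does):

# Hypothetical sketch: size the heap to ~65% of the container memory limit.
CONTAINER_MEMORY_IN_BYTES=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)

# Convert to MB and give ~65% to the JVM heap, leaving the rest for
# JVM native memory and the Cassandra file cache.
LIMIT_IN_MB=$((CONTAINER_MEMORY_IN_BYTES / 1024 / 1024))
MAX_HEAP_SIZE="$((LIMIT_IN_MB * 65 / 100))M"

echo "MAX_HEAP_SIZE=${MAX_HEAP_SIZE}"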

Comment 1 Matt Wringe 2016-08-29 13:34:18 UTC
"A typical approach is to tune the heap size to 60-70% of the limit value to allow room for native memory from the JVM.  You may want to leave even more space for the Cassandra file cache."

Cassandra has its own mechanism and recommended approach for memory configuration, which is what we will be using: http://docs.datastax.com/en/cassandra/2.2/cassandra/operations/opsTuneJVM.html
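
For reference, a simplified sketch of that recommended calculation (the calculate_heap_sizes logic from cassandra-env.sh), here assuming the inputs come from the container limits via hypothetical MEMORY_LIMIT_IN_BYTES and CPU_LIMIT variables rather than from the host:

# Simplified sketch of Cassandra's calculate_heap_sizes(), fed from the
# container limits instead of host memory/cores (an assumption here).
system_memory_in_mb=$((MEMORY_LIMIT_IN_BYTES / 1024 / 1024))
half_system_memory_in_mb=$((system_memory_in_mb / 2))
quarter_system_memory_in_mb=$((half_system_memory_in_mb / 2))
[ "$half_system_memory_in_mb" -gt 1024 ] && half_system_memory_in_mb=1024
[ "$quarter_system_memory_in_mb" -gt 8192 ] && quarter_system_memory_in_mb=8192

# MAX_HEAP_SIZE = max(min(1/2 RAM, 1 GB), min(1/4 RAM, 8 GB))
if [ "$half_system_memory_in_mb" -gt "$quarter_system_memory_in_mb" ]; then
    max_heap_size_in_mb=$half_system_memory_in_mb
else
    max_heap_size_in_mb=$quarter_system_memory_in_mb
fi
MAX_HEAP_SIZE="${max_heap_size_in_mb}M"

# HEAP_NEWSIZE = min(100 MB per core, 1/4 of the heap)
max_sensible_yg_in_mb=$((CPU_LIMIT * 100))
desired_yg_in_mb=$((max_heap_size_in_mb / 4))
if [ "$desired_yg_in_mb" -gt "$max_sensible_yg_in_mb" ]; then
    HEAP_NEWSIZE="${max_sensible_yg_in_mb}M"
else
    HEAP_NEWSIZE="${desired_yg_in_mb}M"
fi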

Comment 3 Peng Li 2016-08-31 02:44:59 UTC
Blocked by bug #1371578; will verify once that bug is fixed.

Comment 4 Matt Wringe 2016-08-31 15:04:46 UTC
FYI: #1371578 has been closed as NOTABUG.

Comment 6 Peng Li 2016-09-02 06:32:43 UTC
Verified with Image ID:		docker://sha256:c951c67e1d8a47815d4dadc03ab487009f5f51084641cddad59218030ccaa17a

After deploying metrics, MAX_HEAP_SIZE and HEAP_NEWSIZE are calculated correctly. In my test environment, in cassandra-env.sh, MAX_HEAP_SIZE and HEAP_NEWSIZE are substituted with 1024M and 200M respectively, based on the pod memory limit (3975499776 bytes, which is between 2 GiB and 4 GiB) and CPU_LIMIT (2).
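
For reference, those numbers are consistent with the calculation sketched under comment 1:

# memory limit: 3975499776 bytes ≈ 3791 MB, CPU_LIMIT: 2
# half    = min(3791 / 2, 1024 cap) = 1024 MB
# quarter = min(3791 / 4, 8192 cap) =  947 MB
# MAX_HEAP_SIZE = max(1024, 947)            -> 1024M
# HEAP_NEWSIZE  = min(1024 / 4, 100 * 2) MB ->  200M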

Comment 8 errata-xmlrpc 2016-09-27 09:46:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1933