Bug 1370654 - Auto Tune Cassandra Memory Usage Based on Limit
Summary: Auto Tune Cassandra Memory Usage Based on Limit
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Hawkular
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Matt Wringe
QA Contact: chunchen
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-26 22:37 UTC by Dan McPherson
Modified: 2017-03-08 18:26 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
The Cassandra instance now automatically configures its memory based on the pod's memory and CPU limits. Previously this was configured via environment variables; these can still be used to override the memory configuration if desired.
Clone Of:
Environment:
Last Closed: 2016-09-27 09:46:28 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1933 normal SHIPPED_LIVE Red Hat OpenShift Container Platform 3.3 Release Advisory 2016-09-27 13:24:36 UTC

Description Dan McPherson 2016-08-26 22:37:11 UTC
You can accomplish this in a couple of ways:

1) Read from the cgroup value directly

CONTAINER_MEMORY_IN_BYTES=`cat /sys/fs/cgroup/memory/memory.limit_in_bytes`


or 2) Use the downward api

env:
  - name: MEM_LIMIT
    valueFrom:
      resourceFieldRef:
        resource: limits.memory

A typical approach is to tune the heap size to 60-70% of the limit value to allow room for native memory from the JVM.  You may want to leave even more space for the Cassandra file cache.
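The approach described above can be sketched as a small shell helper. This is a minimal illustration, not the actual OpenShift image script; `heap_from_limit` is a hypothetical function name, and the 60% ratio is the lower end of the range suggested in the description.

```shell
# Hypothetical sketch: derive a heap size as ~60% of a memory limit given
# in bytes, leaving the remainder for JVM native memory and the file cache.
heap_from_limit() {
  local limit_bytes=$1
  # 60% of the limit, converted from bytes to whole megabytes.
  local heap_bytes=$(( limit_bytes * 60 / 100 ))
  echo "$(( heap_bytes / 1024 / 1024 ))M"
}

# The limit itself would come from either source described above, e.g.:
#   CONTAINER_MEMORY_IN_BYTES=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
#   CONTAINER_MEMORY_IN_BYTES=$MEM_LIMIT   # downward API env var
heap_from_limit 2147483648   # 2 GiB limit -> prints 1228M
```

Either input source yields the same calculation; the downward-API variant avoids depending on the cgroup filesystem layout inside the container.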

Comment 1 Matt Wringe 2016-08-29 13:34:18 UTC
"A typical approach is to tune the heap size to 60-70% of the limit value to allow room for native memory from the JVM.  You may want to leave even more space for the Cassandra file cache."

Cassandra has its own mechanism and recommended approaches for its memory configuration, which is what we will be using: http://docs.datastax.com/en/cassandra/2.2/cassandra/operations/opsTuneJVM.html

Comment 3 Peng Li 2016-08-31 02:44:59 UTC
Blocked by #1371578; will verify once that bug is fixed.

Comment 4 Matt Wringe 2016-08-31 15:04:46 UTC
FYI: #1371578 has been closed as NOTABUG.

Comment 6 Peng Li 2016-09-02 06:32:43 UTC
Verified with Image ID:		docker://sha256:c951c67e1d8a47815d4dadc03ab487009f5f51084641cddad59218030ccaa17a

After deploying metrics, MAX_HEAP_SIZE and HEAP_NEWSIZE are calculated correctly. In my test env,
in the file cassandra-env.sh, MAX_HEAP_SIZE and HEAP_NEWSIZE are substituted with 1024M and 200M, based on the pod memory limit (3975499776 bytes, which is between 2 and 4 GB) and CPU_LIMIT (2).
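The verified values are consistent with the DataStax-recommended heap formula linked in comment 1 (MAX_HEAP_SIZE = max(min(1/2 RAM, 1024 MB), min(1/4 RAM, 8192 MB)); HEAP_NEWSIZE = min(100 MB per core, 1/4 of MAX_HEAP_SIZE)). A sketch of that calculation, assuming inputs in MB and cores (`calculate_heap`, `min`, and `max` are hypothetical helpers, not the shipped cassandra-env.sh):

```shell
# Hypothetical sketch of the DataStax heap sizing rules, not the real script.
min() { if [ "$1" -lt "$2" ]; then echo "$1"; else echo "$2"; fi; }
max() { if [ "$1" -gt "$2" ]; then echo "$1"; else echo "$2"; fi; }

calculate_heap() {
  local mem_mb=$1 cpu_cores=$2
  # MAX_HEAP_SIZE = max(min(1/2 RAM, 1024 MB), min(1/4 RAM, 8192 MB))
  local half quarter max_heap new_size
  half=$(min $(( mem_mb / 2 )) 1024)
  quarter=$(min $(( mem_mb / 4 )) 8192)
  max_heap=$(max "$half" "$quarter")
  # HEAP_NEWSIZE = min(100 MB * cores, 1/4 * MAX_HEAP_SIZE)
  new_size=$(min $(( 100 * cpu_cores )) $(( max_heap / 4 )))
  echo "${max_heap}M ${new_size}M"
}

calculate_heap 3791 2   # 3975499776 bytes ~= 3791 MB, CPU_LIMIT=2 -> prints: 1024M 200M
```

With the limits from this test environment the sketch reproduces the observed 1024M / 200M values.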

Comment 8 errata-xmlrpc 2016-09-27 09:46:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1933

