Bug 1370654 - Auto Tune Cassandra Memory Usage Based on Limit
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: Hawkular
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Assigned To: Matt Wringe
chunchen
Depends On:
Blocks:
Reported: 2016-08-26 18:37 EDT by Dan McPherson
Modified: 2017-03-08 13 EST
CC: 6 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
The Cassandra instance will now automatically configure its memory based on the pod's memory and CPU limits. Previously, this was configured via environment variables; these can still be used to override the memory configuration if desired.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-09-27 05:46:28 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1933 normal SHIPPED_LIVE Red Hat OpenShift Container Platform 3.3 Release Advisory 2016-09-27 09:24:36 EDT

Description Dan McPherson 2016-08-26 18:37:11 EDT
You can accomplish this in a couple of ways:

1) Read from the cgroup value directly

CONTAINER_MEMORY_IN_BYTES=`cat /sys/fs/cgroup/memory/memory.limit_in_bytes`


or 2) Use the downward API

env:
  - name: MEM_LIMIT
    valueFrom:
      resourceFieldRef:
        resource: limits.memory
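For context, that env fragment would sit inside a container spec alongside the limits it reads; a minimal hypothetical sketch (names and limit values are illustrative only):

```yaml
# Hypothetical container spec: the memory and CPU limits set under
# resources.limits are exposed to the container via the downward API.
spec:
  containers:
  - name: cassandra
    resources:
      limits:
        memory: 2Gi
        cpu: "2"
    env:
    - name: MEM_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
```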

A typical approach is to tune the heap size to 60-70% of the limit value to allow room for native memory from the JVM.  You may want to leave even more space for the Cassandra file cache.
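That rule of thumb can be sketched in shell (my own illustration of the suggestion above, not the shipped fix; the 65% figure is an assumed midpoint of the 60-70% range):

```shell
# Size the heap at ~65% of the container limit (assumed midpoint of the
# 60-70% guideline), leaving headroom for JVM native memory and the
# Cassandra file cache. The limit may be preset via the downward API,
# or read from the cgroup file as in option 1.
CONTAINER_MEMORY_IN_BYTES=${CONTAINER_MEMORY_IN_BYTES:-$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)}
heap_mb=$(( CONTAINER_MEMORY_IN_BYTES * 65 / 100 / 1024 / 1024 ))
MAX_HEAP_SIZE="${heap_mb}M"
echo "MAX_HEAP_SIZE=${MAX_HEAP_SIZE}"
```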
Comment 1 Matt Wringe 2016-08-29 09:34:18 EDT
"A typical approach is to tune the heap size to 60-70% of the limit value to allow room for native memory from the JVM.  You may want to leave even more space for the Cassandra file cache."

Cassandra has its own mechanism and recommended approaches for its memory configuration, which is what we will be using: http://docs.datastax.com/en/cassandra/2.2/cassandra/operations/opsTuneJVM.html
Comment 3 Peng Li 2016-08-30 22:44:59 EDT
Blocked by #1371578; will verify once that bug is fixed.
Comment 4 Matt Wringe 2016-08-31 11:04:46 EDT
FYI: #1371578 has been closed as NOTABUG.
Comment 6 Peng Li 2016-09-02 02:32:43 EDT
Verified with Image ID: docker://sha256:c951c67e1d8a47815d4dadc03ab487009f5f51084641cddad59218030ccaa17a

After deploying metrics, MAX_HEAP_SIZE and HEAP_NEWSIZE are calculated correctly. In my test environment, cassandra-env.sh substitutes MAX_HEAP_SIZE and HEAP_NEWSIZE with 1024M and 200M, based on the pod memory limit (3975499776 bytes, which is between 2G and 4G) and CPU_LIMIT (2).
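For reference, the stock cassandra-env.sh sizing heuristic (my paraphrase of the approach in the DataStax docs linked in comment 1, not the image's literal code) reproduces those numbers:

```shell
#!/usr/bin/env bash
# Sketch of the stock cassandra-env.sh heuristic (an assumption, not the
# image's actual script):
#   MAX_HEAP_SIZE = max(min(mem/2, 1024M), min(mem/4, 8192M))
#   HEAP_NEWSIZE  = min(100M * cores, MAX_HEAP_SIZE / 4)
calc_max_heap_mb() {
  local mem_mb=$1
  local half=$(( mem_mb / 2 )) quarter=$(( mem_mb / 4 ))
  (( half > 1024 )) && half=1024        # cap mem/2 at 1024M
  (( quarter > 8192 )) && quarter=8192  # cap mem/4 at 8192M
  if (( half > quarter )); then echo "$half"; else echo "$quarter"; fi
}

calc_heap_newsize_mb() {
  local max_heap_mb=$1 cores=$2
  local by_core=$(( cores * 100 )) by_heap=$(( max_heap_mb / 4 ))
  if (( by_core < by_heap )); then echo "$by_core"; else echo "$by_heap"; fi
}

# Values from the verification run: 3975499776-byte limit, CPU_LIMIT=2
mem_mb=$(( 3975499776 / 1024 / 1024 ))
max_heap=$(calc_max_heap_mb "$mem_mb")
new_size=$(calc_heap_newsize_mb "$max_heap" 2)
echo "MAX_HEAP_SIZE=${max_heap}M HEAP_NEWSIZE=${new_size}M"
```

Run on those inputs, this prints MAX_HEAP_SIZE=1024M HEAP_NEWSIZE=200M, matching the substituted values observed above.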
Comment 8 errata-xmlrpc 2016-09-27 05:46:28 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1933
