You can accomplish this in one of two ways:
1) read the value directly from the cgroup filesystem, or
2) use the downward API to expose it as an environment variable:
- name: MEM_LIMIT
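A minimal sketch of both options in shell (the helper name `container_mem_limit` is illustrative, and the cgroup v1 path assumes a memory limit is set on the container):

```shell
# Return the container memory limit in bytes, preferring an env var
# injected via the downward API (option 2) and falling back to reading
# the cgroup v1 filesystem directly (option 1).
container_mem_limit() {
    if [ -n "$MEM_LIMIT" ]; then
        echo "$MEM_LIMIT"                                   # downward API
    elif [ -r /sys/fs/cgroup/memory/memory.limit_in_bytes ]; then
        cat /sys/fs/cgroup/memory/memory.limit_in_bytes     # cgroup v1
    fi
}
```

With the downward API, the `MEM_LIMIT` variable above would be populated from `resourceFieldRef: limits.memory` in the pod spec, so the startup script never has to know which mechanism supplied the value.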
A typical approach is to tune the heap size to 60-70% of the limit value to allow room for native memory from the JVM. You may want to leave even more space for the Cassandra file cache.
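The 60-70% rule can be sketched as a small helper (the function name and the 60% figure used in the example are illustrative):

```shell
# heap_from_limit <limit_bytes> <percent>  ->  heap size in MB
# e.g. MAX_HEAP_SIZE="$(heap_from_limit "$MEM_LIMIT" 60)M"
heap_from_limit() {
    echo $(( $1 * $2 / 100 / 1024 / 1024 ))
}
```

Dropping the percentage further (or subtracting a fixed reservation) is how you would leave the extra room for the Cassandra file cache mentioned above.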
"A typical approach is to tune the heap size to 60-70% of the limit value to allow room for native memory from the JVM. You may want to leave even more space for the Cassandra file cache."
Cassandra has its own mechanism and recommended approaches for memory configuration, which is what we will be using: http://docs.datastax.com/en/cassandra/2.2/cassandra/operations/opsTuneJVM.html
Blocked by #1371578; will verify once that bug is fixed.
FYI: #1371578 has been closed as NOTABUG.
Verified with Image ID: docker://sha256:c951c67e1d8a47815d4dadc03ab487009f5f51084641cddad59218030ccaa17a
After deploying metrics, MAX_HEAP_SIZE and HEAP_NEWSIZE are calculated correctly. In my test environment,
in the file cassandra-env.sh, MAX_HEAP_SIZE and HEAP_NEWSIZE are substituted with 1024M and 200M, based on the pod memory limit (3975499776 bytes, which is between 2G and 4G) and CPU_LIMIT (2).
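For reference, the auto-sizing logic in the stock cassandra-env.sh (as described in the DataStax tuning docs linked earlier) reproduces those values; a sketch, with the function name and output format chosen here for illustration:

```shell
# Stock cassandra-env.sh sizing rules:
#   MAX_HEAP_SIZE = max(min(ram/2, 1024MB), min(ram/4, 8192MB))
#   HEAP_NEWSIZE  = min(100MB * cores, MAX_HEAP_SIZE / 4)
calc_heap_sizes() {
    mem_mb=$(( $1 / 1024 / 1024 ))      # $1 = memory limit in bytes
    cores=$2                            # $2 = CPU limit
    half=$(( mem_mb / 2 < 1024 ? mem_mb / 2 : 1024 ))
    quarter=$(( mem_mb / 4 < 8192 ? mem_mb / 4 : 8192 ))
    max_heap=$(( half > quarter ? half : quarter ))
    new_size=$(( 100 * cores < max_heap / 4 ? 100 * cores : max_heap / 4 ))
    echo "MAX_HEAP_SIZE=${max_heap}M HEAP_NEWSIZE=${new_size}M"
}
```

For a 3975499776-byte limit and 2 CPUs this yields 1024M and 200M, matching the substituted values observed in the verification above.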
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.