Description of problem:
One of our 3 infrastructure nodes was scheduled with all three hawkular-cassandra pods, which consumed the majority of the memory on the box and effectively disabled logging.

Version-Release number of selected component (if applicable):
v3.11.0-0.21.0

Actual results:
See attachment for problematic pod distribution.

Expected results:
Pods should distribute evenly across HA infra nodes. Logging uses podAntiAffinity in the pod spec to achieve this.
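For reference, an anti-affinity stanza that spreads the Cassandra replicas across nodes could look roughly like the sketch below. The label selector, topology key, and image are illustrative assumptions, not the actual values from the metrics pod template.

  # Hypothetical sketch of a podAntiAffinity rule that discourages scheduling
  # more than one hawkular-cassandra pod per node; labels and image are assumed.
  apiVersion: v1
  kind: Pod
  metadata:
    name: hawkular-cassandra-1
    labels:
      type: hawkular-cassandra          # assumed label; check the real pod template
  spec:
    affinity:
      podAntiAffinity:
        # "preferred" still allows co-location if no other node can take the pod;
        # requiredDuringSchedulingIgnoredDuringExecution would refuse to schedule instead.
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                type: hawkular-cassandra
            topologyKey: kubernetes.io/hostname
    containers:
    - name: hawkular-cassandra
      image: registry.example.com/openshift3/metrics-cassandra:v3.11   # placeholder image

With the preferred form, the scheduler tries to spread the pods but can still co-locate them if only one infra node has capacity; the required form would leave a pod Pending in that case.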
How are you checking memory usage? Seeing high memory usage is normal and expected for something like Cassandra, which relies heavily on mmap for I/O. Most writes in Cassandra go to the operating system file cache, and Cassandra defers to the OS to decide when something should be written out to disk. I explained this in a little more detail at https://bugzilla.redhat.com/show_bug.cgi?id=1596327#c11.
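One rough way to separate anonymous memory from file-cache/mmap usage for the Cassandra container is to look at its cgroup memory.stat. The pod name and project below are placeholders; on an OCP 3.11 node (cgroup v1) the file breaks container memory into rss, cache, and mapped_file:

  # Pod name and namespace are placeholders for the actual hawkular-cassandra pod.
  oc exec <hawkular-cassandra-pod> -n openshift-infra -- \
    sh -c 'grep -E "^(rss|cache|mapped_file) " /sys/fs/cgroup/memory/memory.stat'

If most of the reported usage shows up under cache/mapped_file rather than rss, it is page cache from mmap'd SSTables that the kernel can reclaim under pressure, not memory the JVM is holding on to.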