Bug 1539191 - Upgrade to Cassandra 3.0.15
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: Hawkular
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Release: 3.7.z
Assigned To: Ruben Vargas Palma
QA Contact: Junqi Zhao
Reported: 2018-01-26 16:16 EST by John Sanda
Modified: 2018-04-05 05:36 EDT
Last Closed: 2018-04-05 05:36:18 EDT
Type: Bug
External Trackers:
Red Hat Product Errata RHBA-2018:0636 (last updated 2018-04-05 05:36 EDT)
Description John Sanda 2018-01-26 16:16:32 EST
Description of problem:
OCP 3.7 uses Cassandra 3.0.14, which has a critical bug that we have hit: https://issues.apache.org/jira/browse/CASSANDRA-13696. When the bug triggers, the Cassandra process keeps running but shuts down most of its services, so it stops accepting client requests and stops gossiping with other Cassandra nodes. The rest of the C* cluster marks the affected C* node down, and any requests targeted at that node result in UnavailableExceptions.

On the affected node itself, the nodetool status command still reports UN, i.e., up and normal; consequently, the liveness probe does not fail, and the C* cluster remains in a bad state. Running nodetool status from another C* pod in the cluster will, however, report the affected C* node as down.
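Because the broken node still reports itself as UN, a check has to run from a healthy peer. The helper below is only a sketch: the function name is made up, and the pod name in the usage comment is a placeholder for your actual hawkular-cassandra pod.

```shell
# Hypothetical helper: filter `nodetool status` output for nodes whose
# state column begins with "D" (e.g. DN = Down/Normal).
check_down_nodes() {
  awk '$1 ~ /^D/'
}

# Usage (run from a *healthy* C* pod; pod name is a placeholder):
#   oc rsh hawkular-cassandra-1 nodetool status | check_down_nodes
```

If this prints any rows, the listed peers are the candidates hit by CASSANDRA-13696.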

There is a temporary workaround. First, delete everything in the /cassandra_data/data/system/hints-<some_hash>/ directory, then restart the Cassandra pod. Restarting the pod is a bit of a brute-force approach; there is another option that is slightly more involved but gets Cassandra back up much more quickly. Here are the steps:

1) oc rsh <cassandra_pod>
2) nodetool enablebackup
3) nodetool enablegossip
4) nodetool enablebinary

Within a few seconds the rest of the C* cluster should start reporting the affected C* node as up. Hinted handoff will still be disabled, which is fine given the impact of CASSANDRA-13696. Note, though, that a pod restart will re-enable hinted handoff; it can be disabled again with `nodetool disablehandoff`.
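The recovery steps above can be wrapped in a small script. This is only a sketch, not a supported tool: the function name is invented, the pod name is a placeholder, and `OC=echo` can be exported first to dry-run it.

```shell
#!/bin/sh
# Sketch: in-place recovery for CASSANDRA-13696 without restarting the pod.
OC=${OC:-oc}   # set OC=echo to dry-run the commands

recover_cassandra_node() {
  pod="$1"
  # Re-enable the services the bug shut down (steps 2-4 above).
  $OC rsh "$pod" nodetool enablebackup
  $OC rsh "$pod" nodetool enablegossip
  $OC rsh "$pod" nodetool enablebinary
  # Keep hinted handoff disabled; a pod restart would re-enable it.
  $OC rsh "$pod" nodetool disablehandoff
}

# Example (placeholder pod name):
#   recover_cassandra_node hawkular-cassandra-1
```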

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
Comment 2 Junqi Zhao 2018-02-21 21:15:17 EST
Verified with metrics-cassandra/images/v3.7.31-1: CASSANDRA_VERSION is now 3.0.15, and it passed smoke testing. Setting to VERIFIED.

# openshift version
openshift v3.7.31
kubernetes v1.7.6+a08f5eeb62
etcd 3.2.8

# oc rsh ${hawkular-cassandra_pod}
sh-4.2$ env | grep -i version
JBOSS_IMAGE_VERSION=1.3
JAVA_VERSION=1.8.0
CASSANDRA_VERSION=3.0.15.redhat-1
Comment 6 errata-xmlrpc 2018-04-05 05:36:18 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0636
