Bug 1511631 - Cassandra readiness probe can incorrectly fail in multi node setup
Summary: Cassandra readiness probe can incorrectly fail in multi node setup
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Hawkular
Version: 3.3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 3.3.1
Assignee: Ruben Vargas Palma
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On: 1494673
Blocks: 1496228 1511627 1511628 1511629
 
Reported: 2017-11-09 18:38 UTC by Matt Wringe
Modified: 2018-04-18 07:02 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1494673
Environment:
Last Closed: 2018-04-18 07:00:36 UTC
Target Upstream Version:
Embargoed:




Links:
System ID: Red Hat Product Errata RHBA-2018:1134
Last Updated: 2018-04-18 07:02:00 UTC

Description Matt Wringe 2017-11-09 18:38:44 UTC
+++ This bug was initially created as a clone of Bug #1494673 +++

Our Cassandra readiness probe parses the output of 'nodetool status' to determine whether the Cassandra instance is in the 'up' and 'normal' state.

Our string parsing of this output has an issue in certain situations: if the string value of the current host's IP address is contained within the IP address of another node in the cluster, we end up parsing two lines of the output instead of just one.

For instance, consider a two-node Cassandra cluster whose nodes have the IP addresses '172.17.0.3' and '172.17.0.30': a match for the first address also hits the second ('72.17.0.3' and '172.17.0.3' would cause the same problem).

Because of how we parse this output, our script incorrectly tries to handle both entries from 'nodetool status' instead of just the one for the current node.

This will cause the readiness probe to get unexpected information and fail.
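
A minimal sketch of the substring problem (the sample 'nodetool status' rows, token IDs, and variable names below are illustrative, not taken from the actual probe script):

POD_IP=172.17.0.3
# Two 'nodetool status' rows whose addresses overlap as substrings:
STATUS_OUTPUT='UN  172.17.0.3   128.4 KB  256  100.0%  aaaa-1111  rack1
UN  172.17.0.30  127.9 KB  256  100.0%  bbbb-2222  rack1'
# A plain substring match hits both rows, because "172.17.0.3" is
# contained in "172.17.0.30":
echo "$STATUS_OUTPUT" | grep -c "$POD_IP"    # prints 2, not 1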

If the pod is brought down and restarted, it should be granted a new IP address that no longer conflicts with the other node's address, and the probe can then succeed.

--- Additional comment from Matt Wringe on 2017-09-22 15:16:20 EDT ---

Simple PR that fixes this issue by checking for whitespace before and after the IP address, thus preventing the script from treating one node's address as part of another's: https://github.com/openshift/origin-metrics/pull/380
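
Continuing the illustrative sketch from the description above (the exact pattern used in the PR may differ), anchoring the match on surrounding whitespace selects only the current node's row:

echo "$STATUS_OUTPUT" | grep -c " $POD_IP "    # prints 1
# " 172.17.0.3 " no longer matches inside "172.17.0.30", because the
# character right after the address must now be whitespace.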

--- Additional comment from Junqi Zhao on 2017-09-30 05:29:33 EDT ---

@Matt
Which image version contains the fix?
Do we still need to verify that it fails with previous versions?

--- Additional comment from Junqi Zhao on 2017-09-30 06:07:23 EDT ---

Tested with the currently latest image, metrics-cassandra-v3.7.0-0.135.0.0; it returned "Cassandra is in the up and normal state".

--- Additional comment from Matt Wringe on 2017-10-06 13:24:36 EDT ---

(In reply to Junqi Zhao from comment #2)
> @Matt
> Which image version contains the fix?

The latest 3.7 release should have this fixed.

> Do we still need to verify that it fails with previous versions?

It's the exact same change as https://bugzilla.redhat.com/show_bug.cgi?id=1496228, which I believe was verified there.

--- Additional comment from Junqi Zhao on 2017-10-08 20:38:48 EDT ---

Closed based on Comment 3 and Comment 4

Comment 2 Junqi Zhao 2018-01-17 09:00:23 UTC
Tested with metrics-cassandra-3.3.1.27, following the test steps in https://bugzilla.redhat.com/show_bug.cgi?id=1496228#c5; it returned "Cassandra is in the up and normal state".

sh-4.2$ source /opt/apache-cassandra/bin/cassandra-docker-ready.sh
Cassandra is in the up and normal state. It is now ready.

Images:
metrics-deployer/images/v3.3-1002
metrics-cassandra/images/3.3.1-27
metrics-heapster/images/3.3.1-24
metrics-hawkular-metrics/images/3.3.1-24

env:
# openshift version
openshift v3.3.1.46.34
kubernetes v1.3.0+52492b4
etcd 2.3.0+git

Set this defect to VERIFIED

Comment 5 errata-xmlrpc 2018-04-18 07:00:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1134

