Bug 1376674 - hawkular-metrics is getting error due to Cassandra timeout during write query
Summary: hawkular-metrics is getting error due to Cassandra timeout during write query
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Hawkular
Version: 3.2.1
Hardware: All
OS: All
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: John Sanda
QA Contact: Peng Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-09-16 07:02 UTC by Kenjiro Nakayama
Modified: 2021-06-10 11:32 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-01-18 12:53:49 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
Red Hat Product Errata RHBA-2017:0066 (normal, SHIPPED_LIVE) - Red Hat OpenShift Container Platform 3.4 RPM Release Advisory - Last Updated: 2017-01-18 17:23:26 UTC

Description Kenjiro Nakayama 2016-09-16 07:02:54 UTC
Description of problem:
===

* After deploying Cluster Metrics, hawkular-metrics logs the following error and the metrics deployment does not work at all.

  10:06:13,954 INFO  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] (metricsservice-lifecycle-thread) HAWKMETRICS200005: Metrics service started
  11:48:47,690 ERROR [org.hawkular.metrics.api.jaxrs.util.ApiUtils] (http-/0.0.0.0:8444-10) HAWKMETRICS200010: Failed to process request: java.lang.RuntimeException: java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency ONE (1 replica were required but only 0 acknowledged the write)
          at rx.observables.BlockingObservable.blockForSingle(BlockingObservable.java:472) [rxjava-1.0.13.redhat-2.jar:1.0.13.redhat-2]
          at rx.observables.BlockingObservable.lastOrDefault(BlockingObservable.java:262) [rxjava-1.0.13.redhat-2.jar:1.0.13.redhat-2]
          at org.hawkular.metrics.api.jaxrs.handler.CounterHandler.addData(CounterHandler.java:172) [classes:]

     ...

  Caused by: java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency ONE (1 replica were required but only 0 acknowledged the write)

     ...

  Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency ONE (1 replica were required but only 0 acknowledged the write)
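
  For anyone triaging a similar report, a quick way to confirm that the timeouts are coming from the Cassandra side rather than from Hawkular Metrics itself is to check the Cassandra pod directly. A minimal sketch, assuming the default openshift-infra project; the pod name below is illustrative only:

    # List the Cassandra pods (names will differ per deployment)
    oc get pods -n openshift-infra | grep hawkular-cassandra

    # Run nodetool inside the Cassandra pod (substitute the real pod name)
    oc exec hawkular-cassandra-1-xxxxx -- nodetool status     # nodes should be UN (Up/Normal)
    oc exec hawkular-cassandra-1-xxxxx -- nodetool tpstats    # look for blocked or dropped MUTATIONs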

Version-Release number of selected component (if applicable):
===

* registry.access.redhat.com/openshift3/metrics-hawkular-metrics:3.2.1

How reproducible:

* Nothing special; it appears to happen about 30 minutes after deploying the pod.

Actual results:

* Please see the error above and the attachments.

Expected results:

* There are no errors.

Comment 25 Matt Wringe 2016-10-31 21:09:12 UTC
I am going to mark this for upcomingRelease.

The only things we can really ask users to do, when the disk is too slow for writes to keep up, are to:

1) have the user run another Cassandra instance. This should spread the writes across two different volumes, which should help alleviate a single bottleneck.

2) have an admin move the commit logs to another volume. This is more of a manual affair, but they can configure the pod to write commit logs to a host volume or pod volume instead of the main data volume (see the sketch below). The origin docs are available at https://github.com/openshift/origin-metrics/blob/master/docs/cassandra_advance.adoc#moving-the-commit-logs-to-another-volume and should make it into the 3.4 docs soon as well.
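
For reference, a minimal sketch of option 2, assuming an emptyDir volume and the standard cassandra.yaml commitlog_directory setting. The volume and mount names are made up for illustration, and the exact supported steps are the ones in the origin-metrics doc linked above:

  # Add a separate volume for commit logs to the Cassandra replication controller
  # (hypothetical volume name; a hostPath or PVC volume is added the same way)
  oc volume rc/hawkular-cassandra-1 --add \
      --name=cassandra-commitlog --type=emptyDir \
      --mount-path=/cassandra_commitlog

  # Then point Cassandra at the new path in cassandra.yaml:
  #   commitlog_directory: /cassandra_commitlog

An emptyDir only helps if the node's local disk is faster than the data volume; a dedicated host volume is closer to what the linked doc describes.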

Comment 26 Peng Li 2016-11-01 08:28:35 UTC
The above solution has been verified on Origin. I'll change the status once it has been tested in OCP 3.4.

Comment 28 Peng Li 2016-11-08 08:37:16 UTC
The 'move commit logs to another volume (not the Cassandra PV)' test has passed in OCP 3.4 with Metrics 3.4.0. Can I set the status to Verified now? Thanks.

[root@host-8-174-32 ~]# openshift version
openshift v3.4.0.23+24b1a58
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

Comment 29 Matt Wringe 2016-11-08 15:20:32 UTC
If it should be moved to verified, please do so.

Comment 31 errata-xmlrpc 2017-01-18 12:53:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0066

Comment 32 Richard Foyle 2018-04-20 19:03:44 UTC
Please let me know if this case is a match for this bug. Here are the tpstats:


# oc rsh hawkular-cassandra-1-jo4rm
sh-4.2$ nodetool tpstats
Picked up JAVA_TOOL_OPTIONS: -Duser.home=/home/jboss -Duser.name=jboss
Pool Name                    Active   Pending      Completed   Blocked  All time blocked
MutationStage                     0         0     1957019343         0                 0
ReadStage                         0         0         537200         0                 0
RequestResponseStage              0         0      922180704         0                 0
ReadRepairStage                   0         0              0         0                 0
CounterMutationStage              0         0              0         0                 0
HintedHandoff                     0         0              8         0                 0
MiscStage                         0         0              0         0                 0
CompactionExecutor                0         0         676888         0                 0
MemtableReclaimMemory             0         0          16868         0                 0
PendingRangeCalculator            0         0              4         0                 0
GossipStage                       0         0        3268001         0                 0
MigrationStage                    0         0              0         0                 0
MemtablePostFlush                 0         0          35074         0                 0
ValidationExecutor                0         0              0         0                 0
Sampler                           0         0              0         0                 0
MemtableFlushWriter               0         0          16868         0                 0
InternalResponseStage             0         0            681         0                 0
AntiEntropyStage                  0         0              0         0                 0
CacheCleanupExecutor              0         0              0         0                 0
Native-Transport-Requests         0         0     3128841115         0           7299490

Message type           Dropped
READ                         0
RANGE_SLICE                  0
_TRACE                       0
MUTATION                   719
COUNTER_MUTATION             0
REQUEST_RESPONSE             0
PAGED_RANGE                  0
READ_REPAIR                  0
sh-4.2$
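
Not a definitive read, but the two numbers that usually matter for this bug are the dropped MUTATION count (719) and the "All time blocked" value for Native-Transport-Requests (7299490); both being non-zero is consistent with the commit-log disk not keeping up with writes. A quick, illustrative way to see whether they are still growing (run inside the Cassandra pod; the grep patterns simply match the rows shown above):

  # Sample the relevant tpstats counters once a minute
  while true; do
      date
      nodetool tpstats | grep -E 'Native-Transport-Requests|^MUTATION'
      sleep 60
  done

If the counters keep climbing, the mitigations from comment 25 (a second Cassandra instance or a separate, faster commit-log volume) would apply.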

