Description of problem:
===
* After deploying Cluster Metrics, hawkular-metrics logs the following error and the metrics deployment does not work at all:

10:06:13,954 INFO  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] (metricsservice-lifecycle-thread) HAWKMETRICS200005: Metrics service started
11:48:47,690 ERROR [org.hawkular.metrics.api.jaxrs.util.ApiUtils] (http-/0.0.0.0:8444-10) HAWKMETRICS200010: Failed to process request: java.lang.RuntimeException: java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency ONE (1 replica were required but only 0 acknowledged the write)
	at rx.observables.BlockingObservable.blockForSingle(BlockingObservable.java:472) [rxjava-1.0.13.redhat-2.jar:1.0.13.redhat-2]
	at rx.observables.BlockingObservable.lastOrDefault(BlockingObservable.java:262) [rxjava-1.0.13.redhat-2.jar:1.0.13.redhat-2]
	at org.hawkular.metrics.api.jaxrs.handler.CounterHandler.addData(CounterHandler.java:172) [classes:]
	...
Caused by: java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency ONE (1 replica were required but only 0 acknowledged the write)
	...
Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency ONE (1 replica were required but only 0 acknowledged the write)

Version-Release number of selected component (if applicable):
===
* registry.access.redhat.com/openshift3/metrics-hawkular-metrics:3.2.1

How reproducible:
* Nothing special; it appears to happen about 30 minutes after deploying the pod.

Actual results:
* Please see the error above and the attachments.

Expected results:
* No errors.
I am going to mark this as for upcomingRelease. When the disk speed is inadequate for writes to keep up, the only things we can really ask users to do are:

1) Run another Cassandra instance. This spreads the writes across two different volumes, which should help alleviate a single bottleneck.

2) Have an admin change the volume used for commit logs. This is more of a manual affair, but the pod can be configured to write commit logs to a host volume or pod volume instead of the main data volume (see the sketch after this list). The origin docs are available here: https://github.com/openshift/origin-metrics/blob/master/docs/cassandra_advance.adoc#moving-the-commit-logs-to-another-volume and should soon make it into the 3.4 docs as well.
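For reference, a minimal sketch of what option 2 amounts to. The volume name, mount path, and commitlog_directory wiring below are illustrative assumptions; the linked origin-metrics document is the authoritative procedure.

# Hypothetical sketch: attach a dedicated volume for commit logs so
# commit-log writes stop competing with the main data volume. An
# emptyDir is shown for brevity; a host or persistent volume is
# preferable for commit-log durability.
oc volume dc/hawkular-cassandra-1 --add \
  --name=cassandra-commitlog \
  --type=emptyDir \
  --mount-path=/cassandra_commitlog

# Cassandra must then point its commit log directory at that mount,
# e.g. via the commitlog_directory setting in cassandra.yaml (how the
# image exposes this setting is image-specific; see the linked doc):
#   commitlog_directory: /cassandra_commitlog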
The above solution has been verified on Origin; I'll change the status once it's been tested in OCP 3.4.
The 'move commit logs to another volume (not the Cassandra PV)' test has passed in OCP 3.4 with Metrics 3.4.0. Can I set the status to Verified now? Thanks.

[root@host-8-174-32 ~]# openshift version
openshift v3.4.0.23+24b1a58
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0
If it should be moved to verified, please do so.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0066
Please let me know if this case is a match for this bug. Here are the tpstats:

# oc rsh hawkular-cassandra-1-jo4rm
sh-4.2$ nodetool tpstats
Picked up JAVA_TOOL_OPTIONS: -Duser.home=/home/jboss -Duser.name=jboss
Pool Name                    Active   Pending      Completed   Blocked  All time blocked
MutationStage                     0         0     1957019343         0                 0
ReadStage                         0         0         537200         0                 0
RequestResponseStage              0         0      922180704         0                 0
ReadRepairStage                   0         0              0         0                 0
CounterMutationStage              0         0              0         0                 0
HintedHandoff                     0         0              8         0                 0
MiscStage                         0         0              0         0                 0
CompactionExecutor                0         0         676888         0                 0
MemtableReclaimMemory             0         0          16868         0                 0
PendingRangeCalculator            0         0              4         0                 0
GossipStage                       0         0        3268001         0                 0
MigrationStage                    0         0              0         0                 0
MemtablePostFlush                 0         0          35074         0                 0
ValidationExecutor                0         0              0         0                 0
Sampler                           0         0              0         0                 0
MemtableFlushWriter               0         0          16868         0                 0
InternalResponseStage             0         0            681         0                 0
AntiEntropyStage                  0         0              0         0                 0
CacheCleanupExecutor              0         0              0         0                 0
Native-Transport-Requests         0         0     3128841115         0           7299490

Message type           Dropped
READ                         0
RANGE_SLICE                  0
_TRACE                       0
MUTATION                   719
COUNTER_MUTATION             0
REQUEST_RESPONSE             0
PAGED_RANGE                  0
READ_REPAIR                  0
sh-4.2$
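Those counters look consistent with the write-pressure signature in this bug: 719 dropped MUTATION messages and 7299490 all-time blocked Native-Transport-Requests both suggest the node has not been keeping up with writes. A quick way to watch just those rows over time (pod name taken from the session above; the grep pattern is only an illustrative filter):

# Watch the write-pressure counters from the tpstats output above.
oc rsh hawkular-cassandra-1-jo4rm nodetool tpstats \
  | grep -E 'MutationStage|Native-Transport-Requests|^MUTATION'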