Bug 1119379 - Do not run scheduled weekly anti-entropy repair on storage cluster
Summary: Do not run scheduled weekly anti-entropy repair on storage cluster
Keywords:
Status: CLOSED EOL
Alias: None
Product: JBoss Operations Network
Classification: JBoss
Component: Core Server, Storage Node
Version: JON 3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: JON 4.0.0
Assignee: RHQ Project Maintainer
QA Contact: Mike Foley
URL:
Whiteboard:
Depends On: 1119378
Blocks:
 
Reported: 2014-07-14 16:05 UTC by John Sanda
Modified: 2019-07-29 14:56 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1119378
Environment:
Last Closed: 2019-07-29 14:56:15 UTC
Type: Bug
Embargoed:



Description John Sanda 2014-07-14 16:05:01 UTC
+++ This bug was initially created as a clone of Bug #1119378 +++

Description of problem:
From an email I sent to the rhq-devel list:

"Repair is process in Cassandra in which data is made consistent across replicas. There are two kinds - read repair and anti-entropy repair. The former happens automatically in the background on queries. The latter is done via JMX. Although nodes can remain operational while anti-entropy repair, it is very resource intensive and can take a long time to run. It can easily be on the order of hours. The Cassandra docs recommend running regularly, scheduled anti-entropy within gc_grace_seconds, which is the time to wait before Cassandra garbage collections tombstones (i.e., deletion markers). The reason for running it within gc_grace_seconds is to ensure deletes get propagated and to prevent them from being undone. gc_grace_seconds is configured per keyspace and defaults to 10 days. We set gc_grace_seconds to 8 days, and we run anti-entropy repair weekly in a Quartz job named StorageClusterReadRepairJob.

After some investigation I am now of the opinion that we do not need to run a scheduled repair job. As long as replicas are up, data will be consistent between them. If we have a cluster where the nodes never go down, then there is no need to run anti-entropy repair with respect to data consistency. Of course nodes do go down, and for that case Cassandra has another mechanism, hinted handoff, that comes into play. When a target replica is down, the coordinator node (the one receiving the request) stores a hint of the mutation intended for that replica. When the replica comes back up, it receives the hints, making it consistent with the other replicas.

There is a maximum amount of time a node can be down during which other nodes will store hints for it. This is defined by the max_hint_window_in_ms property in cassandra.yaml, and it defaults to 3 hours. If a node is down longer than that, other nodes assume it is dead until it comes back up. So if we do not run scheduled repair and a node is down for more than max_hint_window_in_ms, then we need to run a full repair on that node when it comes back up to account for any dropped hints.
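[As a rough illustration of what a maintenance task for this setting might look like, here is a sketch that rewrites max_hint_window_in_ms in cassandra.yaml with SnakeYAML. The file path and new value are assumptions, and a real implementation would need to preserve the file's comments:]

    import java.io.FileReader;
    import java.io.FileWriter;
    import java.util.Map;
    import org.yaml.snakeyaml.Yaml;

    public class HintWindowUpdater {
        @SuppressWarnings("unchecked")
        public static void main(String[] args) throws Exception {
            String confPath = "/opt/rhq/rhq-storage/conf/cassandra.yaml"; // hypothetical path
            Yaml yaml = new Yaml();
            Map<String, Object> conf;
            try (FileReader reader = new FileReader(confPath)) {
                conf = (Map<String, Object>) yaml.load(reader);
            }
            // Default is 3 hours (10,800,000 ms); raise it to 8 hours as an example.
            conf.put("max_hint_window_in_ms", 8 * 60 * 60 * 1000);
            try (FileWriter writer = new FileWriter(confPath)) {
                // Note: dumping a plain map drops the original file's comments and ordering.
                yaml.dump(conf, writer);
            }
            // Cassandra reads cassandra.yaml only at startup, so the node must be restarted.
        }
    }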

As for deletes, I do not think we need to be concerned, for a couple of reasons. First, we are dealing with append-only data, where each column is only ever written once and never updated. Second, we write all data with a TTL. In the event some metric data was deleted on one replica but is still live on another, we know that it has a TTL set and will expire; therefore, we do not need to worry about deletes being undone."
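[To make the TTL point concrete, a sketch of an append-only metric write; the table and column names are hypothetical, not the actual RHQ schema:]

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class TtlWriteExample {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            try {
                Session session = cluster.connect("rhq"); // keyspace name assumed
                // Each column is written exactly once, with a TTL of 7 days (604,800 seconds),
                // so even an "undone" delete would expire on its own.
                session.execute(
                    "INSERT INTO raw_metrics (schedule_id, time, value) " +
                    "VALUES (1001, dateOf(now()), 42.0) USING TTL 604800");
            } finally {
                cluster.close();
            }
        }
    }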

I am not suggesting we will never have a need to run anti-entropy repair. In fact, we do run anti-entropy repair whenever the replication_factor changes as nodes are added to or removed from the cluster. And when we store other data in Cassandra, we may very well need to run scheduled anti-entropy repair on it.

There are a few things, then, that need to be done for this BZ.

1) Get rid of the StorageClusterReadRepairJob Quartz job

2) Add support for updating max_hint_window_in_ms as a cluster-wide maintenance task

3) Run anti-entropy repair on a node that comes back up after being down longer than max_hint_window_in_ms (see the sketch after this list)

4) Reduce gc_grace_seconds

5) Make gc_grace_seconds configurable (maybe?)
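
[A minimal sketch of item 3, assuming JMX access to the storage node. The port, keyspace name, and the forceTableRepair operation (the Cassandra 1.2-era name; newer versions expose forceRepairAsync instead) are assumptions - the repair operation's name and signature vary across Cassandra versions. Hinted handoff covers outages shorter than max_hint_window_in_ms, so repair is only triggered beyond that:]

    import java.util.concurrent.TimeUnit;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class RepairOnRejoin {

        // Run a full repair on a rejoined node if its downtime exceeded the hint window.
        public static void repairIfNeeded(String host, long downtimeMs, long maxHintWindowInMs)
                throws Exception {
            if (downtimeMs <= maxHintWindowInMs) {
                return; // hinted handoff has already brought the node up to date
            }
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":7299/jmxrmi"); // port assumed
            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                ObjectName storageService =
                    new ObjectName("org.apache.cassandra.db:type=StorageService");
                // Repair the whole keyspace; an empty column family list means all tables.
                mbs.invoke(storageService, "forceTableRepair",
                    new Object[] { "rhq", true, new String[0] },
                    new String[] { String.class.getName(), boolean.class.getName(),
                                   String[].class.getName() });
            } finally {
                connector.close();
            }
        }

        public static void main(String[] args) throws Exception {
            // Example: node was down for 4 hours, hint window is 3 hours -> repair runs.
            repairIfNeeded("127.0.0.1", TimeUnit.HOURS.toMillis(4), TimeUnit.HOURS.toMillis(3));
        }
    }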

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 John Sanda 2014-08-29 12:30:40 UTC
Bumping the target release due to time constraints.

Comment 2 Filip Brychta 2019-07-29 14:56:15 UTC
JBoss ON is coming to the end of its product life cycle. For more information regarding this transition, see https://access.redhat.com/articles/3827121.
This bug report/request is being closed. If you feel this issue should not be closed or requires further review, please create a new bug report against the latest supported JBoss ON 3.3 version.

