Bug 1158839 - Clustered cache with FileStore (shared=false) is inconsistent after restarting one node if entries are deleted during restart
Summary: Clustered cache with FileStore (shared=false) is inconsistent after restarting one node if entries are deleted during restart
Alias: None
Product: JBoss Data Grid 6
Classification: JBoss
Component: Infinispan
Version: 6.3.1
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 6.4.0
Assignee: William Burns
QA Contact: Martin Gencur
Depends On: 1176021
Reported: 2014-10-30 10:52 UTC by wfink
Modified: 2018-12-09 19:01 UTC
CC: 5 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
In Red Hat JBoss Data Grid, when a node restarts, it does not automatically purge entries from its local cache store. As a result, the administrator starting the node must manually change the node configuration to set the cache store to be purged when the node starts. If the configuration is not changed, the cache may become inconsistent: entries that were removed from the cluster can appear to be present again. This is a known issue in JBoss Data Grid 6.4; configuring the store to purge on startup is the only available mitigation.
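As an illustration of the mitigation described above, a file store can be configured to purge on startup in the server's cache configuration. This is a hedged sketch only: the cache name, path, and attribute names are assumptions based on the JBoss Data Grid 6.x server schema and should be verified against the schema for your exact version.

```xml
<!-- Sketch (hypothetical cache name/path): an unshared file store that is
     purged when the node starts, so a restarted node cannot resurrect
     entries that were removed from the cluster while it was down. -->
<distributed-cache name="myCache" mode="SYNC">
    <file-store path="myCache-store"
                passivation="false"
                purge="true"
                preload="false"/>
</distributed-cache>
```

Note that `purge="true"` also makes `preload` moot, since the store is emptied before anything could be preloaded.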
Clone Of:
: 1176021 (view as bug list)
Last Closed:
Type: Bug

Attachments

System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker ISPN-4908 0 Major Open Clustered cache with unshared store is inconsistent after restarting one node if entries are deleted during restart 2018-04-19 03:01:17 UTC

Description wfink 2014-10-30 10:52:59 UTC
Description of problem:

If a node with a cache store is down and the cache is modified before the node comes back and rejoins the cluster, the cache can become inconsistent.

If entries are deleted during the downtime:

    the FileStore's stale entries are loaded first if preload=true
    the local entries are updated with new and changed objects from the cluster
    entries removed from the cluster are not seen and therefore not deleted
    After the state transfer completes, this instance (and only this instance) will hold stale objects.

From a consistency and performance perspective, the FileStore should be purged on cluster join by default in this case.
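Until purging on join is the default, the same effect can be requested explicitly via the embedded configuration API. The following is a minimal sketch against the Infinispan 6.x `ConfigurationBuilder` API (the cache mode and store location are assumptions for illustration), not a definitive fix for this bug:

```java
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

// Sketch only: builds a distributed-cache configuration whose
// single-file store is cleared when the node starts, so a restarted
// node cannot keep entries that were removed while it was down.
public class PurgeOnStartupConfig {
    public static Configuration build() {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.clustering().cacheMode(CacheMode.DIST_SYNC)
               .persistence()
                   .addSingleFileStore()
                       .location("/var/lib/myCache-store") // hypothetical path
                       .preload(false)        // do not load stale entries first
                       .purgeOnStartup(true); // clear the local store on (re)start
        return builder.build();
    }
}
```

With `purgeOnStartup(true)`, the restarted node repopulates its store from the cluster during state transfer instead of from stale local data, trading a longer warm-up for consistency.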

How reproducible:

Steps to Reproduce:
1. Start a cache with a FileStore and more than one cluster node
2. Add entries
3. Stop one instance
4. Remove/add/change entries
5. Restart the stopped instance

Actual results:
The added and changed entries are correct after the sync, but the removed entries are still available on the restarted instance.

Expected results:
Removed entries are not available on any instance.
