Bug 1158839
| Summary: | Clustered cache with FileStore (shared=false) is inconsistent after restarting one node if entries are deleted during restart | | |
|---|---|---|---|
| Product: | [JBoss] JBoss Data Grid 6 | Reporter: | wfink |
| Component: | Infinispan | Assignee: | William Burns <wburns> |
| Status: | CLOSED UPSTREAM | QA Contact: | Martin Gencur <mgencur> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 6.3.1 | CC: | chuffman, jdg-bugs, jpallich, ttarrant |
| Target Milestone: | --- | | |
| Target Release: | 6.4.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Known Issue |
| Doc Text: | In Red Hat JBoss Data Grid, a node does not automatically purge entries from its local cache store when it restarts. As a result, the administrator starting the node must manually change the node configuration so that the cache store is purged at startup (see the configuration sketch after this table). If the configuration is not changed, the cache may become inconsistent: entries removed while the node was down can appear to be present. This is a known issue in JBoss Data Grid 6.4; no workaround is currently available. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Clones: | 1176021 | | |
| Last Closed: | 2025-02-10 03:43:26 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1176021 | | |
| Bug Blocks: | | | |
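The Doc Text above refers to configuring the cache store to be purged at node startup. A minimal programmatic sketch of such a configuration, assuming Infinispan's embedded `ConfigurationBuilder` API from the 6.x line; the store location is an example path, not from the report:

```java
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

// Clustered cache backed by a non-shared single-file store.
// purgeOnStartup(true) clears the local store when the node starts,
// so stale entries cannot be resurrected after a restart.
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.clustering().cacheMode(CacheMode.REPL_SYNC)
       .persistence()
          .addSingleFileStore()
             .location("/var/lib/infinispan/store")  // example path
             .shared(false)
             .preload(true)
             .purgeOnStartup(true);
Configuration cfg = builder.build();
```

The same effect can be had declaratively by setting the equivalent purge attribute on the store element of the cache configuration.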
Description of problem:

If a cache instance with a cache store is down and the cache is modified before the instance comes back and rejoins the cluster, the cache can become inconsistent. If entries are deleted during the downtime:

1. the FileStore containing the stale objects is loaded first (with preload=true)
2. the local entries are updated with new and changed objects from the cluster
3. entries removed from the cluster are never seen and therefore never deleted

After the sync completes, this instance (and only this instance) holds the stale objects. From a consistency and performance perspective, the FileStore should be purged on cluster join by default in this case.

How reproducible:

Steps to Reproduce (see the reproduction sketch below):
1. Start a cache with a file store on more than one cluster node
2. Add entries
3. Stop one instance
4. Remove/add/change entries
5. Restart the stopped instance

Actual results:
The added/changed entries are correct after the sync, but the removed entries are still present on the restarted instance.

Expected results:
Removed entries are not available on any instance.
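For illustration, a self-contained embedded-mode sketch of the steps above, assuming the Infinispan 6.x embedded API; the class name, store directories, and single-JVM cluster setup are hypothetical (a real reproduction would run the two nodes in separate JVMs):

```java
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class StaleEntryRepro {

    // One clustered node with a non-shared file store; purgeOnStartup is left
    // at its default (false), which is what allows stale entries to survive.
    static EmbeddedCacheManager startNode(String storeDir) {
        ConfigurationBuilder c = new ConfigurationBuilder();
        c.clustering().cacheMode(CacheMode.REPL_SYNC)
         .persistence()
            .addSingleFileStore()
               .location(storeDir)
               .shared(false)
               .preload(true);   // stale data is loaded before state transfer
        EmbeddedCacheManager cm = new DefaultCacheManager(
              GlobalConfigurationBuilder.defaultClusteredBuilder().build(),
              c.build());
        cm.getCache();           // start the default cache so the node joins
        return cm;
    }

    public static void main(String[] args) {
        EmbeddedCacheManager node1 = startNode("store1");   // step 1
        EmbeddedCacheManager node2 = startNode("store2");

        node1.getCache().put("k", "v");   // step 2: add an entry
        node2.stop();                     // step 3: stop one instance
        node1.getCache().remove("k");     // step 4: remove it during the downtime

        node2 = startNode("store2");      // step 5: restart the instance
        // Actual result: "v" is printed because node2 resurrects the entry
        // from its file store; the expected result would be null.
        System.out.println("node2 sees: " + node2.getCache().get("k"));

        node2.stop();
        node1.stop();
    }
}
```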