Bug 1163665
| Summary: | Node can temporarily read removed data when another node joins the cluster, leaves or crashes | | |
|---|---|---|---|
| Product: | [JBoss] JBoss Data Grid 6 | Reporter: | Radim Vansa <rvansa> |
| Component: | Infinispan | Assignee: | Pedro Ruivo <pruivo> |
| Status: | CLOSED UPSTREAM | QA Contact: | Martin Gencur <mgencur> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 6.4.0 | CC: | chuffman, jdg-bugs, jpallich, pzapataf, ttarrant |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Known Issue |
| Doc Text: | In Red Hat JBoss Data Grid, the distribution of entries across the cluster changes when a node joins, leaves, or crashes. During this brief rebalance period, a read on the node that previously owned an entry can return stale data. Once the rebalance completes, further reads return up-to-date data. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available. (See the illustrative sketch following this table.) | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2025-02-10 03:43:33 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
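The behavior described in the Doc Text can be illustrated with Infinispan's embedded API. The following is a minimal sketch, not a deterministic reproducer: whether a stale read is actually observed depends on the timing of the rebalance, and the class, node, and key names here are hypothetical.

```java
import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class StaleReadSketch {
    public static void main(String[] args) throws Exception {
        ConfigurationBuilder cfg = new ConfigurationBuilder();
        cfg.clustering().cacheMode(CacheMode.DIST_SYNC).hash().numOwners(2);

        // Three clustered nodes in one JVM, purely for illustration.
        DefaultCacheManager cm1 = newNode(cfg);
        DefaultCacheManager cm2 = newNode(cfg);
        DefaultCacheManager cm3 = newNode(cfg);

        Cache<String, String> c1 = cm1.getCache();
        Cache<String, String> c2 = cm2.getCache();
        cm3.getCache(); // make sure node 3 joins the cache topology

        c1.put("key", "value");
        c1.remove("key");

        // Simulate a leave/crash: stopping a node triggers a rebalance.
        cm3.stop();

        // While the rebalance is in progress, a read may transiently
        // return the removed value instead of null; once the rebalance
        // completes, reads are up to date again.
        for (int i = 0; i < 100; i++) {
            String v = c2.get("key");
            if (v != null) {
                System.out.println("stale read observed: " + v);
            }
            Thread.sleep(10);
        }

        cm1.stop();
        cm2.stop();
    }

    private static DefaultCacheManager newNode(ConfigurationBuilder cfg) {
        return new DefaultCacheManager(
                GlobalConfigurationBuilder.defaultClusteredBuilder().build(),
                cfg.build());
    }
}
```

The window closes as soon as state transfer finishes, which matches the Doc Text: once the rebalance completes, further reads return up-to-date data.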
Description
Radim Vansa
2014-11-13 09:10:15 UTC
Pedro Ruivo <pruivo> updated the status of jira ISPN-4444 to Coding In Progress
Dan Berindei <dberinde> updated the status of jira ISPN-4444 to Reopened
Pedro Ruivo <pruivo> updated the status of jira ISPN-5021 to Coding In Progress

I think we should split this bug in two and mark this issue as resolved.

This is a tricky one. I'm currently working on it, and I think the solution I have in mind will solve it.

The stale value can be read only for a brief period of time. Since we prefer *eventual* consistency over guaranteed up-to-date reads in crash/join/leave/split-brain scenarios, this issue is not critical to JDG.

The resilience tests had trouble with https://bugzilla.redhat.com/show_bug.cgi?id=1166243.

I still need to test this further, but it seems that partition handling is behaving as expected, since the cache is allowed to become inconsistent and provide stale data within a certain time window (see the configuration sketch below).

This BZ is about reading stale data, not about inconsistent results of writes. Which BZ would you like me to verify? The described test setup is possible.

This product has been discontinued or is no longer tracked in Red Hat Bugzilla.
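For context on the partition handling mentioned above: it is enabled per cache. The following is a minimal configuration sketch assuming the Infinispan 7-era programmatic API; the exact builder methods in the JDG 6.4 product build may differ.

```java
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class PartitionHandlingConfig {
    // Build a distributed cache configuration with partition handling on.
    // Sketch only: method names follow the upstream Infinispan 7 API.
    public static Configuration distSyncWithPartitionHandling() {
        return new ConfigurationBuilder()
                .clustering()
                    .cacheMode(CacheMode.DIST_SYNC)
                    // Minority partitions enter degraded mode and reject
                    // operations instead of serving possibly stale data.
                    .partitionHandling().enabled(true)
                .build();
    }
}
```

Note that even with partition handling enabled, the transient stale-read window described in this BZ remains: partition handling governs split-brain behavior, while the cache is still allowed to serve stale data for a short window while ownership is transferred during a rebalance.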