Bug 1198452 - LIRS eviction strategy fixes
Summary: LIRS eviction strategy fixes
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: JBoss Data Grid 6
Classification: JBoss
Component: Infinispan
Version: 6.4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ER1
Target Release: 6.4.1
Assignee: Tristan Tarrant
QA Contact: Martin Gencur
URL:
Whiteboard:
Depends On: 1197847
Blocks:
 
Reported: 2015-03-04 07:55 UTC by Sebastian Łaskawiec
Modified: 2015-04-02 12:46 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
In previous versions of Red Hat JBoss Data Grid, LIRS eviction could cause some elements to be evicted prematurely, resulting in data not being passivated to the cache store. The eviction policies have been updated, and the data container now ensures atomicity when passivating and activating entries to address this issue.
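The affected setup, as a minimal sketch assuming the Infinispan 6.x programmatic configuration API shipped with JDG (the store location is hypothetical): a bounded in-memory container with LIRS eviction combined with a passivation-enabled store, which is the combination in which prematurely evicted entries could be lost before being written to the store.

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.eviction.EvictionStrategy;
import org.infinispan.manager.DefaultCacheManager;

public class LirsPassivationExample {
    public static void main(String[] args) {
        // Bound the in-memory container with LIRS eviction; evicted entries
        // should be passivated (written) to the file store rather than lost.
        Configuration cfg = new ConfigurationBuilder()
            .eviction().strategy(EvictionStrategy.LIRS).maxEntries(1000)
            .persistence().passivation(true)
                .addSingleFileStore().location("/tmp/store") // hypothetical path
            .build();
        DefaultCacheManager cm = new DefaultCacheManager(cfg);
        cm.getCache().put("key", "value");
        cm.stop();
    }
}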
Clone Of:
Environment:
Last Closed: 2015-04-02 12:13:53 UTC
Type: Bug
Embargoed:




Links
System ID: Red Hat Issue Tracker ISPN-3023
Priority: Minor
Status: Resolved
Summary: Re-implement BoundedConcurrentHashMap using CHMv8 designs
Last Updated: 2017-11-27 11:23:40 UTC

Description Sebastian Łaskawiec 2015-03-04 07:55:58 UTC
Implementation of https://bugzilla.redhat.com/show_bug.cgi?id=1197847 for 6.4.1

Comment 1 Sebastian Łaskawiec 2015-03-04 07:56:54 UTC
PR: https://github.com/infinispan/jdg/pull/536

Comment 3 Martin Gencur 2015-03-04 12:25:28 UTC
Hi, I don't know whether it is a good idea to put a fix with 10000+ lines of code, which re-implements the whole eviction logic, into a micro release.

Comment 4 Vojtech Juranek 2015-03-13 08:59:09 UTC
Hi,
the behaviour of BoundedEquivalentConcurrentHashMapV8StressTest seems a little bit weird. With 1M entries the test runs smoothly, but with 10M entries and an eviction strategy enabled it keeps filling the map forever (forever == 45 min on my machine, at which point I gave up). I.e. with 10M entries or more, BoundedEquivalentConcurrentHashMapV8 seems unusable. Below is part of the stack trace.

Will, could you please comment on that? Is this expected, or do you have any hint where the problem could be? (I had enough memory available; the process consumed 2 cores.)

Thanks



Thread 4182: (state = BLOCKED)
 - org.infinispan.commons.util.concurrent.jdk8backported.BoundedEquivalentConcurrentHashMapV8$LRUEvictionPolicy.createNewEntry(java.lang.Object, int, org.infinispan.commons.util.concurrent.jdk8backported.BoundedEquivalentConcurrentHashMapV8$Node, java.lang.Object, org.infinispan.commons.util.concurrent.jdk8backported.BoundedEquivalentConcurrentHashMapV8$EvictionEntry) @bci=28, line=519 (Compiled frame)
 - org.infinispan.commons.util.concurrent.jdk8backported.BoundedEquivalentConcurrentHashMapV8.putVal(java.lang.Object, java.lang.Object, boolean) @bci=94, line=2406 (Compiled frame)
 - org.infinispan.commons.util.concurrent.jdk8backported.BoundedEquivalentConcurrentHashMapV8.put(java.lang.Object, java.lang.Object) @bci=4, line=2393 (Compiled frame)
 - org.infinispan.commons.util.concurrent.jdk8backported.BoundedEquivalentConcurrentHashMapV8StressTest.testRemovePerformance(int, java.util.Map, java.lang.String) @bci=74, line=35 (Compiled frame)
 - org.infinispan.commons.util.concurrent.jdk8backported.BoundedEquivalentConcurrentHashMapV8StressTest.testLRURemovePerformance() @bci=33, line=82 (Interpreted frame)
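
For reference, the timing pattern the test presumably follows is roughly the sketch below. This is a hypothetical stand-in using a plain ConcurrentHashMap, since the exact constructor of BoundedEquivalentConcurrentHashMapV8 is not shown here: fill the map with N entries, then remove them, timing each phase.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PutRemoveTiming {
    public static void main(String[] args) {
        int count = 10_000_000; // the 10M case that appeared to hang
        Map<Integer, Integer> map = new ConcurrentHashMap<>(); // stand-in for the bounded map
        long start = System.nanoTime();
        for (int i = 0; i < count; i++) {
            map.put(i, i); // with a bounded map, eviction kicks in once the cap is hit
        }
        System.out.printf("put phase: %d ms%n", (System.nanoTime() - start) / 1_000_000);
        start = System.nanoTime();
        for (int i = 0; i < count; i++) {
            map.remove(i);
        }
        System.out.printf("remove phase: %d ms%n", (System.nanoTime() - start) / 1_000_000);
    }
}

A run that stalls in the put phase while CPU is available but the heap is small points at GC pressure rather than the map itself.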

Comment 5 William Burns 2015-03-13 12:40:42 UTC
Vojtech,

Just to clarify, I am guessing you meant the LRU eviction strategy (looking at the stack trace, it seems to be)? I am guessing this also affects LIRS?

Also, do you happen to have the entire JVM stack trace? The snippet here seems fine from what I can see.

Comment 6 William Burns 2015-03-13 12:49:27 UTC
Also, it seems you are using BoundedEquivalentConcurrentHashMapV8StressTest to test this? Is that correct? I have run it quite a few times with 10M entries and haven't had an issue yet; the full stack trace should help immensely.

Comment 7 Vojtech Juranek 2015-03-13 14:05:36 UTC
Sorry for the false alarm; it was caused by GC and a misconfiguration of my Java heap (I set up a bigger heap for Maven itself instead of for the Surefire forked process). Now everything runs smoothly and the new implementation is noticeably faster than the old one.
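
For anyone hitting the same misconfiguration: MAVEN_OPTS only sizes the JVM that runs Maven itself, while Surefire forks a separate JVM for the tests, whose heap is set through the plugin's argLine parameter. A minimal sketch (the heap size is just an example):

# Sizes only the Maven JVM, not the forked test JVM:
export MAVEN_OPTS="-Xmx8g"

# Sizes the Surefire forked JVM that actually runs the tests:
mvn test -DargLine="-Xmx8g"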

