resilience 4-3-4 REPL mode for JDG 6.1.0.ER9: https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EDG6/view/EDG-REPORTS-RESILIENCE/job/edg-60-resilience-repl-4-3/31/artifact/report/stats-throughput.png
Michal Linhard <mlinhard> made a comment on jira ISPN-2738 working on trace logs
Michal Linhard <mlinhard> made a comment on jira ISPN-2738 client logs: http://www.qa.jboss.com/~mlinhard/test_results/driver0-ISPN-2738.zip server logs: https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EDG6/view/EDG-QE/job/edg-60-experiments-mlinhard/268/artifact/report/serverlogs.zip
Michal Linhard <mlinhard> made a comment on jira ISPN-2738 [~galder.zamarreno]
Michal Linhard <mlinhard> made a comment on jira ISPN-2738 [~galder.zamarreno] or [~dan.berindei] could you please have a look at this? I think this might be connected with the solution to ISPN-2632 we've talked about.
Galder Zamarreño <galder.zamarreno> updated the status of jira ISPN-2738 to Coding In Progress
Galder Zamarreño <galder.zamarreno> made a comment on jira ISPN-2738 The problem does indeed look related to ISPN-2632, and I think it's linked to the removal of the coordination between the address cache and the topology id update. The Hot Rod server sends the new topology id before the address cache has been updated: when a new node joins, the server announces the new topology ID even though the cache does not yet contain the new node, so the client ends up with a new id but the same member list. When the cache is eventually updated with the new node, the topology ID is not increased again, so clients will never talk to it. Here's a snippet from node01.log that proves this:
{code}
12:43:03,137 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-119) Decoded header HotRodHeader{op=GetRequest, version=12, messageId=1974, cacheName=testCache, flag=0, clientIntelligence=3, topologyId=8}
...
12:43:03,229 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-107) Decoded header HotRodHeader{op=GetRequest, version=12, messageId=2626, cacheName=testCache, flag=0, clientIntelligence=3, topologyId=9}
...
12:43:03,753 TRACE [org.infinispan.container.entries.ReadCommittedEntry] (OOB-197,null) Updating entry (key=node02/default removed=false valid=true changed=true created=true loaded=false value=172.18.1.3:11222]
...
node01.log:86873:12:43:03,780 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-119) Decoded header HotRodHeader{op=PutRequest, version=12, messageId=1992, cacheName=testCache, flag=6, clientIntelligence=3, topologyId=9}
{code}
@Dan, this is precisely why the interceptor in HotRodServer was created: to coordinate and make sure that the new topology ID is not sent before the cache has been updated. This is a crucial part of the code I added to deal with resilience testing in the previous testing round.
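To make the ordering concrete, here is a minimal Java sketch of the race described above; the class and method names are hypothetical stand-ins for illustration, not the actual Infinispan code:
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

class TopologyRaceSketch {
    final AtomicInteger topologyId = new AtomicInteger(8);
    // node name -> host:port endpoint, mirroring the Hot Rod address cache
    final Map<String, String> addressCache = new ConcurrentHashMap<>();

    // Step 1: the view-change handler bumps the id first, so any response
    // written at this point already advertises topologyId=9 ...
    void onViewChange() {
        topologyId.incrementAndGet();
    }

    // Step 2: the address cache entry for the joiner arrives later and does
    // not bump the id again, so clients that already stored id=9 never
    // re-fetch the member list and never learn about the new node.
    void onAddressCacheUpdate(String node, String endpoint) {
        addressCache.put(node, endpoint);
    }
}
{code}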
Dan Berindei <dberinde> made a comment on jira ISPN-2738 In my fix for ISPN-2632 I replaced the interceptor with a check that skips the topology update unless all the consistent hash members also exist in the address cache. Unfortunately, I only added the check for distributed caches (see AbstractEncoder1x/AbstractTopologyAwareEncoder1x.writeHashTopologyHeader). The fix is to add the same check on all the code paths that write a topology update.
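The guard itself is simple; here is a hedged Java sketch of the check Dan describes (the helper class and method names are assumptions, only the encoder class names above come from the comment):
{code}
import java.util.List;
import java.util.Map;

class TopologyUpdateGuard {
    // Returns true only when the address cache already knows an endpoint for
    // every consistent-hash member; otherwise the encoder should skip the
    // topology update and let the client keep its current view.
    static boolean canWriteTopologyUpdate(List<String> consistentHashMembers,
                                          Map<String, String> addressCache) {
        return addressCache.keySet().containsAll(consistentHashMembers);
    }
}
{code}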
Dan Berindei <dberinde> updated the status of jira ISPN-2738 to Open
Dan Berindei <dberinde> updated the status of jira ISPN-2738 to Coding In Progress
Dan Berindei <dberinde> made a comment on jira ISPN-2738 Skip the topology update if the cache members aren't all in the address cache. Do the check in AbstractEncoder1x.generateTopologyResponse, so that it works for all topology types (i.e. also for replicated caches). I added a new replicated-mode test, though it still doesn't cover this particular case.
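Placing the check in the shared response-generation path means every topology type gets it for free. A minimal sketch of that shape, assuming illustrative types (the real AbstractEncoder1x.generateTopologyResponse is the actual method; TopologyResponse and the null convention here are assumptions):
{code}
import java.util.List;
import java.util.Map;

class EncoderSketch {
    static final class TopologyResponse {
        final int topologyId;
        final Map<String, String> members;
        TopologyResponse(int topologyId, Map<String, String> members) {
            this.topologyId = topologyId;
            this.members = members;
        }
    }

    // One shared entry point for every topology type, so replicated caches
    // get the same guard as distributed ones.
    TopologyResponse generateTopologyResponse(List<String> cacheMembers,
                                              Map<String, String> addressCache,
                                              int topologyId) {
        // Skip the update entirely while any member is missing from the
        // address cache; the client keeps its old topology id and will get
        // the full update once the cache contains the new node's endpoint.
        if (!addressCache.keySet().containsAll(cacheMembers))
            return null; // null stands in for "no topology change" on the wire
        return new TopologyResponse(topologyId, addressCache);
    }
}
{code}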
Verified for JDG 6.1.0.ER10
This product has been discontinued or is no longer tracked in Red Hat Bugzilla.