With the EAP 6.1.0.ER2 DIST cache, at the end of the test, while the whole cluster was shutting down, we hit https://bugzilla.redhat.com/show_bug.cgi?id=919397, followed by this exception:

13:34:02,053 WARN [org.infinispan.topology.CacheTopologyControlCommand] (OOB-8,shared=udp) ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=default-host/clusterbench, type=REBALANCE_START, sender=perf19/web, joinInfo=null, topologyId=35, currentCH=DefaultConsistentHash{numSegments=80, numOwners=2, members=[perf20/web, perf21/web], owners={0: 0, 1: 1, 2: 0, 3: 0, 4: 1, 5: 0, 6: 0, 7: 1, 8: 0, 9: 0, 10: 0, 11: 1, 12: 0, 13: 0, 14: 1, 15: 0, 16: 0, 17: 1, 18: 0, 19: 0, 20: 1, 21: 0, 22: 1, 23: 0, 24: 0, 25: 0 1, 26: 0, 27: 0, 28: 0 1, 29: 0, 30: 0, 31: 0 1, 32: 0, 33: 1 0, 34: 1 0, 35: 0 1, 36: 0 1, 37: 1 0, 38: 1 0, 39: 1 0, 40: 1, 41: 1, 42: 0, 43: 0, 44: 0, 45: 0, 46: 0, 47: 1, 48: 1, 49: 1 0, 50: 1, 51: 1, 52: 0 1, 53: 0 1, 54: 0 1, 55: 0 1, 56: 0 1, 57: 0 1, 58: 0 1, 59: 1, 60: 0 1, 61: 1 0, 62: 1 0, 63: 1, 64: 1, 65: 1, 66: 1, 67: 1, 68: 1, 69: 1, 70: 1, 71: 1, 72: 1, 73: 1 0, 74: 1 0, 75: 1 0, 76: 1 0, 77: 1 0, 78: 1, 79: 1}, pendingCH=DefaultConsistentHash{numSegments=80, numOwners=2, members=[perf20/web, perf21/web], owners={0: 0 1, 1: 1 0, 2: 0 1, 3: 0 1, 4: 1 0, 5: 0 1, 6: 0 1, 7: 1 0, 8: 0 1, 9: 0 1, 10: 0 1, 11: 1 0, 12: 0 1, 13: 0 1, 14: 1 0, 15: 0 1, 16: 0 1, 17: 1 0, 18: 0 1, 19: 0 1, 20: 1 0, 21: 0 1, 22: 1 0, 23: 0 1, 24: 0 1, 25: 0 1, 26: 0 1, 27: 0 1, 28: 0 1, 29: 0 1, 30: 0 1, 31: 0 1, 32: 0 1, 33: 1 0, 34: 1 0, 35: 0 1, 36: 0 1, 37: 1 0, 38: 1 0, 39: 1 0, 40: 1 0, 41: 1 0, 42: 0 1, 43: 0 1, 44: 0 1, 45: 0 1, 46: 0 1, 47: 1 0, 48: 1 0, 49: 1 0, 50: 1 0, 51: 1 0, 52: 0 1, 53: 0 1, 54: 0 1, 55: 0 1, 56: 0 1, 57: 0 1, 58: 0 1, 59: 1 0, 60: 0 1, 61: 1 0, 62: 1 0, 63: 1 0, 64: 1 0, 65: 1 0, 66: 1 0, 67: 1 0, 68: 1 0, 69: 1 0, 70: 1 0, 71: 1 0, 72: 1 0, 73: 1 0, 74: 1 0, 75: 1 0, 76: 1 0, 77: 1 0, 78: 1 0, 79: 1 0}, throwable=null, viewId=12}:
java.lang.IllegalStateException: Cannot set a topology id (35) that is lower than the current one (37)
    at org.infinispan.statetransfer.StateTransferLockImpl.notifyTransactionDataReceived(StateTransferLockImpl.java:74)
    at org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:371)
    at org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:194)
    at org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:60)
    at org.infinispan.statetransfer.StateTransferManagerImpl$1.rebalance(StateTransferManagerImpl.java:125)
    at org.infinispan.topology.LocalTopologyManagerImpl.handleRebalance(LocalTopologyManagerImpl.java:230)
    at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:168)
    at org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:137)
    at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:253)
    at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220)
    at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
    at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
    at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
    at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598)
    at org.jgroups.blocks.mux.MuxUpHandler.up(MuxUpHandler.java:130)
    at org.jgroups.JChannel.up(JChannel.java:707)
    at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
    at org.jgroups.protocols.RSVP.up(RSVP.java:172)
    at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
    at org.jgroups.protocols.FlowControl.up(FlowControl.java:400)
    at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
    at org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
    at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
    at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:453)
    at org.jgroups.protocols.pbcast.NAKACK2.handleMessage(NAKACK2.java:721)
    at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:574)
    at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:143)
    at org.jgroups.protocols.FD.up(FD.java:253)
    at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
    at org.jgroups.protocols.MERGE3.up(MERGE3.java:290)
    at org.jgroups.protocols.Discovery.up(Discovery.java:359)
    at org.jgroups.protocols.TP$ProtocolAdapter.up(TP.java:2616)
    at org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
    at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825)
    at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1798)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_38]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_38]
    at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_38]

See the server log here: https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EAP6/view/EAP6-Clustering/view/EAP6-Failover/job/eap-6x-failover-http-session-jvmkill-dist-async/35/artifact/report/config/jboss-perf21/server.log
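The message itself explains the failure mode: by the time the delayed REBALANCE_START for topology 35 (sent by perf19/web, which was itself going down) arrived, this node had already installed topology 37, and Infinispan refuses to move a cache topology backwards. A minimal sketch of that monotonicity check, simplified from what the log shows rather than copied from the Infinispan sources:

// Simplified illustration (not the actual Infinispan code) of the
// monotonic topology-id invariant behind the warning above: a node only
// ever moves its cache topology forward, so a delayed REBALANCE_START
// carrying an older id is rejected.
public class TopologyIdGuard {

    private int currentTopologyId = -1;

    // Called for every topology update / rebalance start the node receives.
    public synchronized void onTopologyUpdate(int newTopologyId) {
        if (newTopologyId < currentTopologyId) {
            // This is the condition behind the logged IllegalStateException.
            throw new IllegalStateException("Cannot set a topology id (" + newTopologyId
                    + ") that is lower than the current one (" + currentTopologyId + ")");
        }
        currentTopologyId = newTopologyId;
    }

    public static void main(String[] args) {
        TopologyIdGuard guard = new TopologyIdGuard();
        guard.onTopologyUpdate(37); // the node has already installed topology 37
        guard.onTopologyUpdate(35); // stale REBALANCE_START from the dying coordinator -> throws
    }
}

Since the stale command is merely rejected and logged at WARN level, this looks like shutdown-time noise rather than a correctness problem, which is consistent with the later updates on this issue.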
Unfortunately, graceful cluster shutdown is not yet supported by Infinispan, and thus not by the AS either. See https://issues.jboss.org/browse/ISPN-1239
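Until that is resolved, the usual way to avoid a stale REBALANCE_START is to take the cluster down one node at a time and let the remaining members finish rehashing between stops. A rough embedded-mode sketch of the idea, using DistributionManager#isRehashInProgress() as the "rebalance settled" signal; the cache name and the manager list are illustrative, and this is of course not how the test harness stops the EAP servers:

// A rough sketch of a sequential cluster shutdown for embedded Infinispan:
// stop one node at a time and let the survivors finish rehashing before the
// next stop. CACHE_NAME and the manager list are hypothetical.
import java.util.List;

import org.infinispan.Cache;
import org.infinispan.manager.EmbeddedCacheManager;

public class SequentialShutdown {

    static final String CACHE_NAME = "clusterbench"; // hypothetical cache name

    static void shutDownCluster(List<EmbeddedCacheManager> nodeManagers) throws InterruptedException {
        for (EmbeddedCacheManager manager : nodeManagers) {
            // Stopping the manager makes the node leave the JGroups view,
            // which triggers a rebalance on the remaining members.
            manager.stop();
            // Let every still-running node finish that rebalance, so no node
            // later receives a REBALANCE_START for a topology it has passed.
            for (EmbeddedCacheManager survivor : nodeManagers) {
                if (survivor.getStatus().isTerminated()) {
                    continue; // already stopped in an earlier iteration
                }
                Cache<?, ?> cache = survivor.getCache(CACHE_NAME);
                while (cache.getAdvancedCache().getDistributionManager().isRehashInProgress()) {
                    Thread.sleep(100);
                }
            }
        }
    }
}

In a managed test like this one, the equivalent would simply be to stagger the server shutdowns instead of issuing them all at once.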
Update from the EAP 6.1.0.ER8 testing cycle: this issue was not seen during this cycle. Either it was fixed unintentionally (perhaps as a by-product of the Infinispan upgrade to 5.2.6.Final), or it has simply become rarer. Either way, we decided not to close this issue.
Does not appear to be an issue anymore.
Not seen during 6.2.0 either.