Description of problem:
C++ HotRod client: RemoteCache.clear() throws an exception when the cache holds more than 1M entries.

Version-Release number of selected component (if applicable): 6.3.0

Testing Environment: RHEL 6.4 64-bit, 2 VMs with 32G memory each. JDG 6.3, 2-node cluster using a replicated cache.

1. After adding 1M entries to the JDG cluster and waiting 5 minutes, RemoteCache.clear() failed with:
io.netty.handler.codec.DecoderException: org.infinispan.server.hotrod.HotRodException: org.infinispan.util.concurrent.TimeoutException: Node 17node1/clustered timed out
Result: the data was not cleared.

2. After adding 1M entries to the JDG cluster and waiting 5 minutes, RemoteCache.clear() succeeded; total time was 31.000176 seconds.

3. After adding 3M entries to the JDG cluster and waiting 5 minutes, RemoteCache.clear() failed with:
io.netty.handler.codec.DecoderException: org.infinispan.server.hotrod.HotRodException: org.infinispan.util.concurrent.TimeoutException: Unable to acquire lock after [15 seconds] on key [[B@4191720e] for requestor [Thread[HotRodServerWorker-88,5,main]]! Lock held by [Thread[HotRodServerWorker-87,5,main]]
Result: the data was not cleared.

4. After adding 3M entries to the JDG cluster and waiting 5 minutes, RemoteCache.clear() failed with:
io.netty.handler.codec.DecoderException: org.infinispan.server.hotrod.HotRodException: org.infinispan.util.concurrent.TimeoutException: Node 17node1/clustered timed out
Checked the cache data afterwards; the total number of entries was 0.
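The "[15 seconds]" in the lock error from case 3 matches the default lock acquisition timeout, so one avenue worth checking is the cache's <locking> configuration. A minimal sketch, assuming the stock clustered.xml schema; the 60000 ms value is an illustrative assumption, not a verified fix:

<replicated-cache name="cmbCache" mode="SYNC" start="EAGER">
    <!-- acquire-timeout is in milliseconds; 60000 here is only an example value -->
    <locking isolation="READ_COMMITTED" acquire-timeout="60000" concurrency-level="1000" striping="false"/>
</replicated-cache>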
Created attachment 953015 [details]
CPP client source code

The customer is using the attached C++ client source code to add data to the JDG cache and then call the clear() method to clear it.
Correction: the data sizes should be 10 million and 30 million entries, sorry.
Created attachment 953019 [details]
JDG server configuration file

Please find the cmbCache cache in the urn:infinispan:server:core:6.1 subsystem.
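For context, the relevant portion of the attached file should look roughly like this (a sketch based on the stock clustered.xml layout; everything other than the cmbCache line itself is an assumption):

<subsystem xmlns="urn:infinispan:server:core:6.1" default-cache-container="clustered">
    <cache-container name="clustered" default-cache="default">
        <transport executor="infinispan-transport" lock-timeout="60000"/>
        <!-- ... other caches ... -->
        <replicated-cache name="cmbCache" mode="SYNC" start="EAGER"/>
    </cache-container>
</subsystem>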
Reproduce steps:
1. Prepare 2 servers, each with 32G of memory, running RHEL 6.4 64-bit and JDG 6.3.0.
2. Change clustered.xml on server1 to add a cache named cmbCache:
<replicated-cache name="cmbCache" mode="SYNC" start="EAGER"/>
3. Change clustered.xml on server2 to add the same cache:
<replicated-cache name="cmbCache" mode="SYNC" start="EAGER"/>
4. Change JAVA_OPTS in clustered.conf on server1: -Xms30720m -Xmx30720m
5. Change JAVA_OPTS in clustered.conf on server2: -Xms30720m -Xmx30720m
6. Start node1 on server1 with the following command:
clustered.sh -Djboss.node.name=node1 -Djboss.bind.address=192.168.188.130 -Djboss.bind.address.management=192.168.188.130 -Djboss.socket.binding.port-offset=0
7. Start node2 on server2 with the following command:
clustered.sh -Djboss.node.name=node2 -Djboss.bind.address=192.168.188.131 -Djboss.bind.address.management=192.168.188.131 -Djboss.socket.binding.port-offset=0
8. Run the CPP client to add 30 million entries to cmbCache via node1; the data is replicated to node2 as well.
9. Run the CPP client to clear the data; it will show an exception (see the sketch after these steps). The exception is not thrown every time, but very often.
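A minimal sketch of steps 8-9, assuming the standard infinispan-hotrod-cpp client API (the header names, the HotRodClientException type, and the key/value scheme are assumptions; the customer's actual client is in attachment 953015):

#include <infinispan/hotrod/ConfigurationBuilder.h>
#include <infinispan/hotrod/RemoteCacheManager.h>
#include <infinispan/hotrod/RemoteCache.h>
#include <infinispan/hotrod/exceptions.h>
#include <iostream>
#include <string>

using namespace infinispan::hotrod;

int main() {
    // Connect to node1; the server replicates writes to node2.
    ConfigurationBuilder builder;
    builder.addServer().host("192.168.188.130").port(11222);
    RemoteCacheManager manager(builder.build(), false);
    manager.start();

    RemoteCache<std::string, std::string> cache =
        manager.getCache<std::string, std::string>("cmbCache");

    // Step 8: populate the cache (30 million entries in the reproducer).
    for (long i = 0; i < 30000000; ++i) {
        cache.put("key-" + std::to_string(i), "value-" + std::to_string(i));
    }

    // Step 9: clear the cache; this is where the server-side
    // TimeoutException surfaces as a client-side exception.
    try {
        cache.clear();
    } catch (const HotRodClientException& e) {
        std::cerr << "clear() failed: " << e.what() << std::endl;
    }

    manager.stop();
    return 0;
}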