Bug 1158559 - C++ HotRod Client: RemoteCache.clear() throws an exception when the data is more than 1M
Status: ASSIGNED
Product: JBoss Data Grid 6
Classification: JBoss
Component: CPP Client
Version: 6.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 6.4.0
Assigned To: Tristan Tarrant
QA Contact: Alan Field
Depends On:
Blocks:
Reported: 2014-10-29 12:06 EDT by Benny
Modified: 2018-09-12 18:33 EDT
CC List: 5 users

See Also:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
In Red Hat JBoss Data Grid, when a cache contains a large number of entries, the clear() operation can take an unexpectedly long time and possibly result in communication timeouts. In this case, the exception is reported to the Hot Rod client. This is a known issue in JBoss Data Grid 6.4, and no workaround is currently available.
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
CPP Client source code (253.76 KB, application/zip)
2014-11-03 02:47 EST, Benny
JDG server configuration file (14.59 KB, application/xml)
2014-11-03 02:58 EST, Benny

Description Benny 2014-10-29 12:06:44 EDT
Description of problem:
C++ HotRod Client: RemoteCache.clear() throws an exception when the data is more than 1M


Version-Release number of selected component (if applicable):
6.3.0

Testing Environment:
RHEL 6.4 64-bit, 2 VMs with 32G of memory.
JDG 6.3, a 2-node cluster using a replicated cache.

1. After adding 1M entries into the JDG cluster, waiting 5 minutes, and running RemoteCache.clear(), it failed.
Failed with error: io.netty.handler.codec.DecoderException: org.infinispan.server.hotrod.HotRodException: org.infinispan.util.concurrent.TimeoutException: Node 17node1/clustered timed out
Result: the data was not cleared!

2. After adding 1M entries into the JDG cluster, waiting 5 minutes, and running RemoteCache.clear(), it succeeded; the total time was 31.000176 seconds.

3. After adding 3M entries into the JDG cluster, waiting 5 minutes, and running RemoteCache.clear(), it failed.
Failed with error: io.netty.handler.codec.DecoderException: org.infinispan.server.hotrod.HotRodException: org.infinispan.util.concurrent.TimeoutException: Unable to acquire lock after [15 seconds] on key [[B@4191720e] for requestor [Thread[HotRodServerWorker-88,5,main]]! Lock held by [Thread[HotRodServerWorker-87,5,main]]
Result: the data was not cleared!

4. After adding 3M entries into the JDG cluster, waiting 5 minutes, and running RemoteCache.clear(), it failed.
Failed with error: io.netty.handler.codec.DecoderException: org.infinispan.server.hotrod.HotRodException: org.infinispan.util.concurrent.TimeoutException: Node 17node1/clustered timed out
Checked the cache data, and the total number of entries was 0. (A client-side sketch of this call is below.)
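
For reference, a minimal client-side sketch of the failing call, assuming the JDG 6.x C++ Hot Rod client API (ConfigurationBuilder, RemoteCacheManager, RemoteCache, HotRodClientException). The server address and cache name come from the reproduce steps below; the default Hot Rod port 11222 and the rest are illustrative assumptions, not the customer's actual code:

#include <chrono>
#include <iostream>
#include <string>

#include <infinispan/hotrod/ConfigurationBuilder.h>
#include <infinispan/hotrod/RemoteCacheManager.h>
#include <infinispan/hotrod/RemoteCache.h>
#include <infinispan/hotrod/exceptions.h>

using namespace infinispan::hotrod;

int main() {
    // Connect to node1 of the cluster (address from the reproduce steps;
    // 11222 is the assumed default Hot Rod port).
    ConfigurationBuilder builder;
    builder.addServer().host("192.168.188.130").port(11222);
    RemoteCacheManager cacheManager(builder.build(), false);
    cacheManager.start();

    RemoteCache<std::string, std::string> cache =
        cacheManager.getCache<std::string, std::string>("cmbCache");

    // Time the clear() call: on a large replicated cache it can run for
    // tens of seconds (31s was observed above) or fail with a server-side
    // TimeoutException that is reported back to the Hot Rod client.
    auto start = std::chrono::steady_clock::now();
    try {
        cache.clear();
        auto secs = std::chrono::duration_cast<std::chrono::seconds>(
            std::chrono::steady_clock::now() - start).count();
        std::cout << "clear() succeeded in " << secs << "s" << std::endl;
    } catch (const HotRodClientException& e) {
        // The TimeoutException raised on the server surfaces here.
        std::cerr << "clear() failed: " << e.what() << std::endl;
    }

    cacheManager.stop();
    return 0;
}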
Comment 2 Benny 2014-11-03 02:47:42 EST
Created attachment 953015 [details]
CPP Client source code

The customer is using this CPP client source code to add data to the cache in JDG and then calls the clear method to clear the data.
Comment 3 Benny 2014-11-03 02:50:13 EST
The data sizes should be 10 million and 30 million entries, sorry.
Comment 4 Benny 2014-11-03 02:58:17 EST
Created attachment 953019 [details]
JDG server configuration file

Please find the cmbCache cache in the urn:infinispan:server:core:6.1 subsystem.
Comment 5 Benny 2014-11-03 03:09:52 EST
Reproduce steps:

1. Prepare 2 servers, each with 32G of memory, running RHEL 6.4 64-bit and JDG 6.3.0.

2. Change clustered.xml on server1 to add a cache named cmbCache:
<replicated-cache name="cmbCache" mode="SYNC" start="EAGER"/>

3. Change clustered.xml on server2 to add a cache named cmbCache:
<replicated-cache name="cmbCache" mode="SYNC" start="EAGER"/>

4. Change JAVA_OPTS in clustered.conf on server1:
-Xms30720m -Xmx30720m

5. Change JAVA_OPTS in clustered.conf on server2:
-Xms30720m -Xmx30720m

6. Start node1 on server1 with the following command:
clustered.sh -Djboss.node.name=node1 -Djboss.bind.address=192.168.188.130 -Djboss.bind.address.management=192.168.188.130 -Djboss.socket.binding.port-offset=0

7. Start node2 on server2 with the following command:
clustered.sh -Djboss.node.name=node2 -Djboss.bind.address=192.168.188.131 -Djboss.bind.address.management=192.168.188.131 -Djboss.socket.binding.port-offset=0

8. Run the CPP client to add 30 million entries into cmbCache by connecting to node1; the data will be replicated to node2 as well.

9. Run the CPP client to clear the data; it will throw an exception. The exception is not thrown every time, but very often. (A rough sketch of such a client is below.)
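
For completeness, a rough sketch of what such a reproducer client might look like. The actual customer code is in attachment 953015; the key/value shapes, the put loop, and the API usage below are illustrative assumptions, consistent with the sketch in the description:

#include <sstream>
#include <string>

#include <infinispan/hotrod/ConfigurationBuilder.h>
#include <infinispan/hotrod/RemoteCacheManager.h>
#include <infinispan/hotrod/RemoteCache.h>

using namespace infinispan::hotrod;

int main() {
    ConfigurationBuilder builder;
    builder.addServer().host("192.168.188.130").port(11222); // node1
    RemoteCacheManager cacheManager(builder.build(), false);
    cacheManager.start();

    RemoteCache<std::string, std::string> cache =
        cacheManager.getCache<std::string, std::string>("cmbCache");

    // Step 8: add 30 million entries through node1; the server-side
    // replicated-cache configuration propagates them to node2.
    for (long i = 0; i < 30000000L; ++i) {
        std::ostringstream key;
        key << "key-" << i;
        cache.put(key.str(), "value");
    }

    // Step 9: clear the cache; on a cache of this size the call
    // intermittently fails with the TimeoutException shown above.
    cache.clear();

    cacheManager.stop();
    return 0;
}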
