Bug 1158559

Summary: C++ HotRod Client: RemoteCache.clear() throws an exception when the cache holds more than 1M entries
Product: [JBoss] JBoss Data Grid 6
Reporter: Benny <chuchen>
Component: CPP Client
Assignee: Tristan Tarrant <ttarrant>
Status: CLOSED UPSTREAM
QA Contact: Alan Field <afield>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 6.3.0
CC: afield, chuffman, mgencur
Target Milestone: ---
Target Release: 6.4.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
In Red Hat JBoss Data Grid, when a cache contains a large number of entries, the clear() operation can take an unexpectedly long time and possibly result in communication timeouts. In this case, the exception is reported to the Hot Rod client. This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2025-02-10 03:43:26 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
CPP Client source code (flags: none)
JDG server configuration file (flags: none)

Description Benny 2014-10-29 16:06:44 UTC
Description of problem:
The C++ HotRod client's RemoteCache.clear() throws an exception when the cache holds more than 1M entries.


Version-Release number of selected component (if applicable):
6.3.0

Testing Environment:
RHEL 6.4 64-bit, 2 VMs with 32 GB of memory each.
JDG 6.3, 2-node cluster using a replicated cache.

1. After adding 1M entries to the JDG cluster and waiting 5 minutes, running RemoteCache.clear() failed (a minimal sketch of the client calls used in these scenarios is included after scenario 4).
Failed with error: io.netty.handler.codec.DecoderException: org.infinispan.server.hotrod.HotRodException: org.infinispan.util.concurrent.TimeoutException: Node 17node1/clustered timed out
Result: the data was not cleared!

2. After adding 1M entries to the JDG cluster and waiting 5 minutes, running RemoteCache.clear() succeeded; the total time was 31.000176 seconds.

3. After adding 3M entries to the JDG cluster and waiting 5 minutes, running RemoteCache.clear() failed.
Failed with error: io.netty.handler.codec.DecoderException: org.infinispan.server.hotrod.HotRodException: org.infinispan.util.concurrent.TimeoutException: Unable to acquire lock after [15 seconds] on key [[B@4191720e] for requestor [Thread[HotRodServerWorker-88,5,main]]! Lock held by [Thread[HotRodServerWorker-87,5,main]]
Result: the data was not cleared!

4. After adding 3M entries to the JDG cluster and waiting 5 minutes, running RemoteCache.clear() failed.
Failed with error: io.netty.handler.codec.DecoderException: org.infinispan.server.hotrod.HotRodException: org.infinispan.util.concurrent.TimeoutException: Node 17node1/clustered timed out
Checked the cache data afterwards: the total number of entries was 0.
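
For reference, a minimal sketch of the C++ Hot Rod client calls these scenarios rely on. This is not the attached customer source; the server address, entry counts, key/value layout, and the exact getCache/put usage are assumptions based on the public client API:

#include <infinispan/hotrod/ConfigurationBuilder.h>
#include <infinispan/hotrod/RemoteCacheManager.h>
#include <infinispan/hotrod/RemoteCache.h>

#include <sstream>
#include <string>

using namespace infinispan::hotrod;

int main() {
    // Connect to node1; address/port are assumptions taken from the
    // reproduce steps in comment 5, not from the attached source.
    ConfigurationBuilder builder;
    builder.addServer().host("192.168.188.130").port(11222);
    RemoteCacheManager cacheManager(builder.build(), false);

    // The replicated cache defined in clustered.xml.
    RemoteCache<std::string, std::string> cache =
        cacheManager.getCache<std::string, std::string>("cmbCache");
    cacheManager.start();

    // Load entries; the key/value layout is illustrative only.
    const long entryCount = 1000000; // scenarios 1-2 (10M/30M per comment 3)
    for (long i = 0; i < entryCount; ++i) {
        std::ostringstream key, value;
        key << "key-" << i;
        value << "value-" << i;
        cache.put(key.str(), value.str());
    }

    // Clearing the fully loaded replicated cache is where the server-side
    // TimeoutException is reported back to the client.
    cache.clear();

    cacheManager.stop();
    return 0;
}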

Comment 2 Benny 2014-11-03 07:47:42 UTC
Created attachment 953015 [details]
CPP Client source code

The customer is using the attached C++ client source code to add entries to the JDG cache and then calls the clear() method to remove the data.

Comment 3 Benny 2014-11-03 07:50:13 UTC
Correction: the data sizes should be 10 million and 30 million entries, sorry.

Comment 4 Benny 2014-11-03 07:58:17 UTC
Created attachment 953019 [details]
JDG server configuration file

Please find the cmbCache definition in the urn:infinispan:server:core:6.1 subsystem.
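
For orientation, a sketch of roughly where that definition sits in clustered.xml, assuming the stock JDG 6.x cache-container layout (the container name and surrounding elements are assumptions, not taken from the attached configuration file):

<subsystem xmlns="urn:infinispan:server:core:6.1" default-cache-container="clustered">
    <cache-container name="clustered" default-cache="default">
        <transport executor="infinispan-transport" lock-timeout="60000"/>
        <!-- other cache definitions -->
        <replicated-cache name="cmbCache" mode="SYNC" start="EAGER"/>
    </cache-container>
</subsystem>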

Comment 5 Benny 2014-11-03 08:09:52 UTC
Reproduce steps:

1. Prepare 2 servers, each with 32 GB of memory, running RHEL 6.4 64-bit with JDG 6.3.0 installed.

2. Edit clustered.xml on server1 to add a cache named cmbCache:
<replicated-cache name="cmbCache" mode="SYNC" start="EAGER"/>

3. Edit clustered.xml on server2 to add a cache named cmbCache:
<replicated-cache name="cmbCache" mode="SYNC" start="EAGER"/>

4. Change JAVA_OPTS in clustered.conf on server1:
-Xms30720m -Xmx30720m

5. Change JAVA_OPTS in clustered.conf on server2:
-Xms30720m -Xmx30720m

6. Start node1 on server1 with the following command:
clustered.sh -Djboss.node.name=node1 -Djboss.bind.address=192.168.188.130 -Djboss.bind.address.management=192.168.188.130 -Djboss.socket.binding.port-offset=0

7. Start node2 on server2 with the following command:
clustered.sh -Djboss.node.name=node2 -Djboss.bind.address=192.168.188.131 -Djboss.bind.address.management=192.168.188.131 -Djboss.socket.binding.port-offset=0

8. Run the C++ client against node1 to add 30 million entries to cmbCache; the data is also replicated to node2.

9. Run the C++ client to clear the data; it throws an exception (see the sketch after these steps). The exception is not thrown every time, but it occurs very often.
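
Since step 9 is where the failure surfaces, here is a hedged sketch of how the timeout can be observed on the client side. The HotRodClientException type comes from the client's public exceptions header; the cache setup is the same as in the sketch attached to the description, and the helper name is purely illustrative:

#include <infinispan/hotrod/RemoteCache.h>
#include <infinispan/hotrod/exceptions.h>

#include <iostream>
#include <string>

// 'cache' is the RemoteCache<std::string, std::string> for "cmbCache",
// obtained as in the earlier sketch.
void clearCmbCache(infinispan::hotrod::RemoteCache<std::string, std::string>& cache) {
    try {
        // With ~30 million replicated entries this call can exceed the
        // server's lock/replication timeouts, as seen in step 9.
        cache.clear();
    } catch (const infinispan::hotrod::HotRodClientException& e) {
        // The server-side TimeoutException is reported back to the client here.
        std::cerr << "clear() failed: " << e.what() << std::endl;
        throw;
    }
}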

Comment 8 Red Hat Bugzilla 2025-02-10 03:43:26 UTC
This product has been discontinued or is no longer tracked in Red Hat Bugzilla.