Bug 1158559 - C++ HotRod Client: RemoteCache.clear() throws an exception when the cache holds more than 1M entries
Summary: C++ HotRod Client: RemoteCache.clear() throws an exception when the cache holds more than 1M entries
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: JBoss Data Grid 6
Classification: JBoss
Component: CPP Client
Version: 6.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 6.4.0
Assignee: Tristan Tarrant
QA Contact: Alan Field
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-10-29 16:06 UTC by Benny
Modified: 2025-02-10 03:43 UTC
CC List: 3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2025-02-10 03:43:26 UTC
Type: Bug
Embargoed:


Attachments
CPP Client source code (253.76 KB, application/zip), attached 2014-11-03 07:47 UTC by Benny
JDG server configuration file (14.59 KB, application/xml), attached 2014-11-03 07:58 UTC by Benny

Description Benny 2014-10-29 16:06:44 UTC
Description of problem:
C++ HotRod Client: RemoteCache.clear() throws an exception when the cache holds more than 1M entries.


Version-Release number of selected component (if applicable):
6.3.0

Testing Environment:
RHEL 6.4 64-bit, 2 VMs with 32 GB of memory each.
JDG 6.3 two-node cluster using a replicated cache.

1. After adding 1M entries to the JDG cluster and waiting 5 minutes, RemoteCache.clear() failed.
Failed with error: io.netty.handler.codec.DecoderException: org.infinispan.server.hotrod.HotRodException: org.infinispan.util.concurrent.TimeoutException: Node 17node1/clustered timed out
Result: the data was not cleared.

2. After adding 1M entries to the JDG cluster and waiting 5 minutes, RemoteCache.clear() succeeded; the total time was 31.000176 seconds.

3. After adding 3M entries to the JDG cluster and waiting 5 minutes, RemoteCache.clear() failed.
Failed with error: io.netty.handler.codec.DecoderException: org.infinispan.server.hotrod.HotRodException: org.infinispan.util.concurrent.TimeoutException: Unable to acquire lock after [15 seconds] on key [[B@4191720e] for requestor [Thread[HotRodServerWorker-88,5,main]]! Lock held by [Thread[HotRodServerWorker-87,5,main]]
Result: the data was not cleared.

4. After adding 3M entries to the JDG cluster and waiting 5 minutes, RemoteCache.clear() failed.
Failed with error: io.netty.handler.codec.DecoderException: org.infinispan.server.hotrod.HotRodException: org.infinispan.util.concurrent.TimeoutException: Node 17node1/clustered timed out
Checked the cache data afterwards; the total number of entries was 0.

Comment 2 Benny 2014-11-03 07:47:42 UTC
Created attachment 953015 [details]
CPP Client source code

The customer is using the attached C++ client source code to add data to the JDG cache and then calls the clear() method to clear the data.
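
For orientation, the add-then-clear flow described here corresponds roughly to the following minimal sketch against the HotRod C++ client API shipped with JDG 6.x; the server address, key/value format, and entry count are illustrative placeholders rather than values taken from the attached source.

#include <infinispan/hotrod/ConfigurationBuilder.h>
#include <infinispan/hotrod/RemoteCacheManager.h>
#include <infinispan/hotrod/RemoteCache.h>
#include <sstream>
#include <string>

using namespace infinispan::hotrod;

int main() {
    // Connect to one node of the cluster; the replicated cache propagates writes to the other node.
    ConfigurationBuilder builder;
    builder.addServer().host("192.168.188.130").port(11222); // placeholder host/port
    RemoteCacheManager cacheManager(builder.build(), false);
    RemoteCache<std::string, std::string> cache =
        cacheManager.getCache<std::string, std::string>("cmbCache");
    cacheManager.start();

    // Load a large number of entries (30 million in the reported scenario).
    for (long i = 0; i < 30000000L; ++i) {
        std::ostringstream key, value;
        key << "key-" << i;
        value << "value-" << i;
        std::string k = key.str();
        std::string v = value.str();
        cache.put(k, v);
    }

    // clear() is the call that intermittently fails with a server-side TimeoutException.
    cache.clear();

    cacheManager.stop();
    return 0;
}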

Comment 3 Benny 2014-11-03 07:50:13 UTC
The data sizes should be 10 million and 30 million entries, sorry.

Comment 4 Benny 2014-11-03 07:58:17 UTC
Created attachment 953019 [details]
JDG server configuration file

Please find the cmbCache cache in the urn:infinispan:server:core:6.1 subsystem.
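
The attachment itself is not reproduced here. As a rough sketch, in a default JDG 6.x clustered.xml the added cache declaration sits inside the clustered cache container of that subsystem, along the following lines; the surrounding attribute values are illustrative defaults, not copied from the attachment.

<subsystem xmlns="urn:infinispan:server:core:6.1" default-cache-container="clustered">
    <cache-container name="clustered" default-cache="default" statistics="true">
        <transport executor="infinispan-transport" lock-timeout="60000"/>
        <!-- other caches from the default configuration -->
        <replicated-cache name="cmbCache" mode="SYNC" start="EAGER"/>
    </cache-container>
</subsystem>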

Comment 5 Benny 2014-11-03 08:09:52 UTC
Steps to reproduce:

1. Prepare 2 servers, each with 32 GB of memory, running RHEL 6.4 64-bit and JDG 6.3.0.

2. Change clustered.xml on server1 to add a cache named cmbCache:
<replicated-cache name="cmbCache" mode="SYNC" start="EAGER"/>

3. Change clustered.xml on server2 to add a cache named cmbCache:
<replicated-cache name="cmbCache" mode="SYNC" start="EAGER"/>

4. Change JAVA_OPTS in clustered.conf on server1:
-Xms30720m -Xmx30720m

5. Change JAVA_OPTS in clustered.conf on server2:
-Xms30720m -Xmx30720m

6. Start node1 on server1 with the following command:
clustered.sh -Djboss.node.name=node1 -Djboss.bind.address=192.168.188.130 -Djboss.bind.address.management=192.168.188.130 -Djboss.socket.binding.port-offset=0

7. Start node2 on server2 with the following command:
clustered.sh -Djboss.node.name=node2 -Djboss.bind.address=192.168.188.131 -Djboss.bind.address.management=192.168.188.131 -Djboss.socket.binding.port-offset=0

8. Run the C++ client to add 30 million entries to cmbCache by connecting to node1; the data will be replicated to node2 as well.

9. Run the C++ client to clear the data; it will throw an exception. The exception is not thrown every time, but very often.

Comment 8 Red Hat Bugzilla 2025-02-10 03:43:26 UTC
This product has been discontinued or is no longer tracked in Red Hat Bugzilla.

