Bug 1158559
| Field | Value |
|---|---|
| Summary | C++ HotRod Client, RemoteCache.clear() will throw out exception when data is more than 1M |
| Product | [JBoss] JBoss Data Grid 6 |
| Component | CPP Client |
| Version | 6.3.0 |
| Target Milestone | --- |
| Target Release | 6.4.0 |
| Status | CLOSED UPSTREAM |
| Severity | unspecified |
| Priority | unspecified |
| Hardware | Unspecified |
| OS | Unspecified |
| Reporter | Benny <chuchen> |
| Assignee | Tristan Tarrant <ttarrant> |
| QA Contact | Alan Field <afield> |
| CC | afield, chuffman, mgencur |
| Type | Bug |
| Doc Type | Known Issue |
| Last Closed | 2025-02-10 03:43:26 UTC |

Doc Text:

In Red Hat JBoss Data Grid, when a cache contains a large number of entries, the clear() operation can take an unexpectedly long time and possibly result in communication timeouts. In this case, the exception is reported to the Hot Rod client.

This is a known issue in JBoss Data Grid 6.4 and no workaround is currently available for this issue.
Description

Benny, 2014-10-29 16:06:44 UTC
Created attachment 953015 [details]: CPP Client source code

The customer is using the C++ client source code to add data to a JDG cache and then calls the clear() method to remove it. Correction: the data size should be 10 million and 30 million entries, not 1 million.

Created attachment 953019 [details]: JDG server configuration file

See the cmbCache definition in the urn:infinispan:server:core:6.1 subsystem.
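The attached configuration file is not reproduced here, but as a rough sketch of where that definition sits: assuming the stock JDG clustered.xml layout with its default cache container named clustered (an assumption, since only the replicated-cache element is quoted in this report), the subsystem would look roughly like this:

```xml
<subsystem xmlns="urn:infinispan:server:core:6.1" default-cache-container="clustered">
    <cache-container name="clustered" default-cache="default">
        <!-- other cache definitions ... -->
        <!-- the cache added for this reproduction (steps 2 and 3 below) -->
        <replicated-cache name="cmbCache" mode="SYNC" start="EAGER"/>
    </cache-container>
</subsystem>
```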
Reproduce steps:

1. Prepare two servers, each with 32 GB of memory, running RHEL 6.4 64-bit and JDG 6.3.0.
2. Edit clustered.xml on server1 to add a cache named cmbCache:
   <replicated-cache name="cmbCache" mode="SYNC" start="EAGER"/>
3. Edit clustered.xml on server2 to add the same cache:
   <replicated-cache name="cmbCache" mode="SYNC" start="EAGER"/>
4. Set JAVA_OPTS in clustered.conf on server1: -Xms30720m -Xmx30720m
5. Set JAVA_OPTS in clustered.conf on server2: -Xms30720m -Xmx30720m
6. Start node1 on server1:
   clustered.sh -Djboss.node.name=node1 -Djboss.bind.address=192.168.188.130 -Djboss.bind.address.management=192.168.188.130 -Djboss.socket.binding.port-offset=0
7. Start node2 on server2:
   clustered.sh -Djboss.node.name=node2 -Djboss.bind.address=192.168.188.131 -Djboss.bind.address.management=192.168.188.131 -Djboss.socket.binding.port-offset=0
8. Run the C++ client against node1 to add 30 million entries to cmbCache; the data is replicated to node2 as well.
9. Run the C++ client to clear the data; it throws an exception. The exception is not thrown every time, but very often. A minimal client sketch for steps 8 and 9 appears at the end of this report.

This product has been discontinued or is no longer tracked in Red Hat Bugzilla.
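For reference, a minimal sketch of what the client in steps 8 and 9 might look like, written against the Infinispan Hot Rod C++ client API. This is not the attached source (attachment 953015); the key/value shapes, the Hot Rod port, and the getCache overload used are assumptions of this sketch.

```cpp
#include <infinispan/hotrod/ConfigurationBuilder.h>
#include <infinispan/hotrod/RemoteCacheManager.h>
#include <infinispan/hotrod/RemoteCache.h>

#include <iostream>
#include <string>

using namespace infinispan::hotrod;

int main() {
    // Connect to node1; the address matches step 6, the port is the
    // default Hot Rod port (an assumption of this sketch).
    ConfigurationBuilder builder;
    builder.addServer().host("192.168.188.130").port(11222);

    RemoteCacheManager cacheManager(builder.build(), false);
    cacheManager.start();

    // The replicated cache added to clustered.xml in steps 2 and 3.
    RemoteCache<std::string, std::string> cache =
        cacheManager.getCache<std::string, std::string>("cmbCache", false);

    // Step 8: load the cache. The report uses 30 million entries; a
    // lower count runs faster but reproduces the failure less reliably.
    const long entryCount = 30000000L;
    for (long i = 0; i < entryCount; ++i) {
        cache.put("key-" + std::to_string(i), "value-" + std::to_string(i));
    }

    // Step 9: clearing this many replicated entries can outlive the
    // client's socket timeout and surface as a client-side exception.
    try {
        cache.clear();
    } catch (const std::exception& e) {
        std::cerr << "clear() failed: " << e.what() << std::endl;
    }

    cacheManager.stop();
    return 0;
}
```

Because the failure is intermittent (step 9), the load-and-clear cycle may need to be run several times before the exception appears.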