Bug 1044868

Summary: Many Threads Wait on the Lock in org.apache.commons.pool.impl.GenericKeyedObjectPool in the Hot Rod Client
Product: [JBoss] JBoss Data Grid 6
Reporter: ksuzumur
Component: Infinispan
Assignee: Tristan Tarrant <ttarrant>
Status: CLOSED NOTABUG
QA Contact: Martin Gencur <mgencur>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 6.1.0
CC: dereed, jdg-bugs, mmarkus, tkimura
Target Milestone: ---
Keywords: Reopened
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-02-14 05:42:45 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description ksuzumur 2013-12-19 07:14:22 UTC
Created attachment 838781 [details]
Thread dump

Description of problem:
In a performance test using the Hot Rod client, many threads end up waiting on the lock in org.apache.commons.pool.impl.GenericKeyedObjectPool. The Hot Rod client code uses a single RemoteCacheManager and RemoteCache shared by many client threads.

Here is one such thread from the dump:
~~~
"Thread-201" prio=10 tid=0x00007ffc48417800 nid=0x5fe6 waiting for monitor entry [0x00007ff9982c1000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1097)
        - waiting to lock <0x00000000f0f8a878> (a org.apache.commons.pool.impl.GenericKeyedObjectPool)
        at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.borrowTransportFromPool(TcpTransportFactory.java:271)
        at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.getTransport(TcpTransportFactory.java:168)
        at org.infinispan.client.hotrod.impl.operations.AbstractKeyOperation.getTransport(AbstractKeyOperation.java:61)
        at org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation.execute(RetryOnFailureOperation.java:67)
        at org.infinispan.client.hotrod.impl.RemoteCacheImpl.putIfAbsent(RemoteCacheImpl.java:244)
        at org.infinispan.CacheSupport.putIfAbsent(CacheSupport.java:78)
        at sample.Main$ClientThread.run(Main.java:93)
~~~
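The blocking pattern in the stack trace can be illustrated with a minimal, self-contained sketch (not Infinispan code): worker threads contend for a small fixed-size pool, and once all connections are checked out the next `take()` blocks, analogous to `borrowObject()` blocking in `GenericKeyedObjectPool` when `maxActive` connections are in use. All names and sizes here are illustrative assumptions.

~~~java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolContentionDemo {
    static final int POOL_SIZE = 2;    // stand-in for maxActive
    static final int NUM_THREADS = 5;  // more workers than connections
    static final AtomicInteger inUse = new AtomicInteger();
    static final AtomicInteger peak = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> pool = new ArrayBlockingQueue<>(POOL_SIZE);
        for (int i = 0; i < POOL_SIZE; i++) {
            pool.put("conn-" + i); // stand-ins for pooled transport connections
        }

        Runnable worker = () -> {
            try {
                // Blocks when the pool is empty, like borrowObject()
                String conn = pool.take();
                int now = inUse.incrementAndGet();
                peak.accumulateAndGet(now, Math::max);
                Thread.sleep(10);          // simulate one Hot Rod operation
                inUse.decrementAndGet();
                pool.put(conn);            // return the connection to the pool
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        Thread[] threads = new Thread[NUM_THREADS];
        for (int i = 0; i < NUM_THREADS; i++) {
            threads[i] = new Thread(worker);
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // Concurrent borrowers never exceed POOL_SIZE; the rest queue up
        System.out.println("peak concurrent borrowers = " + peak.get());
    }
}
~~~

With 5 workers and a pool of 2, three threads are always parked waiting for a connection while an operation is in flight, which is exactly the shape of the thread dump above when the pool is undersized for the workload.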

Version-Release number of selected component (if applicable):
JBoss Data Grid / 6.1.0

How reproducible:
I will attach the Hot Rod client code and the thread dump.

Steps to Reproduce:
Run the client program

Actual results:

Expected results:

Additional info:

Comment 2 ksuzumur 2013-12-19 07:17:25 UTC
*** Bug 1044864 has been marked as a duplicate of this bug. ***

Comment 4 Tristan Tarrant 2013-12-20 08:23:22 UTC
This is not a bug: the user has not configured the connection pool of the client.

Modify the configuration properties by adding the following (just an example):

maxActive=10
maxTotal=10
maxIdle=10
minIdle=5
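For reference, a sketch of a hotrod-client.properties file carrying these settings. The pool property names are the ones given above (in this client version they are passed through to commons-pool); the server_list value is an assumed placeholder.

~~~properties
# hotrod-client.properties (example sizing, values from this comment;
# the server address below is a placeholder)
infinispan.client.hotrod.server_list = 127.0.0.1:11222

# Connection pool sizing, passed through to commons-pool
maxActive = 10
maxTotal = 10
maxIdle = 10
minIdle = 5
~~~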

Comment 6 Mircea Markus 2014-01-09 12:34:59 UTC
So the test runs 30k ops per thread with 100 threads on one Hot Rod client. The test completes in 100 seconds, so the throughput is 30,000 operations per second. This is in line with the performance QE has observed:
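The throughput arithmetic in this comment can be checked directly (numbers taken from the comment above):

~~~java
public class ThroughputCheck {
    public static void main(String[] args) {
        int opsPerThread = 30_000;  // 30k operations per client thread
        int threads = 100;          // client threads
        int durationSeconds = 100;  // observed test duration

        long totalOps = (long) opsPerThread * threads; // 3,000,000 operations
        long throughput = totalOps / durationSeconds;  // operations per second
        System.out.println(throughput);                // prints 30000
    }
}
~~~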

<email from radim vansa>
I suppose they're running the clients against single server, and clients on another one, right? I believe that these numbers are OK. In our lab with 10 threads accessing cache in local mode (one server) via HotRod we get 26600 operations [1].

Radim

[1] http://perfrepouser:perfrepouser1.@jawa36.mw.lab.eng.brq.redhat.com:8080/repo/exec/1415

</email from radim vansa>

I agree with Tristan that this is not a bug (there is no functional issue); at most it is a performance concern. Are they expecting a particular throughput that the Hot Rod client does not provide?