Description of problem: I am seeing performance degradation in certain perf tests on JPP 6.1.ER1 compared to JPP 6.0.GA. Only the standalone non-clustered setup seems to be affected. A thread dump of the loaded server shows many threads waiting for a lock:

"Http-perf11/10.16.88.189:8080-2000" daemon prio=10 tid=0x00007f2e10e1d000 nid=0x4cb7 waiting on condition [0x00007f2d3387a000]
   java.lang.Thread.State: WAITING (parking)
	at sun.misc.Unsafe.park(Native Method)
	- parking to wait for <0x0000000705afcdf8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
	at java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:945)
	at org.infinispan.util.concurrent.locks.StripedLock.acquireLock(StripedLock.java:98)
	at org.infinispan.loaders.LockSupportCacheStore.lockForWriting(LockSupportCacheStore.java:92)
	at org.infinispan.loaders.LockSupportCacheStore.store(LockSupportCacheStore.java:211)
	at org.infinispan.loaders.AbstractCacheStore.applyModifications(AbstractCacheStore.java:126)
	at org.infinispan.loaders.AbstractCacheStore.commit(AbstractCacheStore.java:163)
	at org.infinispan.interceptors.CacheStoreInterceptor.commitCommand(CacheStoreInterceptor.java:161)
	at org.infinispan.interceptors.CacheStoreInterceptor.visitCommitCommand(CacheStoreInterceptor.java:143)

Also, if the <distributable/> element is removed from portal.war's web.xml, performance returns to normal.
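For context, the element in question is the standard Servlet marker in WEB-INF/web.xml. A minimal sketch (the rest of portal.war's actual descriptor is omitted, and the schema version shown is illustrative):

```xml
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <!-- Removing this marker disables distributable (replicated/persistent)
         session management for the webapp, which avoids the cache-store
         write path seen in the stack trace above -->
    <distributable/>
</web-app>
```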
Results of a single-node stress test scaling only to 1500 clients:
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EPP/view/EPP/view/6.1/view/Performance/view/Stress-tests/job/epp6_portal_perf_singlenode/36/

Results of a single-node stress test with the <distributable/> element removed, scaling up to 7000 clients:
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EPP/view/EPP/view/6.1/view/Performance/view/Stress-tests/job/epp6_portal_perf_singlenode/33/

Similar results were obtained in the single-node soak test.
Created attachment 761230 [details] Stacktrace of a loaded server.
Unfortunately, this is expected for such a scenario in the default configuration, because in 6.1 support for session persistence was added to the non-HA profile. Sessions now survive app upgrades, restarts, crashes, etc. The cache is *write-through*, whereas in 6.0 there was no session persistence at all. So in the end it is an apples-to-oranges comparison. To revert to the original 6.0 behaviour, simply remove the 'local-web' cache store.

* An option to investigate, which would keep the functionality: configure the cache store as write-behind, see https://docs.jboss.org/author/display/ISPN/Write-Through+And+Write-Behind+Caching

* Note that standalone-full-ha.xml is intended as the production profile IIRC, whereas standalone.xml is more or less the developer profile.
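If the write-behind option is pursued, the change would go into the infinispan subsystem of standalone.xml. A minimal sketch, assuming a local cache backed by a file store (the cache-container, cache, and attribute values here are illustrative and may not match the shipped JPP configuration exactly):

```xml
<cache-container name="web" default-cache="local-web">
    <local-cache name="local-web" batching="true">
        <!-- Nesting a write-behind element inside the store makes it
             asynchronous: modifications are queued and flushed by a
             background thread pool instead of blocking the request
             thread on the striped store lock -->
        <file-store passivation="false" purge="false">
            <write-behind modification-queue-size="1024" thread-pool-size="4"/>
        </file-store>
    </local-cache>
</cache-container>
```

The trade-off is the usual one for write-behind stores: request threads no longer wait on disk I/O, but sessions written shortly before a crash may be lost before the queue is flushed.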
Verified (CR3).