Bug 974549

Summary: Low non-clustered performance compared to 6.0
Product: JBoss Enterprise Portal Platform 6
Reporter: Dominik Pospisil <dpospisi>
Component: Portal
Assignee: Default User <jbpapp-maint>
Severity: unspecified
Priority: unspecified
Version: 6.1.0
CC: bdawidow, epp-bugs, rhusar
Target Milestone: ER03
Target Release: 6.1.0
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Type: Bug
Last Closed: 2013-11-07 09:22:45 EST

Attachments: Stacktrace of a loaded server.

Description Dominik Pospisil 2013-06-14 08:04:56 EDT
Description of problem:

I am seeing performance degradation of certain perf tests in JPP 6.1.ER1 compared to JPP 6.0.GA.

It seems that only the standalone, non-clustered setup is affected. A stack trace of the loaded server shows many threads waiting for a lock at:

"Http-perf11/" daemon prio=10 tid=0x00007f2e10e1d000 nid=0x4cb7 waiting on condition [0x00007f2d3387a000]
        java.lang.Thread.State: WAITING (parking)
             at sun.misc.Unsafe.park(Native Method)
             - parking to wait for  <0x0000000705afcdf8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
             at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
             at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
             at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
             at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
             at java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:945)
             at org.infinispan.util.concurrent.locks.StripedLock.acquireLock(StripedLock.java:98)
             at org.infinispan.loaders.LockSupportCacheStore.lockForWriting(LockSupportCacheStore.java:92)
             at org.infinispan.loaders.LockSupportCacheStore.store(LockSupportCacheStore.java:211)
             at org.infinispan.loaders.AbstractCacheStore.applyModifications(AbstractCacheStore.java:126)
             at org.infinispan.loaders.AbstractCacheStore.commit(AbstractCacheStore.java:163)
             at org.infinispan.interceptors.CacheStoreInterceptor.commitCommand(CacheStoreInterceptor.java:161)
             at org.infinispan.interceptors.CacheStoreInterceptor.visitCommitCommand(CacheStoreInterceptor.java:143)
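The frames above show request threads parking on a `ReentrantReadWriteLock` inside Infinispan's `StripedLock`, which the cache store uses to guard persistence. A minimal sketch of that pattern (the class and method names below are illustrative, not Infinispan's actual code) shows why a synchronous, write-through store serializes writers:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of the striped-lock pattern behind
// LockSupportCacheStore.lockForWriting(): one read/write lock per stripe,
// chosen by key hash. Every store() takes the stripe's *write* lock, so
// with a write-through store, concurrent requests queue up here exactly
// as the stack trace shows.
public class StripedLockSketch {
    private final ReentrantReadWriteLock[] stripes;

    public StripedLockSketch(int concurrency) {
        stripes = new ReentrantReadWriteLock[concurrency];
        for (int i = 0; i < concurrency; i++) {
            stripes[i] = new ReentrantReadWriteLock();
        }
    }

    private ReentrantReadWriteLock stripeFor(Object key) {
        // Keys whose hashes fall in the same stripe contend for one lock.
        return stripes[(key.hashCode() & Integer.MAX_VALUE) % stripes.length];
    }

    public void store(Object key, Runnable writeToDisk) {
        ReentrantReadWriteLock lock = stripeFor(key);
        lock.writeLock().lock();   // threads park here under load
        try {
            writeToDisk.run();     // synchronous (write-through) persistence
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```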

Also, if the <distributable/> element is removed from portal.war's web.xml, performance returns to normal.
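For reference, the element in question is the standard Servlet-spec marker in the web app's WEB-INF/web.xml; removing the one line below is the workaround described above (surrounding content is a minimal sketch, not portal.war's actual descriptor):

```xml
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <!-- Removing this element disables distributable session management
         for the web app, bypassing the session cache store. -->
    <distributable/>
</web-app>
```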

Results of a single-node stress test, scaling only to 1500 clients:

Results of a single-node stress test with the <distributable/> element removed, scaling up to 7000 clients:

Similar results were obtained in a single-node soak test.
Comment 1 Dominik Pospisil 2013-06-14 08:06:48 EDT
Created attachment 761230 [details]
Stacktrace of a loaded server.
Comment 3 Radoslav Husar 2013-07-02 08:36:30 EDT
Unfortunately, this is to be expected in such a scenario with the default configuration, because in 6.1 session persistence support was added to the non-HA profiles. Sessions now survive application upgrades, restarts, crashes, etc. The cache store is *write-through*, whereas in 6.0 there was no session persistence at all. So in the end, it is an apples-to-oranges comparison.

To revert to the original 6.0 behaviour, simply remove the 'local-web' cache store.

* An option to investigate to keep the functionality: configure the cache store as write-behind, see https://docs.jboss.org/author/display/ISPN/Write-Through+And+Write-Behind+Caching
* Note that standalone-full-ha.xml is intended as the production profile, IIRC, whereas standalone.xml is more or less the developer profile.
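The write-behind suggestion above would look roughly like the following in standalone.xml's Infinispan subsystem. This is a sketch assuming the EAP 6.1 schema's nested <write-behind/> element on a file store; attribute names and the exact cache-container layout should be checked against the shipped configuration before use:

```xml
<cache-container name="web" default-cache="local-web" module="org.jboss.as.clustering.web.infinispan">
    <local-cache name="local-web" batching="true">
        <!-- Persist session state asynchronously instead of blocking the
             request thread on every store (write-behind instead of
             write-through). Removing <file-store> entirely restores the
             6.0 behaviour of no session persistence. -->
        <file-store passivation="false" purge="false">
            <write-behind/>
        </file-store>
    </local-cache>
</cache-container>
```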
Comment 5 Dominik Pospisil 2013-10-07 11:08:46 EDT
Verified (CR3).