Bug 974549 - Low non-clustered performance compared to 6.0
Summary: Low non-clustered performance compared to 6.0
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: JBoss Enterprise Portal Platform 6
Classification: JBoss
Component: Portal
Version: 6.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ER03
Target Release: 6.1.0
Assignee: Default User
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-06-14 12:04 UTC by Dominik Pospisil
Modified: 2013-11-07 14:22 UTC (History)
3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-11-07 14:22:45 UTC
Type: Bug
Embargoed:


Attachments
Stacktrace of a loaded server. (13.40 MB, text/plain)
2013-06-14 12:06 UTC, Dominik Pospisil
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker GTNPORTAL-3137 0 Major Resolved Low non-clustered performance compared to 6.0 2017-07-05 07:23:59 UTC

Description Dominik Pospisil 2013-06-14 12:04:56 UTC
Description of problem:

I am seeing performance degradation of certain perf tests in JPP 6.1.ER1 compared to JPP 6.0.GA.

It seems that only the standalone non-clustered setup is affected. A stack trace of the loaded server shows many threads waiting for a lock at:

"Http-perf11/10.16.88.189:8080-2000" daemon prio=10 tid=0x00007f2e10e1d000 nid=0x4cb7 waiting on condition [0x00007f2d3387a000]
        java.lang.Thread.State: WAITING (parking)
             at sun.misc.Unsafe.park(Native Method)
             - parking to wait for  <0x0000000705afcdf8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
             at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
             at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
             at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
             at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
             at java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:945)
             at org.infinispan.util.concurrent.locks.StripedLock.acquireLock(StripedLock.java:98)
             at org.infinispan.loaders.LockSupportCacheStore.lockForWriting(LockSupportCacheStore.java:92)
             at org.infinispan.loaders.LockSupportCacheStore.store(LockSupportCacheStore.java:211)
             at org.infinispan.loaders.AbstractCacheStore.applyModifications(AbstractCacheStore.java:126)
             at org.infinispan.loaders.AbstractCacheStore.commit(AbstractCacheStore.java:163)
             at org.infinispan.interceptors.CacheStoreInterceptor.commitCommand(CacheStoreInterceptor.java:161)
             at org.infinispan.interceptors.CacheStoreInterceptor.visitCommitCommand(CacheStoreInterceptor.java:143)

Also, if the <distributable/> element is removed from portal.war's web.xml, performance returns to normal.
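For context, the element in question is the top-level <distributable/> marker in the webapp's WEB-INF/web.xml; a minimal sketch (the namespace/version and surrounding descriptor content are illustrative, not copied from portal.war):

```xml
<!-- WEB-INF/web.xml (sketch) -->
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         version="3.0">
    <!-- Marks the webapp's HTTP sessions as distributable, routing them
         through the clustering/session cache; removing this element makes
         sessions plain in-memory sessions. -->
    <distributable/>
</web-app>
```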

Results of single node stress test scaling only to 1500 clients:
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EPP/view/EPP/view/6.1/view/Performance/view/Stress-tests/job/epp6_portal_perf_singlenode/36/

Results of single node stress with <distributable/> element removed scaling up to 7000 clients:
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EPP/view/EPP/view/6.1/view/Performance/view/Stress-tests/job/epp6_portal_perf_singlenode/33/

Similar results obtained in singlenode soak test.

Comment 1 Dominik Pospisil 2013-06-14 12:06:48 UTC
Created attachment 761230 [details]
Stacktrace of a loaded server.

Comment 3 Radoslav Husar 2013-07-02 12:36:30 UTC
Unfortunately, this is to be expected in such a scenario with the default configurations: in 6.1, support for session persistence was added to the non-HA profile. Sessions now survive app upgrades, restarts, crashes, etc. The cache is *write-through*, whereas 6.0 had no session persistence at all. So in the end, it's an apples-to-oranges comparison.

To revert to the original 6.0 behaviour, simply remove the 'local-web' cache store.

* An option worth investigating is to keep the functionality but configure the cache store as write-behind, see https://docs.jboss.org/author/display/ISPN/Write-Through+And+Write-Behind+Caching
* Note that standalone-full-ha.xml is intended as the production profile IIRC, whereas standalone.xml is more or less the developer profile.
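The write-behind option above could be sketched against the 'web' cache-container in standalone.xml roughly as follows (element and attribute names follow the EAP 6 Infinispan subsystem schema; the attribute values are illustrative assumptions, not tuned settings):

```xml
<!-- standalone.xml, infinispan subsystem (sketch) -->
<cache-container name="web" default-cache="local-web"
                 module="org.jboss.as.clustering.web.infinispan">
    <local-cache name="local-web" batching="true">
        <!-- Keeping the store preserves session persistence; the nested
             write-behind element makes stores asynchronous, so request
             threads no longer block on the store's striped write lock. -->
        <file-store passivation="false" purge="false">
            <write-behind modification-queue-size="1024"
                          thread-pool-size="4"/>
        </file-store>
    </local-cache>
</cache-container>
```

Alternatively, deleting the <file-store> element entirely restores the 6.0 behaviour (no session persistence), as noted above.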

Comment 5 Dominik Pospisil 2013-10-07 15:08:46 UTC
Verified (CR3).

