Bug 974549 - Low non-clustered performance compared to 6.0
Status: CLOSED CURRENTRELEASE
Product: JBoss Enterprise Portal Platform 6
Classification: JBoss
Component: Portal
Version: 6.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ER03
Target Release: 6.1.0
Assigned To: Default User
Reported: 2013-06-14 08:04 EDT by Dominik Pospisil
Modified: 2013-11-07 09:22 EST

Doc Type: Bug Fix
Last Closed: 2013-11-07 09:22:45 EST
Type: Bug


Attachments
Stacktrace of a loaded server. (13.40 MB, text/plain)
2013-06-14 08:06 EDT, Dominik Pospisil


External Trackers
Tracker ID: JBoss Issue Tracker GTNPORTAL-3137
Priority: Major
Status: Resolved
Summary: Low non-clustered performance compared to 6.0
Last Updated: 2017-07-05 03:23 EDT

Description Dominik Pospisil 2013-06-14 08:04:56 EDT
Description of problem:

I am seeing performance degradation of certain perf tests in JPP 6.1.ER1 compared to JPP 6.0.GA.

It seems that only the standalone non-clustered setup is affected. A stack trace of the loaded server shows many threads waiting for a lock at:

"Http-perf11/10.16.88.189:8080-2000" daemon prio=10 tid=0x00007f2e10e1d000 nid=0x4cb7 waiting on condition [0x00007f2d3387a000]
        java.lang.Thread.State: WAITING (parking)
             at sun.misc.Unsafe.park(Native Method)
             - parking to wait for  <0x0000000705afcdf8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
             at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
             at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
             at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
             at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
             at java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:945)
             at org.infinispan.util.concurrent.locks.StripedLock.acquireLock(StripedLock.java:98)
             at org.infinispan.loaders.LockSupportCacheStore.lockForWriting(LockSupportCacheStore.java:92)
             at org.infinispan.loaders.LockSupportCacheStore.store(LockSupportCacheStore.java:211)
             at org.infinispan.loaders.AbstractCacheStore.applyModifications(AbstractCacheStore.java:126)
             at org.infinispan.loaders.AbstractCacheStore.commit(AbstractCacheStore.java:163)
             at org.infinispan.interceptors.CacheStoreInterceptor.commitCommand(CacheStoreInterceptor.java:161)
             at org.infinispan.interceptors.CacheStoreInterceptor.visitCommitCommand(CacheStoreInterceptor.java:143)

Also, if the <distributable/> element is removed from portal.war's web.xml, performance returns to normal.
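For reference, this is the standard Servlet <distributable/> marker; a minimal sketch of the relevant part of portal.war's WEB-INF/web.xml (surrounding descriptor content abbreviated):

    <web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
        <!-- Marks sessions as distributable; removing this element turns off
             the clustered/persistent session manager for this webapp. -->
        <distributable/>
    </web-app>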

Results of the single-node stress test, scaling only to 1500 clients:
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EPP/view/EPP/view/6.1/view/Performance/view/Stress-tests/job/epp6_portal_perf_singlenode/36/

Results of the single-node stress test with the <distributable/> element removed, scaling up to 7000 clients:
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EPP/view/EPP/view/6.1/view/Performance/view/Stress-tests/job/epp6_portal_perf_singlenode/33/

Similar results were obtained in the single-node soak test.
Comment 1 Dominik Pospisil 2013-06-14 08:06:48 EDT
Created attachment 761230 [details]
Stacktrace of a loaded server.
Comment 3 Radoslav Husar 2013-07-02 08:36:30 EDT
Unfortunately, this is to be expected in such a scenario with the default configuration, because 6.1 added session persistence to the non-HA profile. Sessions now survive application upgrades, restarts, crashes, etc. The cache is *write-through*, whereas 6.0 had no session persistence at all, so in the end it is an apples-to-oranges comparison.

To revert to the original 6.0 behaviour, simply remove the 'local-web' cache store.
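For illustration, a minimal sketch of the 'web' cache container in standalone.xml, assuming the default EAP 6.1 Infinispan subsystem layout (verify against your actual profile):

    <cache-container name="web" default-cache="local-web" module="org.jboss.as.clustering.web.infinispan">
        <local-cache name="local-web" batching="true">
            <!-- This file-store provides the write-through session
                 persistence added in 6.1; removing it restores the 6.0
                 behaviour (sessions no longer survive restarts). -->
            <file-store passivation="false" purge="false"/>
        </local-cache>
    </cache-container>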

* One option to investigate is keeping the functionality but configuring the cache store as write-behind; see https://docs.jboss.org/author/display/ISPN/Write-Through+And+Write-Behind+Caching (a configuration sketch follows after this list).
* Note that standalone-full-ha.xml is intended as the production profile IIRC, whereas standalone.xml is more or less the developer profile.
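A hedged sketch of the write-behind variant (the <write-behind/> element is taken from the EAP 6 Infinispan subsystem schema; the attribute values shown are illustrative assumptions, not tuned or tested settings):

    <local-cache name="local-web" batching="true">
        <file-store passivation="false" purge="false">
            <!-- Queue modifications and flush them to the store
                 asynchronously, instead of blocking request threads on the
                 striped write lock seen in the attached stack trace. -->
            <write-behind modification-queue-size="1024" thread-pool-size="1"/>
        </file-store>
    </local-cache>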
Comment 5 Dominik Pospisil 2013-10-07 11:08:46 EDT
Verified (CR3).
