Red Hat Bugzilla – Bug 979958
Uploading artifacts to s-ramp takes an unreasonably long time
Last modified: 2015-08-31 17:23:26 EDT
I am running SOA 6 DR6 with s-ramp-server on a powerful machine (Intel Xeon, 12GB RAM), but uploading artifacts from a client, as in quickstarts/overlord/s-ramp/s-ramp-demos-simple-client, takes an unreasonably long time. For example, a 5KB artifact takes almost a minute to upload. Server login and querying are fine and quick; only the upload is very slow. It should take at most a couple of seconds.
In case you cannot reproduce this, I can run with jProfiler and provide more input. But I have already seen this on 3 different machines, so I suppose it is not an isolated issue.
Hmm, not sure what happened to my first comment; trying again: on upload we extract content and derive information, so it's actually doing a lot of work. If you can hook up the profiler to confirm that this is what's going on, that would be great. We may be able to do this asynchronously, but then it's hard to get errors back to the user.
Martin - can you attach the exact artifact (5KB jar file?) that you used?
Created attachment 767731 [details]
I have just uploaded a snapshot from jProfiler. Seems like NIO usage in Infinispan is the culprit...
Len, I used the s-ramp quickstarts - s-ramp-demos-query and s-ramp-demos-simple-client
FYI: this example creates over 100 artifacts during the upload, and ModeShape persists a derived artifact for each. Not saying we don't have a performance issue, but it's not a "one file upload".
Horia and Randall offered the following suggestions:
1. Config change: try a JDBCCache rather than a FileCache (JDBC + H2). This should boost performance on writes. BTW, what OS was the profiler run on?
2. Code change: call session.save() less often; buffer writes for a bit instead.
3. Config change: use an async cache, but this is not without risk.
I think option 1 would be an easy thing to try. We used to ship with Berkeley DB for this reason, but the license was not favorable. Can we (you) try H2?
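Suggestion 2 could be sketched roughly like this: a generic write batcher whose flush callback would wrap session.save(). All names here are illustrative, not actual S-RAMP or ModeShape API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Generic write batcher: collect items and flush them in groups so the
// expensive commit (e.g. a JCR session.save()) runs once per batch
// instead of once per derived artifact.
class SaveBatcher<T> {
    private final int batchSize;
    private final Consumer<List<T>> flush; // e.g. batch -> session.save()
    private final List<T> pending = new ArrayList<>();

    SaveBatcher(int batchSize, Consumer<List<T>> flush) {
        this.batchSize = batchSize;
        this.flush = flush;
    }

    void add(T item) {
        pending.add(item);
        if (pending.size() >= batchSize) {
            flushPending();
        }
    }

    void close() { // flush any remainder at the end of the upload
        if (!pending.isEmpty()) {
            flushPending();
        }
    }

    private void flushPending() {
        flush.accept(new ArrayList<>(pending));
        pending.clear();
    }
}
```

With a batch size of, say, 50, the hundred-plus derived artifacts mentioned above would need only a couple of save round-trips instead of one per artifact, at the cost of coarser error reporting.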
1. I created a new H2 datasource and configured the s-ramp cache as shown below. The default isolation level (READ_COMMITTED) leads to a timeout, so I used NONE.
I also noticed a missing module dependency: org.infinispan must depend on org.infinispan.cachestore.jdbc. This will be handled in a separate BZ.
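For reference, that fix would amount to adding the dependency to the Infinispan module descriptor, something like the sketch below (the exact module.xml path and the existing dependency list vary by EAP version, so treat this as illustrative):

```xml
<!-- modules/org/infinispan/main/module.xml (sketch; existing content abridged) -->
<module xmlns="urn:jboss:module:1.1" name="org.infinispan">
    <dependencies>
        <!-- existing dependencies kept as-is, plus: -->
        <module name="org.infinispan.cachestore.jdbc"/>
    </dependencies>
</module>
```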
The OS is Linux x86_64.
<datasource jndi-name="java:jboss/datasource/OverlordDTGov" pool-name="OverlordDTGov" enabled="true" use-java-context="true">
<locking isolation="NONE" />
<mixed-keyed-jdbc-store datasource="java:jboss/datasource/OverlordDTGov" />
<!--file-store relative-to="jboss.server.data.dir" path="modeshape/store/sramp" passivation="false" purge="false"/-->
The user experience with this change is much better; the upload takes significantly less time. With the FileCache, even deleting the data on the file system took a very long time (using ext4 with noatime).
Also, please note that I do not have deep experience with Infinispan, so my configuration may be sub-optimal. Maybe a string-keyed-jdbc-store would work and could give better results...
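Putting the fragments above together, the change in standalone.xml might look roughly like this. The cache-container and cache names are assumptions based on the default ModeShape setup mentioned in this thread, and the H2 connection details are placeholders, not a verified configuration:

```xml
<!-- Sketch only: names and the H2 URL are assumptions -->
<datasource jndi-name="java:jboss/datasource/OverlordDTGov" pool-name="OverlordDTGov"
            enabled="true" use-java-context="true">
    <connection-url>jdbc:h2:${jboss.server.data.dir}/sramp-db</connection-url>
    <driver>h2</driver>
</datasource>

<!-- in the infinispan subsystem -->
<cache-container name="modeshape">
    <local-cache name="sramp">
        <locking isolation="NONE"/>
        <!-- replaces the original file-store -->
        <mixed-keyed-jdbc-store datasource="java:jboss/datasource/OverlordDTGov"/>
    </local-cache>
</cache-container>
```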
S-RAMP has been configured as suggested, both in community and in the product branch. Hopefully this takes care of the problem! See the connected JIRA for more information.
There is no specific srampDS datasource (maybe waiting for the DB schema tool?)
The cache-container is definitely not updated; after installing FSW+DTGov 6.0.0.ER2, standalone.xml still contains:
<file-store relative-to="jboss.server.data.dir" path="modeshape/store/sramp" passivation="false" purge="false"/>
which means https://issues.jboss.org/browse/SRAMP-209 hasn't made it into S-RAMP 6.0.0.ER2.
This still hasn't been fixed: there is no sramp datasource, and the ModeShape cache-container is still configured to use a file-store instead of the DB after a full installation of FSW 6.0.0.ER3.
I've installed FSW 6.0.0.ER4 (using the installer). configuration/standalone.xml still doesn't contain srampDS, nor is the sramp cache configured to use a jdbc-store instead of a file-store.
Am I missing something, or why is this still not fixed?
These changes were somehow missed. I have made the required changes in the following commit:
Verified in FSW 6.0.0.ER4