A run with numVirtualNodes=1 achieves a max throughput of approx. 180K ops/sec (see https://hudson.qa.jboss.com/hudson/view/EDG6/job/edg-60-stress-client-hotrod-size4/66/artifact/report/log.txt); a run with numVirtualNodes=500 stabilizes around approx. 45K ops/sec (see https://hudson.qa.jboss.com/hudson/view/EDG6/job/edg-60-stress-client-hotrod-size4/68/artifact/report/log.txt).
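For context, numVirtualNodes controls how many positions each physical server occupies on the consistent-hash wheel. A minimal sketch of the idea, in hypothetical code (this is not Infinispan's hash function; the position function, wheel size, and class names are made up for illustration):

```java
import java.util.TreeMap;

// Hypothetical sketch of what numVirtualNodes means: each physical server is
// placed on the consistent-hash wheel at numVirtualNodes positions, which
// spreads keys more evenly across servers.
public class VirtualNodesSketch {
    static TreeMap<Integer, String> buildWheel(String[] servers, int numVirtualNodes) {
        TreeMap<Integer, String> wheel = new TreeMap<>();
        for (String server : servers) {
            for (int v = 0; v < numVirtualNodes; v++) {
                // Made-up position function; the real algorithm differs.
                int position = Math.floorMod((server + "#" + v).hashCode(), 10240);
                wheel.put(position, server);
            }
        }
        return wheel;
    }

    public static void main(String[] args) {
        String[] servers = {"test1", "test2", "test3", "test4"};
        System.out.println(buildWheel(servers, 1).size());   // 4 wheel positions
        System.out.println(buildWheel(servers, 500).size()); // ~2000 positions (minus collisions)
    }
}
```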
JProfiler snapshot, four nodes, perf17 profiled, 1000 clients, 1 iteration: https://hudson.qa.jboss.com/hudson/view/EDG6/job/edg-60-stress-client-hotrod-size4/70/artifact/report/jprofiler-snapshot.jps
There's something very suspicious there: all the traffic comes via the JGroups replication channel (19.7% - 811 s - 139,656 inv. org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle) and almost none via the Netty server interface (0.3% - 13,367 ms - 484 inv. org.infinispan.server.core.AbstractProtocolDecoder.messageReceived). My first suspect is the Hot Rod client not routing properly: perf17 isn't getting any requests.
On my laptop I ran 4 instances bound to test1-test4, and the Hot Rod client routes to only two of them: test1 and test4. Digging further ...
I haven't checked the profiler data, but as an FYI: as noted in https://issues.jboss.org/browse/ISPN-1090?focusedCommentId=12612485&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12612485, the virtual topologies sent back in this version of the protocol are not the most efficient, and we have some improvements coming up in version 2.
Sending the topologies shouldn't be a performance issue; they're sent only once per topology change, right?
Indeed. That's why I noted it as an FYI, because I don't think it's particularly relevant here. If the topology were being sent too often, you'd be able to spot it by too many calls to the writeHashTopologyHeader() method in HotRodEncoder.
Link: Added: This issue depends on ISPN-1273
The performance effect of ISPN-1273 is clear to me now: the Hot Rod client gets only the last hash id for each server, which means the hash ids are likely to end up very close to each other on the hash wheel. Whatever the server count, the client then picks the first positions clockwise from that cluster of ids, so it contacts only two of the four servers, ending up with very inefficient, unnecessary network hopping (see the sketch below).
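To make the effect concrete, here is a minimal, self-contained sketch (hypothetical code, not Infinispan's routing implementation; the wheel size, positions, and node names are made up): with one tightly clustered hash id per server and first-clockwise owner selection, nearly every key routes to the same server(s).

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;
import java.util.TreeMap;

// Hypothetical sketch of the routing skew described above: if the client keeps
// only ONE hash id per server and those ids sit close together on the wheel,
// nearly every key routes to the same one or two servers.
public class HashWheelSkewSketch {
    public static void main(String[] args) {
        int wheelSize = 10240; // assumed wheel size, for illustration only

        // Assume the four servers' "last" virtual-node hash ids landed in a
        // narrow arc of the wheel (the effect attributed to ISPN-1273 above).
        TreeMap<Integer, String> wheel = new TreeMap<>();
        wheel.put(5000, "node1");
        wheel.put(5010, "node2");
        wheel.put(5020, "node3");
        wheel.put(5030, "node4");

        Map<String, Integer> hits = new HashMap<>();
        Random rnd = new Random(42);
        for (int i = 0; i < 100_000; i++) {
            int keyHash = rnd.nextInt(wheelSize);
            // Primary owner = first server clockwise from the key's hash,
            // wrapping around the wheel when nothing lies ahead.
            Map.Entry<Integer, String> owner = wheel.ceilingEntry(keyHash);
            if (owner == null) {
                owner = wheel.firstEntry();
            }
            hits.merge(owner.getValue(), 1, Integer::sum);
        }

        // Almost every key falls in the long empty arc and wraps to node1;
        // node2-node4 see only a tiny fraction of direct requests, so most
        // operations need an extra network hop to reach the real owner.
        System.out.println(hits);
    }
}
```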
ISPN-1273 is resolved now, so this should be closed?
I'm gonna try a run with the snapshot and then I'll close it...