Bug 1030965 - Number of registered contexts negatively affects mod_cluster performance
Summary: Number of registered contexts negatively affects mod_cluster performance
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: JBoss Enterprise Application Platform 6
Classification: JBoss
Component: mod_cluster
Version: 6.2.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ER3
Target Release: EAP 6.3.0
Assignee: Jean-frederic Clere
QA Contact: Michal Karm Babacek
Docs Contact: Russell Dickenson
URL:
Whiteboard:
Depends On:
Blocks: 1164327 1079156 1084882
 
Reported: 2013-11-15 12:26 UTC by Michal Karm Babacek
Modified: 2018-12-05 16:35 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
A performance issue has been identified in the Apache HTTP Server when mod_cluster is configured as a load balancer. httpd shared-memory operations on the `workers->nodes` table negatively affect the performance of the load balancer; as a result, performance of the httpd load balancer decreases as the number of registered contexts increases. To work around this issue, lower the number of registered contexts. To fix this bug, httpd has been modified to read from local memory rather than shared memory.
Clone Of:
Clones: 1079156 1164327
Environment:
Last Closed: 2014-06-28 15:44:30 UTC
Type: Bug
Embargoed:


Attachments: callgrind.zip (callgrind profiling logs; see comment 4)


Links
System ID: Red Hat Issue Tracker MODCLUSTER-372
Priority: Major
Status: Closed
Summary: Number of registered contexts negatively affects mod_cluster performance
Last Updated: 2017-06-08 13:58:15 UTC

Description Michal Karm Babacek 2013-11-15 12:26:05 UTC
Follow https://issues.jboss.org/browse/MODCLUSTER-372

Comment 1 JBoss JIRA Server 2013-11-15 17:11:10 UTC
Michal Babacek <mbabacek> made a comment on jira MODCLUSTER-372

I can see I forgot to add {{ServerLimit    40}}, but that would only be needed to allow MaxClients higher than 1920, and it shouldn't have any bearing on the case of mod_cluster and registered contexts.
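(For context, an illustration added here, not part of the original comment: in the httpd worker MPM, MaxClients cannot exceed ServerLimit * ThreadsPerChild, which is why {{ServerLimit}} only matters once you want to raise MaxClients past that cap. A minimal sketch, where ThreadsPerChild 48 is an assumption chosen purely so that 40 * 48 matches the 1920 figure above:)
{code}
# Sketch of the worker MPM sizing rule -- values are illustrative assumptions.
# Rule: MaxClients <= ServerLimit * ThreadsPerChild.
<IfModule worker.c>
ServerLimit         40
ThreadsPerChild     48    # assumed here so that 40 * 48 = 1920
MaxClients          1920  # the cap implied by the two directives above
</IfModule>
{code}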

Comment 2 JBoss JIRA Server 2013-11-16 09:50:01 UTC
Jean-Frederic Clere <jfclere> made a comment on jira MODCLUSTER-372

Looking at find_node_context_host() in mod_proxy_cluster.c, I have spotted something very bad:
+++
        /* keep only the contexts corresponding to our balancer */
        if (balancer != NULL) {
            nodeinfo_t *node;
            if (node_storage->read_node(context->node, &node) != APR_SUCCESS)
                continue;
+++
We are reading the shared memory for each context, which is _very_ bad.
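(A minimal sketch of the direction this points to, added for illustration -- hypothetical names, not the actual MODCLUSTER-372 patch: read each node from shared memory at most once and serve repeated lookups from a request-local copy, so the per-context loop stops hitting shared memory once per context.)
{code}
/* Hypothetical sketch only -- not the actual MODCLUSTER-372 patch.
 * Inside find_node_context_host(): MAX_NODES is an assumed bound. */
nodeinfo_t *node_cache[MAX_NODES] = { 0 };

/* ... per-context loop ... */
if (balancer != NULL) {
    int id = context->node;
    if (node_cache[id] == NULL) {
        nodeinfo_t *shm_node;
        if (node_storage->read_node(id, &shm_node) != APR_SUCCESS)
            continue;   /* node vanished; skip this context as before */
        /* One shared-memory read per node; keep a private local copy. */
        node_cache[id] = apr_pmemdup(r->pool, shm_node, sizeof(*shm_node));
    }
    nodeinfo_t *node = node_cache[id];
    /* ... existing per-context checks against node continue here ... */
}
{code}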

Comment 4 JBoss JIRA Server 2013-11-18 09:57:34 UTC
Michal Babacek <mbabacek> made a comment on jira MODCLUSTER-372

I've identified these bottlenecks.
The three most expensive functions and their callers (the order holds regardless of whether the event metric is Instruction Fetch, Data Read, or Data Write):

 # Function {{ap_slotmem_mem}} from mod_slotmem/sharedmem_util.c called by:
  * {{get_context}} from mod_manager/context.c
  * {{get_node}} from mod_manager/node.c
  * {{loc_read_node}} from mod_manager/mod_manager.c
  * {{find_node_context_host}} from mod_proxy_cluster/mod_proxy_cluster.c
  * {{loc_read_context}} from mod_manager/context.c
  * {{read_context_table}} from mod_proxy_cluster/mod_proxy_cluster.c
  * {{manager_info}} from mod_manager/mod_manager.c

 # Function {{ap_slotmem_do}} from mod_slotmem/sharedmem_util.c called mostly by
  * httpd_request and httpd_core functions from httpd sources.

 # Function {{find_node_context_host}} from mod_proxy_cluster/mod_proxy_cluster.c called by
  * httpd sources, mostly on request_process and request_connection.

So, IMHO, {{find_node_context_host}} is indeed the trouble :)

Attaching the profiling valgrind logs [^callgrind.zip], created with:
{noformat}
valgrind --tool=callgrind --dump-instr=yes --simulate-cache=yes --collect-jumps=yes --compress-strings=no --compress-pos=no --collect-systime=yes ./httpd -f /tmp/hudson/httpd/conf/httpd.conf -E /tmp/hudson/httpd/logs/httpd.log
{noformat}
with this debug setting:
{code}
<IfModule worker.c>
ThreadLimit         50
StartServers        1
ServerLimit         1
MinSpareThreads     50
MaxSpareThreads     50
MaxClients          50
ThreadsPerChild     50
MaxRequestsPerChild 0
</IfModule>
{code}
and 4 worker nodes with 65 contexts each, under several dozen concurrent client sessions.
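(A usage note added here, not from the original comment: dumps like these can be summarized with callgrind_annotate, which ships with Valgrind, e.g. to list the hottest functions inclusive of their callees:)
{noformat}
callgrind_annotate --inclusive=yes callgrind.out.<pid> | head -40
{noformat}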

Comment 5 JBoss JIRA Server 2013-11-18 21:43:41 UTC
Michal Babacek <mbabacek> made a comment on jira MODCLUSTER-372

Jean-Frederic is experimenting with reading from local memory instead of shared memory; the preliminary results of my unified tests look very promising:

||test||balancer CPU usage peak||
|1.2.x branch, 4 workers, 61 contexts each|70%|
|1.2.x branch, 4 workers, 1 context each|17%|
|MODCLUSTER-372 branch, 4 workers, 61 contexts each|28%|
|MODCLUSTER-372 branch, 4 workers, 1 context each|17%|

Comment 7 JBoss JIRA Server 2014-02-06 16:18:58 UTC
Jean-Frederic Clere <jfclere> updated the status of jira MODCLUSTER-372 to Resolved

Comment 8 JBoss JIRA Server 2014-02-20 16:07:38 UTC
Michal Babacek <mbabacek> updated the status of jira MODCLUSTER-372 to Reopened

Comment 9 JBoss JIRA Server 2014-03-03 12:15:30 UTC
Michal Babacek <mbabacek> updated the status of jira MODCLUSTER-372 to Resolved

Comment 10 Michal Karm Babacek 2014-03-03 12:42:00 UTC
Fix ported to the 1.2.x branch [1]; it's ready for EAP 6. Follow the linked JIRA for more details. It will be verified as soon as the productized bits are ready.

[1] https://github.com/modcluster/mod_cluster/pull/64

Comment 11 Michal Karm Babacek 2014-03-06 10:17:43 UTC
Failed. Please note that DR2 contains the old mod_cluster 1.2.6. A component update is needed. I'm setting DR3 as the new milestone.

Comment 12 Michal Karm Babacek 2014-03-12 10:38:38 UTC
Fail.

mod_cluster was not upgraded. According to BZ 1050223, it was supposed to be version 1.2.8, but it is still the old 1.2.6.

Comment 14 Michal Karm Babacek 2014-05-14 14:18:44 UTC
Assigning to myself for verification...

Comment 15 Michal Karm Babacek 2014-05-14 14:20:54 UTC
Verified :-)

Note: Don't expect this fix in EAP 6.3.0 Beta, because native libraries are out of scope for the Beta release.

Comment 16 Nichola Moore 2014-05-15 04:33:13 UTC
Changed bug type from Known Issue to Bug Fix as part of BZ 1097719, taking into account the comment above.

