When mod_proxy_ajp sends a request to a worker node and the time to process that request exceeds the configured timeout, the worker node is marked as being in an error state, which stops all traffic to that node until it is flagged as up again. A remote attacker could use this flaw to cause a temporary denial of service, provided they were able to craft a request whose processing time exceeded the timeout threshold.
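For illustration, a minimal mod_proxy_ajp setup where the per-worker timeout governs this behavior might look like the following sketch (the hostname, path, and timeout value are hypothetical):

```apache
# Reverse-proxy /app to an AJP worker. If a request takes longer than
# timeout= seconds, the worker is put into an error state and receives
# no further traffic until its retry interval elapses.
ProxyPass /app ajp://backend1.example.com:8009/app timeout=30
```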
According to the upstream security page, this affected versions 2.2.12 through 2.2.21.
This issue affected the version of the httpd package as shipped with Red Hat Enterprise Linux 6. The httpd version in Red Hat Enterprise Linux 5 was not affected.
This issue affected the version of the httpd package as shipped with JBoss Enterprise Web Server 1, but it was already corrected in version 1.0.2.
The httpd packages in current Fedora versions (F16 and F17) already contain the fixed upstream version.
This issue did not affect the version of httpd as shipped with Red Hat Enterprise Linux 5.
This flaw was previously fixed in JBoss Enterprise Web Server (EWS) 1.0.2:
The fix is incorporated via httpd-MODCLUSTER-226.patch.
This issue is relevant when mod_proxy_ajp is used with a proxy balancer. It may allow an attacker who is able to trigger the timeout to cause the balancer to treat backend servers as failed, making it stop forwarding requests to those servers until the recovery timeout elapses. As a consequence, all traffic may be sent to a single backend, degrading performance of the load-balanced web site.
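The scenario above can be sketched with a hypothetical balancer configuration; the member hostnames and values are illustrative, not taken from any affected deployment:

```apache
<Proxy balancer://cluster>
    # retry= is the recovery timeout: how long a member stays in the
    # error state before the balancer tries it again (default 60s).
    # An attacker who can make requests exceed timeout= on a member
    # can repeatedly push that member into the error state.
    BalancerMember ajp://backend1.example.com:8009 timeout=30 retry=60
    BalancerMember ajp://backend2.example.com:8009 timeout=30 retry=60
</Proxy>
ProxyPass /app balancer://cluster/app
```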
If all backend servers are put into an error state, the httpd version in Red Hat Enterprise Linux 6 will force one backend out of the error state, so that normal requests can still be served.
The balancer behavior of the httpd version in Red Hat Enterprise Linux 5 is different: backend error states are cleared with each request to the balancer. Hence this bug has no security impact on Red Hat Enterprise Linux 5.
Hi Tomas, do you think this could happen with mod_proxy_http too? I'm experiencing a problem similar to the one you describe in comment 9, but I'm balancing through HTTP, not AJP.
It can take several days to appear: everything works fine until, all of a sudden, the balancer-manager page shows that the workers have lost their routes and Apache is only using one of the backends. I tried to readjust the configuration directly in that page, but it doesn't work; I have to do a full restart (graceful doesn't work either).
This issue has been addressed in the following products:
Red Hat Enterprise Linux 6
Via RHSA-2013:0512 https://rhn.redhat.com/errata/RHSA-2013-0512.html
(In reply to comment #10)
> Hi Tomas, would you think this could happen with mod_proxy_http too? I'm
> experiencing a similar problem you describe in comment 9 but I'm balancing
> through http not ajp.
From a quick test and a look at the mod_proxy_http code, I see that a timeout of an HTTP request leads to the "Error reading from remote server" error being sent to the client, without putting the backend server into an error state. This mod_proxy_ajp fix makes it behave the same way.
Additionally, this problem is not persistent, and the error state is cleared rather quickly. An attacker would have to keep sending timing-out requests to keep most backends in the error state.