Created attachment 340704 [details]
httpd.conf

Description of problem:
Every day after a Nessus scan, httpd runs into a memory leak.

Version-Release number of selected component (if applicable):
httpd-2.2.3-22.el5.centos

How reproducible:
Always after a Nessus scan

Steps to Reproduce:

Actual results:
 4440 apache  16  0  106m   21m  964 S 0.0 8.8 0:05.01 httpd
 4438 apache  15  0  104m   19m 1028 S 0.0 7.7 0:05.97 httpd
 4449 apache  15  0 81048   18m  964 S 0.0 7.5 0:03.85 httpd
 4452 apache  16  0 35396   18m  964 S 0.0 7.4 0:01.08 httpd
 4448 apache  15  0 86756   18m  964 S 0.0 7.2 0:02.34 httpd
 4446 apache  16  0 76512   16m  996 S 0.0 6.7 0:02.23 httpd
 4437 apache  15  0  101m   16m  960 S 0.0 6.7 0:04.30 httpd
 4455 apache  16  0 16992  6044 1392 S 0.0 2.4 0:00.32 httpd

(Note that I have already killed 3 of the approx. 100m processes to make the system usable.)

Expected results:
No such memory leak

Additional info:
The system (a VM) is configured with 256 MB of main memory and 512 MB of swap; httpd eats all of it. httpd listens on port 443 (HTTPS) only. The strangest thing is that access from the Nessus server is prohibited entirely by a Location directive. I still have some processes consuming the memory, so if someone provides me with debug commands, I can take a look into the process.
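Since debug commands were requested above: a possible /proc-based starting point for inspecting a bloated child (a sketch, not an official procedure; the PID defaults to the current shell only so the commands run anywhere — substitute a leaking apache PID such as 4440 from the top output):

```shell
#!/bin/sh
# Inspect a suspect process's memory via /proc (works on RHEL5's 2.6.18 kernel).
PID=${1:-$$}

# Virtual size and resident set as the kernel sees them
grep -E '^Vm(Size|RSS)' "/proc/$PID/status"

# Sum of private dirty pages: memory the process wrote itself and cannot
# share with siblings -- a value that grows steadily per request points
# at a heap leak rather than shared mappings
awk '/Private_Dirty/ {s += $2} END {print s " kB private dirty"}' "/proc/$PID/smaps"
```

Running it twice a few minutes apart on the same PID and diffing the output shows whether the private heap is still growing.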
Created attachment 340705 [details] ssl.conf
Created attachment 340707 [details] ssl_error_log
Created attachment 340708 [details] /var/log/messages
We are experiencing the same issue, however using Apache and Tomcat via an AJP proxy to serve a Shibboleth IdP on CentOS 5.3 (stock RPMs):

httpd-2.2.3-22.el5.centos
tomcat5-5.5.23-0jpp.7.el5_2.1

Since upgrading to 5.3 we have noticed that overnight all 4 GB of physical RAM and all 4 GB of swap are eaten and the web service becomes unresponsive. We can still SSH into the machine and restart Apache, which rectifies the issue. These are production boxes subject to numerous connections, and we are trying to isolate the trigger. However, the exact same configuration and applications served under 5.2 exhibit no issue.

28011 apache  15  0 766m 336m 5532 S 0.0 8.3 0:16.17 /usr/sbin/httpd
28013 apache  15  0 769m 329m 5540 S 0.0 8.1 0:16.30 /usr/sbin/httpd
28016 apache  15  0 771m 328m 5540 S 0.0 8.1 0:14.85 /usr/sbin/httpd
28015 apache  15  0 761m 326m 5540 S 0.0 8.1 0:14.29 /usr/sbin/httpd
28012 apache  15  0 739m 320m 5540 S 0.0 7.9 0:16.04 /usr/sbin/httpd
28017 apache  15  0 712m 319m 5556 S 0.0 7.9 0:15.91 /usr/sbin/httpd
29789 apache  15  0 628m 317m 5536 S 0.0 7.8 0:12.17 /usr/sbin/httpd
28018 apache  15  0 694m 313m 5552 S 0.0 7.7 0:13.09 /usr/sbin/httpd
28014 apache  15  0 697m 311m 5532 S 0.0 7.7 0:12.71 /usr/sbin/httpd
13093 apache  15  0 507m 295m 5532 S 0.0 7.3 0:09.97 /usr/sbin/httpd
 3381 root    18  0 19096 6128 5444 S 0.0 0.1 3:57.97 /usr/sbin/httpd

Mem:  4148372k total, 3978352k used, 170020k free,   5472k buffers
Swap: 4192924k total, 4002776k used, 190148k free,  55352k cached
I have now dug through the logs for the first occurrence and probably found evidence:

Apr 16 23:30:08 system kernel: snmpd invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
Apr 16 23:30:24 system kernel: gam_server invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
Apr 16 23:30:38 system kernel: httpd invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
Apr 16 23:30:56 system kernel: httpd invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
Apr 16 23:31:18 system kernel: httpd invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0

=> First occurrence Apr 16 23:30:08 UTC

Reboots:
Feb  2 16:02:58 system kernel: Linux version 2.6.18-92.1.22.el5
Apr 16 16:23:14 system kernel: Linux version 2.6.18-128.1.6.el5

Apache error log:
[Thu Apr 16 16:14:51 2009] [notice] caught SIGTERM, shutting down
[Thu Apr 16 16:23:24 2009] [notice] SELinux policy enabled; httpd running as context system_u:system_r:httpd_t:s0
=> Start after reboot

Automatic install of new Apache:
# zgrep httpd /var/log/yum.log.1.gz
Sep 25 11:34:38 Installed: httpd - 2.2.3-11.el5_1.centos.3.i386
Nov 13 16:29:12 Updated: httpd.i386 2.2.3-11.el5_2.centos.4
Apr 01 16:10:01 Updated: httpd.i386 2.2.3-22.el5.centos
(but this did not cause any restart of httpd)

Apache log:
Nessus - - [16/Apr/2009:23:26:53 +0000] "/" 403 -
Nessus - - [16/Apr/2009:23:26:59 +0000] "/" 403 -
Nessus - - [16/Apr/2009:23:26:59 +0000] "/" 403 -
Nessus - - [16/Apr/2009:23:26:59 +0000] "/" 403 -
Nessus - - [16/Apr/2009:23:26:59 +0000] "/intruvert/jsp/admin/Login.jsp" 403 311
Nessus - - [16/Apr/2009:23:26:59 +0000] "" 400 318
Nessus - - [16/Apr/2009:23:26:59 +0000] "etc/passwd" 400 306
Nessus - - [16/Apr/2009:23:26:59 +0000] "/scripts/logbook.pl" 403 300
Nessus - - [16/Apr/2009:23:26:59 +0000] "/cgi-bin/logbook.pl" 403 300
Nessus - - [16/Apr/2009:23:27:00 +0000] "/logbook.pl" 403 292
...

Note that the Nessus scan also ran before the update, so it appears that at least the restart of Apache into the new version triggered this problem.
We are experiencing the same with SquirrelMail on httpd-2.2.3-22 and php-5.1.6-23.2.el5_3 (a RHEL 5.3 installation). To work around the problem, we run "/etc/init.d/httpd reload" every two hours via crontab. If we can provide any kind of help to solve the problem, please tell us. Regards.
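For anyone applying the same stopgap, the workaround boils down to an entry like this in /etc/crontab (a sketch; the init script path matches the stock CentOS package, the schedule is the one described above):

```
# min hour dom mon dow user  command
0    */2  *   *   *   root  /etc/init.d/httpd reload >/dev/null 2>&1
```

Note this only masks the leak; the processes regrow between reloads.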
Anybody experiencing this problem, please try these packages: http://people.redhat.com/jorton/Tikanga-httpd/ and post feedback here.
After 4 hours of using them, the problem seems solved; the httpd instances have stopped eating memory. Well done.
Hi,

We can also confirm that after 12 hours of operation the issue hasn't presented itself. The memory leak would normally be exhibited after 2 hours of operation.

Thanks for your assistance!

J.
Oops, I'm afraid it is happening again. Yesterday, for several hours, the httpd processes didn't seem to exhaust memory, but right now I see they have taken a lot of memory again; I don't know when:

14612 apache   15  0 398m 355m  12m  S 0.0 10.8   0:24.00 httpd
14475 apache   15  0 394m 352m  12m  S 0.0 10.7   0:24.03 httpd
14477 apache   21  0 370m 328m  11m  S 0.0 10.0   0:22.38 httpd
 4297 apache   15  0 211m 169m  12m  S 0.0  5.2   0:14.62 httpd
 4295 apache   16  0 202m 160m  11m  S 0.0  4.9   0:09.61 httpd
 4788 apache   15  0 199m 157m  11m  S 0.0  4.8   0:09.56 httpd
 4934 apache   15  0 194m 152m  11m  S 0.0  4.6   0:09.50 httpd
 4936 apache   15  0 189m 147m  11m  S 0.0  4.5   0:08.87 httpd
 4935 apache   15  0 181m 138m  11m  S 0.0  4.2   0:10.50 httpd
 6040 apache   15  0 164m 121m  11m  S 0.0  3.7   0:06.70 httpd
 9268 postfix  18  0 164m 120m 1440  S 0.0  3.7 301:40.40 clamd
 6041 apache   15  0 158m 115m  11m  S 0.0  3.5   0:07.14 httpd
 6042 apache   15  0 155m 112m  11m  S 0.0  3.4   0:06.18 httpd

Strange. Any ideas?
(In reply to comment #11)
> Oops, I'm afraid it is happening again. Yesterday, for several hours, the
> httpd processes didn't seem to exhaust memory, but right now I see they
> have taken a lot of memory again; I don't know when:

It may be an issue unrelated to the mod_ssl leak. Please contact Red Hat Support if you need further help diagnosing the problem.
I suspect the problem is related to the "reload" (kill -HUP) of httpd. After an "httpd restart", memory behavior seems good, but after an "httpd reload" the httpd processes start taking more memory. A "reload" is performed each night to rotate the logs, and I think that is the point where the memory problems begin. I have set up a cron monitor of httpd memory to verify this; perhaps by tomorrow morning I will have more info.
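A minimal sketch of such a cron-driven monitor (the log path, override variable, and ps fields are illustrative choices, not part of any stock package):

```shell
#!/bin/sh
# Snapshot per-process httpd memory so growth can be correlated with the
# nightly logrotate reload; run from cron every 10 minutes or so.
LOG=${HTTPD_MEM_LOG:-/tmp/httpd-mem.log}
{
  date
  # RSS in kB, largest first; -C matches the command name exactly.
  # "|| true" keeps the snapshot non-fatal when no httpd is running.
  ps -o pid,rss,etime,comm -C httpd --sort=-rss || true
} >> "$LOG"
```

Comparing the snapshots taken just before and just after the nightly HUP should show whether the RSS jump coincides with the reload.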
Hi, the problem has not happened again, so my suspicion was wrong, and I don't understand why the leak seemed to recur. Thanks anyway.
We're also seeing memory leaks in httpd in this scenario: mod_ssl on the internet-facing side and mod_proxy without SSL to an internal Subversion server. It leaks ~5-10 MB per SVN action, so after a few SVN actions the machine runs out of memory. The problem seems to have appeared here with RHEL 5.3; before that we handled many more SVN actions without any memory problems.

A working (hackish!) workaround with httpd-2.2.3-22.el5 for us so far is:

<IfModule prefork.c>
MaxRequestsPerChild 10
</IfModule>
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2009-1075.html
We are facing an httpd memory usage issue. Our application uses mod_python, and we have identified that whenever httpd is reloaded, memory consumption increases. We have also verified that the leak seems to be in mod_python. We just want the httpd developers to ensure that there is no problem with the httpd reload command.

This issue occurs on CentOS 5.5 and CentOS 6.4. On CentOS 6.4, the mod_python version is mod_python-3.3.1-15.el6.x86_64.

Also, for the time being, can you suggest how we can avoid the httpd reload? We understand that when the logs are rotated by the logrotate utility, the reload leads to the memory increase. Please suggest how we can stop the httpd reload.
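On the logrotate question: one way to rotate the logs without signalling httpd at all is logrotate's copytruncate directive in place of the usual postrotate reload. A configuration sketch for /etc/logrotate.d/httpd (paths per the stock layout; the other directives are illustrative, and this is a workaround, not a statement from the httpd developers):

```
/var/log/httpd/*log {
    missingok
    notifempty
    sharedscripts
    # Copy the log aside and truncate the original in place instead of
    # running "service httpd reload" in a postrotate script -- no signal
    # reaches httpd, at the cost of possibly losing a few log lines
    # written during the copy window.
    copytruncate
}
```

Remove the existing postrotate block when adding copytruncate, otherwise the reload still fires.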