Bug 849272 - Memory leak in Apache httpd server.
Status: CLOSED INSUFFICIENT_DATA
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: php
Version: 5.1
Hardware: x86_64 Linux
Priority: unspecified  Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Web Stack Team
QA Contact: BaseOS QE - Apps
Depends On:
Blocks:
Reported: 2012-08-17 19:11 EDT by Harold Pritchett
Modified: 2013-05-29 06:45 EDT (History)
5 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-05-29 06:31:35 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
all yum logs dating back to original install of system (219.59 KB, application/octet-stream)
2012-08-17 19:11 EDT, Harold Pritchett
output of the "rpm -qa" command (35.72 KB, text/plain)
2012-08-17 19:12 EDT, Harold Pritchett
bash script used to monitor swap use and restart httpd server (1.18 KB, text/plain)
2012-08-17 19:13 EDT, Harold Pritchett
one day's output of the swapmon script (6.03 KB, text/plain)
2012-08-17 19:18 EDT, Harold Pritchett

Description Harold Pritchett 2012-08-17 19:11:37 EDT
Created attachment 605278 [details]
all yum logs dating back to original install of system

Description of problem:

Our RHEL 5.8 (2.6.18-308.11.1.el5 #1 SMP Fri Jun 15 15:41:53 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux) system has been running out of swap space since the last application of patches.

A simple script that monitors swap usage (via /proc/swaps) shows the system running comfortably with a small but consistent amount of swap in use, until usage starts to grow and eventually all swap space is consumed and the system becomes unusable. The script currently restarts the httpd daemon when swap usage reaches 50%; following the restart, usage drops back to normal (about 5% of swap used).
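The attached swapmon script itself is not reproduced here; the following is a minimal sketch of such a monitor, assuming a /proc/swaps-style layout (name, type, size, used, priority) and a 50% threshold. The function names and logging are illustrative, and the actual attachment 605280 may differ:

```shell
#!/bin/sh
# Hypothetical sketch of a swap monitor in the spirit of the attached
# swapmon script (the real attachment may differ).

THRESHOLD=50   # restart httpd when swap usage reaches this percentage

# Integer percentage of swap used, given total and used sizes in kB.
swap_pct() {
    total=$1
    used=$2
    echo $(( used * 100 / total ))
}

check_swap() {
    # First swap device line of /proc/swaps (skip the header):
    # fields are: name type size used priority
    set -- $(tail -n +2 /proc/swaps | head -n 1)
    pct=$(swap_pct "$3" "$4")
    if [ "$pct" -ge "$THRESHOLD" ]; then
        logger "swapmon: swap at ${pct}%, restarting httpd"
        service httpd restart
    fi
}
```

Run from cron every few minutes, `check_swap` only acts when the threshold is crossed.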

I believe the problem lies somewhere in the php stack. The last update to Apache httpd was June 23, and no problem was noted. Tomcat5 was last updated in May, and mysql in February of this year.

On July 28, the following packages were updated:
Jul 28 21:40:25 Updated: php-common-5.1.6-39.el5_8.x86_64 
Jul 28 21:40:26 Updated: php-pdo-5.1.6-39.el5_8.x86_64 
Jul 28 21:40:31 Updated: php-cli-5.1.6-39.el5_8.x86_64 
Jul 28 21:40:33 Updated: php-5.1.6-39.el5_8.x86_64 
Jul 28 21:40:33 Updated: php-mysql-5.1.6-39.el5_8.x86_64 
Jul 28 21:40:33 Updated: php-mbstring-5.1.6-39.el5_8.x86_64 
Jul 28 21:40:33 Updated: php-gd-5.1.6-39.el5_8.x86_64 
Jul 28 21:40:33 Updated: php-ldap-5.1.6-39.el5_8.x86_64 

At this time we started seeing the swap space overflows. We immediately identified the Apache server as the culprit; restarting it released the pages and everything returned to normal. We have not figured out what is triggering this, but it seems to happen every 20 hours or so.


Version-Release number of selected component (if applicable):

rpm -qa | egrep "httpd|php|tomcat|mysql" | sort
  httpd-2.2.3-65.el5_8
  httpd-manual-2.2.3-65.el5_8
  libdbi-dbd-mysql-0.8.1a-1.2.2
  mysql-5.0.95-1.el5_7.1
  mysql-5.0.95-1.el5_7.1
  mysql-connector-odbc-3.51.26r1127-2.el5
  mysql-server-5.0.95-1.el5_7.1
  php-5.1.6-39.el5_8
  php-cli-5.1.6-39.el5_8
  php-common-5.1.6-39.el5_8
  php-gd-5.1.6-39.el5_8
  php-ldap-5.1.6-39.el5_8
  php-mbstring-5.1.6-39.el5_8
  php-mysql-5.1.6-39.el5_8
  php-pdo-5.1.6-39.el5_8
  system-config-httpd-1.3.3.3-1.el5
  tomcat5-5.5.23-0jpp.31.el5_8
  tomcat5-admin-webapps-5.5.23-0jpp.31.el5_8
  tomcat5-common-lib-5.5.23-0jpp.31.el5_8
  tomcat5-jasper-5.5.23-0jpp.31.el5_8
  tomcat5-jasper-javadoc-5.5.23-0jpp.31.el5_8
  tomcat5-jsp-2.0-api-5.5.23-0jpp.31.el5_8
  tomcat5-jsp-2.0-api-javadoc-5.5.23-0jpp.31.el5_8
  tomcat5-server-lib-5.5.23-0jpp.31.el5_8
  tomcat5-servlet-2.4-api-5.5.23-0jpp.31.el5_8
  tomcat5-servlet-2.4-api-javadoc-5.5.23-0jpp.31.el5_8
  tomcat5-webapps-5.5.23-0jpp.31.el5_8


How reproducible:

Happens approximately every 20 hours.

Steps to Reproduce:
1.  Run system normally
2.
3.
  
Actual results:

Memory leak

Expected results:

no memory leak

Additional info:

It appears to be similar to case 00691390, which was closed with new binary files. Those files are now two years old.
Comment 1 Harold Pritchett 2012-08-17 19:12:41 EDT
Created attachment 605279 [details]
output of the "rpm -qa" command
Comment 2 Harold Pritchett 2012-08-17 19:13:29 EDT
Created attachment 605280 [details]
bash script used to monitor swap use and restart httpd server
Comment 3 Harold Pritchett 2012-08-17 19:18:09 EDT
Created attachment 605281 [details]
one day's output of the swapmon script
Comment 4 Remi Collet 2012-10-10 03:05:22 EDT
According to the provided yum log, the update was from 5.1.6-34.el5_8 (May 26) to php-common-5.1.6-39.el5_8 (Jul 28).

Can you confirm that release -34 is not affected? (if you can temporarily downgrade to it)

Can you give the configured value of MaxRequestsPerChild (in the httpd configuration), and check whether lowering this value works around the problem?
Comment 5 Harold Pritchett 2012-10-16 17:26:32 EDT
From httpd.conf

# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# ServerLimit: maximum value for MaxClients for the lifetime of the server
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule prefork.c>
StartServers       8
MinSpareServers    5
MaxSpareServers   20
ServerLimit      256
MaxClients       256
MaxRequestsPerChild  4000
</IfModule>

What would be a good value of MaxRequestsPerChild to try?

This is a production server, so I will have to schedule time to test release -34 to see if it is affected.
Comment 6 Harold Pritchett 2012-10-16 17:39:11 EDT
A little more checking of httpd.conf shows the following lines:

# worker MPM
# StartServers: initial number of server processes to start
# MaxClients: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule worker.c>
StartServers         2
MaxClients         150
MinSpareThreads     25
MaxSpareThreads     75
ThreadsPerChild     25
MaxRequestsPerChild  0
</IfModule>

OK, which value is it?  4000 or zero?
Comment 7 Remi Collet 2012-10-17 05:02:56 EDT
You are very probably running in "prefork" mode (php doesn't work in "worker" mode), so the configured value is 4000 (the default).
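One way to confirm which MPM a given httpd binary uses is to parse the "Server MPM:" line of `httpd -V` output. The small helper below is hypothetical, for illustration; the binary path assumes a stock RHEL install:

```shell
# Extract the MPM name from `httpd -V` output.
mpm_of() {
    printf '%s\n' "$1" | sed -n 's/^Server MPM:[[:space:]]*//p'
}

# In practice:
#   mpm_of "$(/usr/sbin/httpd -V)"
# On RHEL 5 the worker binary ships separately as httpd.worker and is
# only used if selected in /etc/sysconfig/httpd, so prefork is the default.
```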

From Apache documentation:

    The MaxRequestsPerChild directive sets the limit on the number of requests
    that an individual child server process will handle. After
    MaxRequestsPerChild requests, the child process will die. If
    MaxRequestsPerChild is 0, then the process will never expire.

That's why I suggest lowering this value (as a workaround).

> This is a production server, so I will have to schedule time to test
> release -34 to see if it is affected.

I understand, and will wait.
Comment 8 Harold Pritchett 2012-10-22 09:28:02 EDT
I have changed MaxRequestsPerChild from its previous value of 4000 to 100 and rebooted the machine to release all swap in use. Since then, the amount of swap space in use has remained constant at zero. That was approximately 21 hours ago. I will continue monitoring and update this record if I see unusual swap usage.

An attempt to downgrade php to version -34 failed with dependency conflict errors in yum.  I'm not sure I can easily test this option.  In any case, it appears that our problem has been fixed.

If I have not posted back here within the next week or so, this problem can be closed.
Comment 9 Alexander Lindqvist 2012-11-28 03:45:36 EST
I'm also seeing this today.
This is a Xen DomU.
The first symptom was that the guest (in this case a webserver) lost all network connectivity but was otherwise fine; getting into the guest with xm console and rebooting it resolved that. Today httpd invoked the OOM killer and crashed the guest.

Version-Release number of selected component (if applicable):
Fully updated 5.8
Kernel 2.6.18-308.20.1.el5xen on an i686
httpd-2.2.3-65.el5

How reproducible:
Start the DomU and let it run. The network connection to the DomU stops after about 10-30 hours; the OOM killer took it down after about 24 hours of uptime.

We were using this in httpd.conf

<IfModule prefork.c>
StartServers      10
MinSpareServers   10
MaxSpareServers   20
ServerLimit      256
MaxClients       256
MaxRequestsPerChild  4000
</IfModule>

MaxRequestsPerChild now changed to 100 and we just restarted httpd.

Additional info:


Kernel 2.6.18-308.20.1.el5xen on an i686

xxx.xxxx.se login: INFO: task httpd:4205 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
httpd         D 0001F1A2  2352  4205   1240          4206  4204 (NOTLB)
       dc0fce88 00000286 66558d05 0001f1a2 000000e3 00000007 d40c5aa0 665745ba 
       0001f1a2 0001b8b5 d40c5bac c1fc62e8 ed5483c0 00027967 00000005 000b0f25 
       ed5483c0 00000006 00000000 000000e3 000000ea c04c81e0 ec28aec0 ec28aebc 
Call Trace:
 [<c04c81e0>] avc_has_perm+0x3c/0x46
 [<c0625765>] __mutex_lock_slowpath+0x4d/0x7c
 [<c06257a3>] .text.lock.mutex+0xf/0x14
 [<c04552f0>] generic_file_aio_write+0x45/0xac
 [<ee09cea1>] ext3_file_write+0x19/0x83 [ext3]
 [<c0474003>] do_sync_write+0xb6/0xf1
 [<c0432293>] autoremove_wake_function+0x0/0x2d
 [<c0473f4d>] do_sync_write+0x0/0xf1
 [<c047488c>] vfs_write+0xa1/0x143
 [<c0474eb6>] sys_write+0x3c/0x63
 [<c0405463>] syscall_call+0x7/0xb
 =======================
INFO: task httpd:4249 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
httpd         D 0001F19C  2292  4249   1240          4250  4248 (NOTLB)
       debcfe88 00000286 c69c76a4 0001f19c 000000e3 00000007 cd53e000 c6b62490 
       0001f19c 0019adec cd53e10c c1fc62e8 e772b200 e62cffa8 00000005 000af8cf 
       e772b200 00000006 00000000 000000e3 000000ea c04c81e0 ec28aec0 ec28aebc 
Call Trace:
 [<c04c81e0>] avc_has_perm+0x3c/0x46
 [<c0625765>] __mutex_lock_slowpath+0x4d/0x7c
 [<c06257a3>] .text.lock.mutex+0xf/0x14
 [<c04552f0>] generic_file_aio_write+0x45/0xac
 [<ee09cea1>] ext3_file_write+0x19/0x83 [ext3]
 [<c0474003>] do_sync_write+0xb6/0xf1
 [<c0432293>] autoremove_wake_function+0x0/0x2d
 [<c0473f4d>] do_sync_write+0x0/0xf1
 [<c047488c>] vfs_write+0xa1/0x143
 [<c0474eb6>] sys_write+0x3c/0x63
 [<c0405463>] syscall_call+0x7/0xb
 =======================
INFO: task httpd:4190 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
httpd         D 0001F1BD  2392  4190   1240          4191  4189 (NOTLB)
       dbf04e88 00000282 8289d3f2 0001f1bd 000000e3 00000007 cbfc0550 828b6c28 
       0001f1bd 00019836 cbfc065c c1fc0724 e7c57e40 0001918c 00000005 000a7e55 
       e7c57e40 00000006 00000000 000000e3 000000ea c04c81e0 ec28aec0 ec28aebc 
Call Trace:
 [<c04c81e0>] avc_has_perm+0x3c/0x46
 [<c0625765>] __mutex_lock_slowpath+0x4d/0x7c
 [<c06257a3>] .text.lock.mutex+0xf/0x14
 [<c04552f0>] generic_file_aio_write+0x45/0xac
 [<ee09cea1>] ext3_file_write+0x19/0x83 [ext3]
 [<c0474003>] do_sync_write+0xb6/0xf1
 [<c06273b5>] do_page_fault+0x0/0xc29
 [<c0432293>] autoremove_wake_function+0x0/0x2d
 [<c0473f4d>] do_sync_write+0x0/0xf1
 [<c047488c>] vfs_write+0xa1/0x143
 [<c0474eb6>] sys_write+0x3c/0x63
 [<c0405463>] syscall_call+0x7/0xb
 =======================
INFO: task httpd:4205 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
httpd         D 0001F1A2  2352  4205   1240          4206  4204 (NOTLB)
       dc0fce88 00000286 66558d05 0001f1a2 000000e3 00000007 d40c5aa0 665745ba 
       0001f1a2 0001b8b5 d40c5bac c1fc62e8 ed5483c0 00027967 00000005 000b0f25 
       ed5483c0 00000006 00000000 000000e3 000000ea c04c81e0 ec28aec0 ec28aebc 
Call Trace:
 [<c04c81e0>] avc_has_perm+0x3c/0x46
 [<c0625765>] __mutex_lock_slowpath+0x4d/0x7c
 [<c06257a3>] .text.lock.mutex+0xf/0x14
 [<c04552f0>] generic_file_aio_write+0x45/0xac
 [<ee09cea1>] ext3_file_write+0x19/0x83 [ext3]
 [<c0474003>] do_sync_write+0xb6/0xf1
 [<c0432293>] autoremove_wake_function+0x0/0x2d
 [<c0473f4d>] do_sync_write+0x0/0xf1
 [<c047488c>] vfs_write+0xa1/0x143
 [<c0474eb6>] sys_write+0x3c/0x63
 [<c0405463>] syscall_call+0x7/0xb
 =======================
INFO: task httpd:4243 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
httpd         D 0001F1B7  2284  4243   1240          4244  4242 (NOTLB)
       eb2f8e88 00000286 75a7c073 0001f1b7 000000e3 00000007 d6c9b000 75a9a436 
       0001f1b7 0001e3c3 d6c9b10c c1fbab60 e89ea3c0 000127c0 00000005 000dc3e3 
       e89ea3c0 00000006 00000000 000000e3 000000ea c04c81e0 ec28aec0 ec28aebc 
Call Trace:
 [<c04c81e0>] avc_has_perm+0x3c/0x46
 [<c0625765>] __mutex_lock_slowpath+0x4d/0x7c
 [<c06257a3>] .text.lock.mutex+0xf/0x14
 [<c04552f0>] generic_file_aio_write+0x45/0xac
 [<ee09cea1>] ext3_file_write+0x19/0x83 [ext3]
 [<c0474003>] do_sync_write+0xb6/0xf1
 [<c0432293>] autoremove_wake_function+0x0/0x2d
 [<c0473f4d>] do_sync_write+0x0/0xf1
 [<c047488c>] vfs_write+0xa1/0x143
 [<c0474eb6>] sys_write+0x3c/0x63
 [<c0405463>] syscall_call+0x7/0xb
 =======================
INFO: task httpd:4244 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
httpd         D 0001F1AA  1948  4244   1240          4245  4243 (NOTLB)
       d04aee88 00000282 6da8a41a 0001f1aa 000000e3 00000007 e9833550 6dadc568 
       0001f1aa 0005214e e983365c c1fc62e8 ed548040 00034240 00000005 000e2a0c 
       ed548040 00000006 00000000 000000e3 000000ea c04c81e0 ec28aec0 ec28aebc 
Call Trace:
 [<c04c81e0>] avc_has_perm+0x3c/0x46
 [<c0625765>] __mutex_lock_slowpath+0x4d/0x7c
 [<c06257a3>] .text.lock.mutex+0xf/0x14
 [<c04552f0>] generic_file_aio_write+0x45/0xac
 [<ee09cea1>] ext3_file_write+0x19/0x83 [ext3]
 [<c0474003>] do_sync_write+0xb6/0xf1
 [<c05f7b62>] __tcp_push_pending_frames+0x69c/0x751
 [<c0432293>] autoremove_wake_function+0x0/0x2d
 [<c0473f4d>] do_sync_write+0x0/0xf1
 [<c047488c>] vfs_write+0xa1/0x143
 [<c0474eb6>] sys_write+0x3c/0x63
 [<c0405463>] syscall_call+0x7/0xb
 =======================
INFO: task httpd:4249 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
httpd         D 0001F19C  2292  4249   1240          4250  4248 (NOTLB)
       debcfe88 00000286 c69c76a4 0001f19c 000000e3 00000007 cd53e000 c6b62490 
       0001f19c 0019adec cd53e10c c1fc62e8 e772b200 e62cffa8 00000005 000af8cf 
       e772b200 00000006 00000000 000000e3 000000ea c04c81e0 ec28aec0 ec28aebc 
Call Trace:
 [<c04c81e0>] avc_has_perm+0x3c/0x46
 [<c0625765>] __mutex_lock_slowpath+0x4d/0x7c
 [<c06257a3>] .text.lock.mutex+0xf/0x14
 [<c04552f0>] generic_file_aio_write+0x45/0xac
 [<ee09cea1>] ext3_file_write+0x19/0x83 [ext3]
 [<c0474003>] do_sync_write+0xb6/0xf1
 [<c0432293>] autoremove_wake_function+0x0/0x2d
 [<c0473f4d>] do_sync_write+0x0/0xf1
 [<c047488c>] vfs_write+0xa1/0x143
 [<c0474eb6>] sys_write+0x3c/0x63
 [<c0405463>] syscall_call+0x7/0xb
 =======================
INFO: task httpd:4283 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
httpd         D 0001F1B8  2204  4283   1240          4284  4282 (NOTLB)
       d4c44e88 00000286 6b6e42c1 0001f1b8 000000e3 00000007 dc1fa550 6b736a6c 
       0001f1b8 000527ab dc1fa65c c1fbab60 d5ef3200 0001d888 00000005 000a90f3 
       d5ef3200 00000006 00000000 000000e3 000000ea c04c81e0 ec28aec0 ec28aebc 
Call Trace:
 [<c04c81e0>] avc_has_perm+0x3c/0x46
 [<c0625765>] __mutex_lock_slowpath+0x4d/0x7c
 [<c06257a3>] .text.lock.mutex+0xf/0x14
 [<c04552f0>] generic_file_aio_write+0x45/0xac
 [<ee09cea1>] ext3_file_write+0x19/0x83 [ext3]
 [<c0474003>] do_sync_write+0xb6/0xf1
 [<c0432293>] autoremove_wake_function+0x0/0x2d
 [<c0473f4d>] do_sync_write+0x0/0xf1
 [<c047488c>] vfs_write+0xa1/0x143
 [<c0474eb6>] sys_write+0x3c/0x63
 [<c0405463>] syscall_call+0x7/0xb
 =======================
INFO: task httpd:4310 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
httpd         D 0001F1BD  2292  4310   1240          4311  4309 (NOTLB)
       eabe5e88 00000282 d8be068d 0001f1bd 000000e3 00000008 cba52550 d8d47071 
       0001f1bd 001669e4 cba5265c c1fc0724 d95a0580 e64db4e4 00000005 000af9fb 
       d95a0580 00000006 00000000 000000e3 000000ea c04c81e0 ec28aec0 ec28aebc 
Call Trace:
 [<c04c81e0>] avc_has_perm+0x3c/0x46
 [<c0625765>] __mutex_lock_slowpath+0x4d/0x7c
 [<c06257a3>] .text.lock.mutex+0xf/0x14
 [<c04552f0>] generic_file_aio_write+0x45/0xac
 [<ee09cea1>] ext3_file_write+0x19/0x83 [ext3]
 [<c0474003>] do_sync_write+0xb6/0xf1
 [<c0432293>] autoremove_wake_function+0x0/0x2d
 [<c0473f4d>] do_sync_write+0x0/0xf1
 [<c047488c>] vfs_write+0xa1/0x143
 [<c0474eb6>] sys_write+0x3c/0x63
 [<c0405463>] syscall_call+0x7/0xb
 =======================
INFO: task httpd:4324 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
httpd         D 0001F343  1948  4324   1240          4325  4323 (NOTLB)
       e5932e88 00000282 47ba438e 0001f343 000000e3 00000007 d3066aa0 47bc7d78 
       0001f343 000239ea d3066bac c1fbab60 c9081040 0003c9cc 00000005 000de1b6 
       c9081040 00000006 00000000 000000e3 000000ea c04c81e0 ec28aec0 ec28aebc 
Call Trace:
 [<c04c81e0>] avc_has_perm+0x3c/0x46
 [<c0625765>] __mutex_lock_slowpath+0x4d/0x7c
 [<c06257a3>] .text.lock.mutex+0xf/0x14
 [<c04552f0>] generic_file_aio_write+0x45/0xac
 [<ee09cea1>] ext3_file_write+0x19/0x83 [ext3]
 [<c0474003>] do_sync_write+0xb6/0xf1
 [<c05f7b62>] __tcp_push_pending_frames+0x69c/0x751
 [<c0432293>] autoremove_wake_function+0x0/0x2d
 [<c0473f4d>] do_sync_write+0x0/0xf1
 [<c047488c>] vfs_write+0xa1/0x143
 [<c0474eb6>] sys_write+0x3c/0x63
 [<c0405463>] syscall_call+0x7/0xb
 =======================
agetty invoked oom-killer: gfp_mask=0x200d2, order=0, oomkilladj=0
 [<c0456933>] out_of_memory+0x72/0x1a3
 [<c0457f6b>] __alloc_pages+0x24e/0x2cf
 [<c046a644>] read_swap_cache_async+0x2f/0xac
 [<c045e63f>] swapin_readahead+0x3a/0x58
 [<c04626ce>] __handle_mm_fault+0xe4d/0x1482
 [<ee065ada>] do_get_write_access+0x435/0x462 [jbd]
 [<c0541362>] tty_wait_until_sent+0xb2/0xb9
 [<c0626450>] _spin_lock_irqsave+0x8/0x28
 [<c04edc6c>] __next_cpu+0x12/0x21
 [<c0419a8a>] find_busiest_group+0x151/0x352
 [<c045337f>] find_get_page+0x3e/0x43
 [<c0627a79>] do_page_fault+0x6c4/0xc29
 [<c0626635>] __reacquire_kernel_lock+0x28/0x39
 [<c0624d4e>] schedule+0x81a/0x84d
 [<c06273b5>] do_page_fault+0x0/0xc29
 [<c04055e7>] error_code+0x2b/0x30
 [<c04f21dc>] __copy_to_user_ll+0xd5/0xdc
 [<c053fb14>] copy_from_read_buf+0x72/0xcf
 [<c05410f2>] read_chan+0x46a/0x5a4
 [<c04c88cc>] file_has_perm+0x7f/0x88
 [<c041ae02>] default_wake_function+0x0/0xc
 [<c0540c88>] read_chan+0x0/0x5a4
 [<c053e005>] tty_read+0x5f/0xb4
 [<c053dfa6>] tty_read+0x0/0xb4
 [<c04749cd>] vfs_read+0x9f/0x141
 [<c0474e53>] sys_read+0x3c/0x63
 [<c0405463>] syscall_call+0x7/0xb
 =======================
DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
DMA32 per-cpu: empty
Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:173
cpu 0 cold: high 62, batch 15 used:58
cpu 1 hot: high 186, batch 31 used:10
cpu 1 cold: high 62, batch 15 used:58
cpu 2 hot: high 186, batch 31 used:25
cpu 2 cold: high 62, batch 15 used:58
HighMem per-cpu:
cpu 0 hot: high 186, batch 31 used:30
cpu 0 cold: high 62, batch 15 used:61
cpu 1 hot: high 186, batch 31 used:21
cpu 1 cold: high 62, batch 15 used:60
cpu 2 hot: high 186, batch 31 used:30
cpu 2 cold: high 62, batch 15 used:56
Free pages:       52228kB (504kB HighMem)
Active:157016 inactive:308846 dirty:0 writeback:0 unstable:0 free:13057 slab:8096 mapped-file:712 mapped-anon:473104 pagetables:7565
DMA free:8040kB min:72kB low:88kB high:108kB active:272kB inactive:128kB present:16384kB pages_scanned:2718 all_unreclaimable? yes
lowmem_reserve[]: 0 0 711 1992
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 711 1992
Normal free:43684kB min:3376kB low:4220kB high:5064kB active:7880kB inactive:573660kB present:729080kB pages_scanned:1101174 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 10240
HighMem free:504kB min:512kB low:2028kB high:3544kB active:619912kB inactive:661596kB present:1310728kB pages_scanned:6213837 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 0
DMA: 0*4kB 1*8kB 0*16kB 1*32kB 1*64kB 2*128kB 2*256kB 2*512kB 2*1024kB 2*2048kB 0*4096kB = 8040kB
DMA32: empty
Normal: 61*4kB 3070*8kB 604*16kB 46*32kB 3*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 1*4096kB = 43684kB
HighMem: 26*4kB 2*8kB 0*16kB 0*32kB 2*64kB 0*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 504kB
732 pagecache pages
Swap cache: add 532128, delete 532126, find 65613/89178, race 12+90
Free swap  = 0kB
Total swap = 1036184kB
Out of memory: Killed process 4075, UID 48, (httpd).
[root@alpha3 ~]# xm console web
klogd invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
 [<c0456933>] out_of_memory+0x72/0x1a3
 [<c0457f6b>] __alloc_pages+0x24e/0x2cf
 [<c0459354>] __do_page_cache_readahead+0xc1/0x180
 [<c0455eb2>] filemap_nopage+0x157/0x349
 [<c0461bac>] __handle_mm_fault+0x32b/0x1482
 [<c0624cf4>] schedule+0x7c0/0x84d
 [<c0626450>] _spin_lock_irqsave+0x8/0x28
 [<c0626450>] _spin_lock_irqsave+0x8/0x28
 [<c0432326>] finish_wait+0x2a/0x4b
 [<c0626478>] _spin_lock_irq+0x8/0x18
 [<c04214a4>] do_syslog+0x175/0x321
 [<c0627a79>] do_page_fault+0x6c4/0xc29
 [<c04a579c>] pde_users_dec+0xb/0x29
 [<c06273b5>] do_page_fault+0x0/0xc29
 [<c04055e7>] error_code+0x2b/0x30
 =======================
DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
DMA32 per-cpu: empty
Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:173
cpu 0 cold: high 62, batch 15 used:58
cpu 1 hot: high 186, batch 31 used:10
cpu 1 cold: high 62, batch 15 used:58
cpu 2 hot: high 186, batch 31 used:25
cpu 2 cold: high 62, batch 15 used:58
HighMem per-cpu:
cpu 0 hot: high 186, batch 31 used:30
cpu 0 cold: high 62, batch 15 used:61
cpu 1 hot: high 186, batch 31 used:21
cpu 1 cold: high 62, batch 15 used:60
cpu 2 hot: high 186, batch 31 used:30
cpu 2 cold: high 62, batch 15 used:56
Free pages:       52228kB (504kB HighMem)
Active:162159 inactive:303607 dirty:0 writeback:0 unstable:0 free:13057 slab:8096 mapped-file:712 mapped-anon:473104 pagetables:7565
DMA free:8040kB min:72kB low:88kB high:108kB active:272kB inactive:128kB present:16384kB pages_scanned:2910 all_unreclaimable? yes
lowmem_reserve[]: 0 0 711 1992
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 711 1992
Normal free:43684kB min:3376kB low:4220kB high:5064kB active:30080kB inactive:551460kB present:729080kB pages_scanned:1125842 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 10240
HighMem free:504kB min:512kB low:2028kB high:3544kB active:618284kB inactive:662840kB present:1310728kB pages_scanned:6228870 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 0
DMA: 0*4kB 1*8kB 0*16kB 1*32kB 1*64kB 2*128kB 2*256kB 2*512kB 2*1024kB 2*2048kB 0*4096kB = 8040kB
DMA32: empty
Normal: 61*4kB 3070*8kB 604*16kB 46*32kB 3*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 1*4096kB = 43684kB
HighMem: 26*4kB 2*8kB 0*16kB 0*32kB 2*64kB 0*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 504kB
732 pagecache pages
Swap cache: add 532128, delete 532126, find 65613/89178, race 12+90
Free swap  = 0kB
Total swap = 1036184kB
httpd invoked oom-killer: gfp_mask=0x200d2, order=0, oomkilladj=0
 [<c0456933>] out_of_memory+0x72/0x1a3
 [<c0457f6b>] __alloc_pages+0x24e/0x2cf
 [<c045edd7>] do_wp_page+0x49c/0x8a3
 [<c0434a44>] ktime_get_ts+0x16/0x44
 [<c044bc66>] delayacct_end+0x58/0x7a
 [<c0453596>] sync_page+0x0/0x3b
 [<c044bd35>] __delayacct_blkio_end+0x32/0x35
 [<c06254c1>] __wait_on_bit_lock+0x4b/0x52
 [<c0453509>] __lock_page+0x52/0x59
 [<c0462bcc>] __handle_mm_fault+0x134b/0x1482
 [<c04c81e0>] avc_has_perm+0x3c/0x46
 [<ee065ada>] do_get_write_access+0x435/0x462 [jbd]
 [<ee06527c>] __journal_file_buffer+0x116/0x1ed [jbd]
 [<c04ef372>] prio_tree_insert+0x1b/0x1ef
 [<c0627a79>] do_page_fault+0x6c4/0xc29
 [<c06273b5>] do_page_fault+0x0/0xc29
 [<c04055e7>] error_code+0x2b/0x30
 =======================
DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
DMA32 per-cpu: empty
Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:174
cpu 0 cold: high 62, batch 15 used:58
cpu 1 hot: high 186, batch 31 used:10
cpu 1 cold: high 62, batch 15 used:58
cpu 2 hot: high 186, batch 31 used:25
cpu 2 cold: high 62, batch 15 used:58
HighMem per-cpu:
cpu 0 hot: high 186, batch 31 used:30
cpu 0 cold: high 62, batch 15 used:61
cpu 1 hot: high 186, batch 31 used:21
cpu 1 cold: high 62, batch 15 used:60
cpu 2 hot: high 186, batch 31 used:30
cpu 2 cold: high 62, batch 15 used:56
Free pages:       52228kB (504kB HighMem)
Active:171440 inactive:294390 dirty:0 writeback:0 unstable:0 free:13057 slab:8095 mapped-file:712 mapped-anon:473104 pagetables:7565
DMA free:8040kB min:72kB low:88kB high:108kB active:272kB inactive:128kB present:16384kB pages_scanned:3102 all_unreclaimable? yes
lowmem_reserve[]: 0 0 711 1992
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 711 1992
Normal free:43684kB min:3376kB low:4220kB high:5064kB active:44708kB inactive:537216kB present:729080kB pages_scanned:1145748 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 10240
HighMem free:504kB min:512kB low:2028kB high:3544kB active:640780kB inactive:640216kB present:1310728kB pages_scanned:6248337 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 0
DMA: 0*4kB 1*8kB 0*16kB 1*32kB 1*64kB 2*128kB 2*256kB 2*512kB 2*1024kB 2*2048kB 0*4096kB = 8040kB
DMA32: empty
Normal: 61*4kB 3070*8kB 604*16kB 46*32kB 3*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 1*4096kB = 43684kB
HighMem: 26*4kB 2*8kB 0*16kB 0*32kB 2*64kB 0*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 504kB
732 pagecache pages
Swap cache: add 532128, delete 532126, find 65613/89178, race 12+90
Free swap  = 0kB
Total swap = 1036184kB
httpd invoked oom-killer: gfp_mask=0x280d2, order=0, oomkilladj=0
 [<c0456933>] out_of_memory+0x72/0x1a3
 [<c0457f6b>] __alloc_pages+0x24e/0x2cf
 [<c046211e>] __handle_mm_fault+0x89d/0x1482
 [<ee06527c>] __journal_file_buffer+0x116/0x1ed [jbd]
 [<c04ef372>] prio_tree_insert+0x1b/0x1ef
 [<c0627a79>] do_page_fault+0x6c4/0xc29
 [<c06273b5>] do_page_fault+0x0/0xc29
 [<c04055e7>] error_code+0x2b/0x30
 =======================
DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
DMA32 per-cpu: empty
Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:175
cpu 0 cold: high 62, batch 15 used:58
cpu 1 hot: high 186, batch 31 used:9
cpu 1 cold: high 62, batch 15 used:58
cpu 2 hot: high 186, batch 31 used:25
cpu 2 cold: high 62, batch 15 used:58
HighMem per-cpu:
cpu 0 hot: high 186, batch 31 used:30
cpu 0 cold: high 62, batch 15 used:61
cpu 1 hot: high 186, batch 31 used:21
cpu 1 cold: high 62, batch 15 used:60
cpu 2 hot: high 186, batch 31 used:30
cpu 2 cold: high 62, batch 15 used:56
Free pages:       52228kB (504kB HighMem)
Active:171718 inactive:293990 dirty:0 writeback:0 unstable:0 free:13057 slab:8095 mapped-file:712 mapped-anon:473104 pagetables:7565
DMA free:8040kB min:72kB low:88kB high:108kB active:272kB inactive:128kB present:16384kB pages_scanned:3166 all_unreclaimable? yes
lowmem_reserve[]: 0 0 711 1992
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 711 1992
Normal free:43684kB min:3376kB low:4220kB high:5064kB active:48840kB inactive:532724kB present:729080kB pages_scanned:1187852 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 10240
HighMem free:504kB min:512kB low:2028kB high:3544kB active:637760kB inactive:643108kB present:1310728kB pages_scanned:6288762 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 0
DMA: 0*4kB 1*8kB 0*16kB 1*32kB 1*64kB 2*128kB 2*256kB 2*512kB 2*1024kB 2*2048kB 0*4096kB = 8040kB
DMA32: empty
Normal: 61*4kB 3070*8kB 604*16kB 46*32kB 3*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 1*4096kB = 43684kB
HighMem: 26*4kB 2*8kB 0*16kB 0*32kB 2*64kB 0*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 504kB
732 pagecache pages
Swap cache: add 532128, delete 532126, find 65613/89178, race 12+90
Free swap  = 0kB
Total swap = 1036184kB
Comment 10 Alexander Lindqvist 2013-01-09 04:20:23 EST
Same problem on an EL6 KVM guest webserver. Memory usage keeps growing over a couple of days until the OOM killer kicks in. This server has 3 GB RAM and 4 GB swap.

From httpd.conf on EL6 webserver

# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# ServerLimit: maximum value for MaxClients for the lifetime of the server
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule prefork.c>
StartServers       2
MinSpareServers    3
MaxSpareServers   3
ServerLimit      75
MaxClients       75
MaxRequestsPerChild  100
</IfModule>
Comment 11 Joe Orton 2013-05-29 06:31:35 EDT
Given that there is no clear reproduction case for a PHP (or httpd) bug described here, I am going to close this bug.

If you need help diagnosing or reproducing a particular issue seen with Red Hat Enterprise Linux, please contact Red Hat Support.  http://www.redhat.com/support/
Comment 12 Alexander Lindqvist 2013-05-29 06:45:40 EDT
FYI: we traced this down to a Joomla! CMS plugin on one of the hosted websites. Removing the plugin solved the problem.
