Bug 504342 - RHN Proxy on 64bit bloats httpd until oom-killer triggers
Status: CLOSED CURRENTRELEASE
Product: Red Hat Network
Classification: Red Hat
Component: RHN/Other
RHN Devel
x86_64 Linux
low Severity medium
Assigned To: Bryan Kearney
Red Hat Network Quality Assurance
US=108015
Depends On: 503187
Blocks: 510541
Reported: 2009-06-05 13:02 EDT by Miroslav Suchý
Modified: 2013-01-10 05:33 EST (History)
6 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 503187
Environment:
Last Closed: 2009-08-10 09:02:31 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Miroslav Suchý 2009-06-05 13:02:15 EDT
+++ This bug was initially created as a clone of Bug #503187 +++

Description of problem:

Related to similar issues on Satellite covered in BZ#485532 and BZ#465796.

Proxy httpd processes grow until they trigger the oom-killer.


May 25 07:44:22 mlhw72kx kernel: automount invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
May 25 07:44:22 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 25 07:44:29 mlhw72kx kernel: Out of memory: Killed process 18662 (httpd).
May 25 07:44:29 mlhw72kx kernel: nfsd invoked oom-killer: gfp_mask=0xd0, order=0, oomkilladj=0
May 25 07:44:29 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 25 07:44:32 mlhw72kx kernel: nfsd invoked oom-killer: gfp_mask=0xd0, order=0, oomkilladj=0
May 25 07:44:32 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 25 10:05:38 mlhw72kx kernel: ypbind invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
May 25 10:05:38 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 25 10:05:42 mlhw72kx kernel: Out of memory: Killed process 27424 (httpd).
May 25 10:05:42 mlhw72kx kernel: portmap invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
May 25 10:05:42 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 25 10:10:47 mlhw72kx kernel: nfsd invoked oom-killer: gfp_mask=0xd0, order=0, oomkilladj=0
May 25 10:10:47 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 25 10:10:55 mlhw72kx kernel: Out of memory: Killed process 27415 (httpd).
May 25 10:10:55 mlhw72kx kernel: klogd invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
May 25 10:10:55 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 25 10:14:47 mlhw72kx kernel: c2s invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
May 25 10:14:47 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 25 10:14:50 mlhw72kx kernel: Out of memory: Killed process 27416 (httpd).
May 25 10:18:17 mlhw72kx kernel: automount invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
May 25 10:18:17 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 25 10:18:21 mlhw72kx kernel: Out of memory: Killed process 27423 (httpd).
May 25 15:52:02 mlhw72kx kernel: s2s invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
May 25 15:52:02 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 25 15:52:39 mlhw72kx kernel: Out of memory: Killed process 28991 (httpd).
May 25 15:52:39 mlhw72kx kernel: syslogd invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
May 25 15:52:39 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 26 02:46:21 mlhw72kx kernel: irqbalance invoked oom-killer: gfp_mask=0x80d0, order=0, oomkilladj=0
May 26 02:46:21 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 26 02:46:28 mlhw72kx kernel: Out of memory: Killed process 32142 (httpd).
May 26 02:46:28 mlhw72kx kernel: nfsd invoked oom-killer: gfp_mask=0xd0, order=0, oomkilladj=0
May 26 02:46:28 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 26 02:52:25 mlhw72kx kernel: nfsd invoked oom-killer: gfp_mask=0xd0, order=0, oomkilladj=0
May 26 02:52:25 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 26 02:52:41 mlhw72kx kernel: Out of memory: Killed process 7062 (httpd).
May 26 02:55:39 mlhw72kx kernel: nsrexecd invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
May 26 02:55:39 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 26 02:55:49 mlhw72kx kernel: Out of memory: Killed process 7023 (httpd).
May 26 02:57:32 mlhw72kx kernel: resolver invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
May 26 02:57:32 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 26 02:57:37 mlhw72kx kernel: Out of memory: Killed process 7107 (httpd).
May 26 02:57:37 mlhw72kx kernel: nfsd invoked oom-killer: gfp_mask=0xd0, order=0, oomkilladj=0
May 26 02:57:43 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 26 03:01:56 mlhw72kx kernel: httpd invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
May 26 03:01:56 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 26 03:01:57 mlhw72kx kernel: Out of memory: Killed process 7066 (httpd).
May 26 03:01:57 mlhw72kx kernel: httpd invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
May 26 03:01:58 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 26 03:02:01 mlhw72kx kernel: nfsd invoked oom-killer: gfp_mask=0xd0, order=0, oomkilladj=0
May 26 03:02:01 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 26 03:12:22 mlhw72kx kernel: nfsd invoked oom-killer: gfp_mask=0xd0, order=0, oomkilladj=0
May 26 03:12:22 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
May 26 04:05:11 mlhw72kx kernel: Out of memory: Killed process 7065 (httpd).
May 26 04:05:11 mlhw72kx kernel: crond invoked oom-killer: gfp_mask=0x200d2, order=0, oomkilladj=0
May 26 04:05:13 mlhw72kx kernel:  [<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5


top - 16:13:48 up 1 day, 11:48,  1 user,  load average: 0.04, 0.12, 0.06
Tasks: 175 total,   1 running, 174 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.2%us,  0.6%sy,  0.0%ni, 95.6%id,  2.5%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   3995612k total,  3981184k used,    14428k free,   132660k buffers
Swap:  2097144k total,      108k used,  2097036k free,   891432k cached

 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
29307 apache    15   0  454m 175m 6616 S  0.0  4.5   0:27.98 httpd              
29305 apache    15   0  453m 174m 6616 S  0.0  4.5   0:28.40 httpd              
29309 apache    15   0  451m 171m 6612 S  0.0  4.4   0:27.87 httpd              
29308 apache    15   0  450m 171m 6616 S  0.0  4.4   0:28.64 httpd              
29313 apache    15   0  450m 171m 6612 S  0.0  4.4   0:28.32 httpd              
29312 apache    16   0  450m 170m 6616 S  0.0  4.4   0:28.08 httpd              
29390 apache    15   0  449m 170m 6616 S  0.0  4.4   0:27.57 httpd              
29389 apache    15   0  447m 168m 6612 S  0.0  4.3   0:27.95 httpd              
29306 apache    15   0  446m 167m 6616 S  0.0  4.3   0:28.31 httpd              
29626 apache    15   0  446m 166m 6616 S  0.0  4.3   0:26.38 httpd              
29627 apache    16   0  445m 166m 6616 S  2.5  4.3   0:26.40 httpd              
29315 apache    15   0  443m 164m 6608 S  0.0  4.2   0:28.84 httpd              
29388 apache    15   0  440m 161m 6612 S  0.0  4.1   0:27.53 httpd              
29330 apache    15   0  439m 159m 6612 S  0.0  4.1   0:28.40 httpd              
29336 apache    15   0  438m 159m 6620 S  0.0  4.1   0:28.23 httpd              
30439 apache    15   0  437m 157m 6604 S  0.0  4.0   0:23.50 httpd              
29625 apache    16   0  422m 143m 6612 S  0.0  3.7   0:26.82 httpd      




The proxy configuration does not have the benefit of Apache2::SizeLimit, so nothing limits size growth. As with Satellite, reducing MaxRequestsPerChild to 200 appears to keep the httpd processes at a sane size.
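For reference, the Satellite-side mitigation mentioned above (Apache2::SizeLimit) is wired into mod_perl 2 roughly as follows. This is an illustrative sketch, not the Proxy fix from this bug; the 250 MB threshold is a made-up example value.

```apache
# Illustrative only: how Apache2::SizeLimit caps a mod_perl child process.
# A child exceeding the limit is retired after finishing its request.
PerlLoadModule Apache2::SizeLimit
<Perl>
    Apache2::SizeLimit->set_max_process_size(250_000);  # kB; example value
</Perl>
PerlCleanupHandler Apache2::SizeLimit
```

MaxRequestsPerChild achieves a similar effect more crudely: every child is recycled after a fixed request count, regardless of its actual size.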


A customer testing this reported:

 "httpd starts off with about 15m of resident memory and slowly grows to 70-80m, at which time the process is retired and a new one started.  So this seems to be a good workaround."
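A quick way to watch for the growth pattern described above (and confirm workers are being recycled around the 70-80m mark) is to poll resident set sizes; a sketch, to be run on the proxy host while it is serving traffic:

```shell
# Print per-worker RSS (kB) for httpd, largest first, plus the total.
ps -C httpd -o pid=,rss= --sort=-rss | awk '{print; sum += $2} END {print "total:", sum, "kB"}'
```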

--- Additional comment from msuchy@redhat.com on 2009-06-05 13:01:04 EDT ---

Commit 866cfc28694446faea177c3eb2e74a545b658bff for WebUI installer.
Commit 501703a251702723c320a9ee46551b171ab28f5b for CLI installer.
Comment 1 Miroslav Suchý 2009-06-05 13:06:14 EDT
Please add this patch to your hosted code:

http://git.fedorahosted.org/git/?p=spacewalk.git;a=blobdiff;f=web/html/applications/rhn-proxy/5.3/httpd/rhn_proxy.conf;fp=web/html/applications/rhn-proxy/5.3/httpd/rhn_proxy.conf;h=98a68f056e992a16824f0e2dcde3a824d212dc92;hp=56ecec7f84279974943c0fa8f00c431774dc8015;hb=d54d458fefa7e0adc1304be92cb73e410cb87aa0;hpb=c1b011443463f9ae3f4f9a2ce386005b1b49f27e

diff --git a/web/html/applications/rhn-proxy/5.3/httpd/rhn_proxy.conf b/web/html/applications/rhn-proxy/5.3/httpd/rhn_proxy.conf
index 56ecec7..98a68f0 100644
--- a/web/html/applications/rhn-proxy/5.3/httpd/rhn_proxy.conf
+++ b/web/html/applications/rhn-proxy/5.3/httpd/rhn_proxy.conf
@@ -1,6 +1,12 @@
 # ** DO NOT EDIT **
 # RHN Proxy handler configuration file
 
+
+<IfModule prefork.c>
+	# bug #503187
+	MaxRequestsPerChild  200
+</IfModule>
+
 # RHN Proxy Server location
 <LocationMatch "^/*">
     # this stanza contains all the handlers for RHN app code
Comment 2 James Bowes 2009-07-20 09:51:20 EDT
To Test:
 * Install a 5.3 proxy (say, by following the testing steps for https://bugzilla.redhat.com/show_bug.cgi?id=502612).
 * Verify that the MaxRequestsPerChild line is present in rhn_proxy.conf in the httpd conf dir.
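For the second step, a check like the following confirms the directive made it into the deployed config. The path shown is the usual location for the proxy handler config and may differ per install:

```shell
# Confirm the prefork stanza from the patch is present in the deployed config.
# /etc/httpd/conf.d/rhn_proxy.conf is an assumed path; adjust as needed.
grep -B1 -A1 'MaxRequestsPerChild' /etc/httpd/conf.d/rhn_proxy.conf
```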
Comment 3 Denise Hughes 2009-07-21 10:54:36 EDT
Verified in webdev.

Testopia ID = 10869
https://testopia.devel.redhat.com/bugzilla/tr_show_case.cgi?case_id=10869
Comment 4 Denise Hughes 2009-07-31 16:34:05 EDT
Cannot verify in QA because of filer issues. This was verified in dev; setting status to verified.
