From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; Q312461)

Description of problem:
Once an Apache logfile reaches 2GB in size, the server begins to accumulate errors like the following:

    child pid 29561 exit signal File size limit exceeded (25)

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Pad/create a logfile to just under 2GB (see the sketch below)
2. Start Apache and generate traffic to produce more log entries
3. Tail the main error log for errors

Actual Results:
On a heavy-traffic server, once the log threshold is reached, pages are still served, but the excessive child exits make the server sluggish.

Expected Results:
There is no longer a 2GB file size limit for Linux under 2.4 kernels. Is there a way to get Apache to recognize and log beyond 2GB?

Additional info:
If httpd receives a HUP or USR1, or undergoes a complete restart, it will fail to start, with the following error in the virtual host's error log:

    [error] (27)File too large: could not open transfer log file /var/log/httpd/virt_combo.log.

This 2GB limit is being reached even with daily log rotation. The site doesn't consistently reach the 2GB mark, but it comes close very often. I have since doubled log rotation for Apache (twice per day) to avoid this problem.
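For step 1, a small helper like the following can pad the logfile almost instantly; this is only a sketch, and the path and 4KB margin are example values, not anything from the report:

    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/types.h>

    int main(void)
    {
        const char *path = "access_log";      /* example path; point at the log under test */
        off_t target = 2147483647L - 4096;    /* just under 2^31-1, leaving room to cross */

        int fd = open(path, O_WRONLY | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* ftruncate() extends the file as a sparse hole, so padding to
         * ~2GB is instant and uses almost no actual disk space. */
        if (ftruncate(fd, target) != 0) { perror("ftruncate"); return 1; }

        close(fd);
        return 0;
    }

Once Apache's writes push the file past the 2GB boundary, the "File size limit exceeded (25)" child exits should start appearing in the main error log.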
Overlooked the Apache version info: Apache-1.3.22-2
This just occurred to us at http://www.amdmb.com. The server slowed to a crawl and weird database errors (PHP & MySQL with vBulletin) started to occur, where certain rows wouldn't UPDATE. Rotated the massive log out and everything is now back to normal.
I've had a similar problem with the error_log: when it reached 2GB, all httpd processes just died. I have logging turned off on many servers, since both access_log (lots of traffic) and error_log (stupid debug info from Perl modules) reached 2GB in less than a day. I thought it might have been a limit introduced when recompiling the source RPM to raise the hard server limit, but seeing this Bugzilla entry, I guess not.

Matthias
nalin, has this been fixed yet?
I doubt this is going to get fixed soon: an httpd binary compiled with large file support would become incompatible with any modules which weren't compiled with large file support. The workaround (to rotate logs more often!) is probably the only option in the short term.
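To spell out the incompatibility: building with large file support (e.g. -D_FILE_OFFSET_BITS=64) doubles the size of off_t on 32-bit systems, so any structure shared between the httpd core and a module that contains an off_t changes layout, and a module built without the flag would read and write the wrong fields. A minimal sketch of the effect, using a hypothetical struct rather than Apache's real module API:

    #include <stdio.h>
    #include <sys/types.h>

    /* Stand-in for a structure passed between httpd and its modules. */
    struct log_state {
        int   fd;
        off_t bytes_written;   /* 4 bytes without LFS on 32-bit glibc, 8 with it */
    };

    int main(void)
    {
        /* Compile once plain and once with -D_FILE_OFFSET_BITS=64 on a
         * 32-bit system: the two builds disagree on both sizes below. */
        printf("sizeof(off_t) = %zu\n", sizeof(off_t));
        printf("sizeof(struct log_state) = %zu\n", sizeof(struct log_state));
        return 0;
    }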
*** Bug 68345 has been marked as a duplicate of this bug. ***
Would this not be a possible DoS attack? You cannot depend on rotating "more often", because a hostile attacker could easily fill the logs up to the 2GB breaking point. This amounts to a DoS because when a log hits the 2GB mark, server performance crumbles, processes sometimes die, and strange things happen with certain scripts where database UPDATEs stop working while INSERTs still succeed. I can't explain why this occurs; I just know that these things happened to me.
Does Apache 2.0 in Limbo still have this restriction? Just wondering.
Well, I see that I'm not the only one this is a problem for. The workaround of rotating logs more often is obviously what I try to do, but when, for whatever reason, a logfile does reach 2GB, my expected behaviour would be for Apache to stop logging to that file, not to crash! I'm absolutely sure that I've had some web servers running (maybe compiled from source?) that didn't crash when the 2GB limit was reached, as it sometimes happened that I would only discover it a week later (the most recent timestamp in the log being one week old) with the server still running fine, merely losing log entries. I doubt this crash is normal, and although I understand that it may not be possible to easily add support for >2GB files, this should definitely be considered a bug and fixed as such!
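For what it's worth, the "File size limit exceeded (25)" in the original report is SIGXFSZ, whose default action is to kill the process when a write would exceed the maximum file size; a process that ignores SIGXFSZ instead gets EFBIG back from write() and can react gracefully. A minimal sketch of that behaviour (the filename is just an example; to actually trigger EFBIG, run it against a file already at the limit or under a small "ulimit -f"):

    #include <stdio.h>
    #include <errno.h>
    #include <signal.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Ignore SIGXFSZ so an oversized write() fails with EFBIG
         * instead of terminating the process (the default action). */
        signal(SIGXFSZ, SIG_IGN);

        int fd = open("big.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) { perror("open"); return 1; }

        if (write(fd, "entry\n", 6) < 0 && errno == EFBIG) {
            /* A server could stop logging here but keep serving requests. */
            fprintf(stderr, "log full: %s\n", strerror(errno));
        }
        close(fd);
        return 0;
    }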
Fair points; a better workaround than "rotating more often" would be to use piped logs plus a rotatelogs process: the stock Apache rotatelogs will truncate the file and start again once a write fails, which is certainly a better failure mode. The 2.0 package in Limbo unfortunately has the same restrictions as 1.3; filing an RFE against that package would be appreciated.
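For reference, piped logging with rotatelogs looks something like the following in httpd.conf; the binary path and the 86400-second (daily) rotation interval here are examples, not a recommendation from this thread:

    TransferLog "|/usr/sbin/rotatelogs /var/log/httpd/access_log 86400"

rotatelogs then opens a fresh access_log.<timestamp> file at each interval, so no single file should ever approach 2GB under normal traffic.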
My previous comment is not correct: I hadn't realized that the process segfaults when reaching the 2GB limit. Another option is to compile the rotatelogs binary with large file support (it should be a simple case of adding "#define _FILE_OFFSET_BITS 64" before the #includes).
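A sketch of what that change looks like; the code around the define is illustrative, not the actual rotatelogs source:

    /* Must appear before any #include so glibc selects the 64-bit off_t ABI. */
    #define _FILE_OFFSET_BITS 64

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* With _FILE_OFFSET_BITS=64, open() transparently behaves like
         * open64(), so writes can proceed past 2GB on 32-bit systems. */
        int fd = open("access_log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) { perror("open"); return 1; }

        if (write(fd, "log entry\n", 10) < 0)
            perror("write");

        close(fd);
        return 0;
    }

Since rotatelogs is a standalone binary, rebuilding it this way sidesteps the module ABI problem described above: with piped logs, httpd only ever writes to the pipe and never has to open the large file itself.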
I posted an RFE on Apache's site linking to this bug report. Here's the URL: http://nagoya.apache.org/bugzilla/show_bug.cgi?id=11053
I actually meant to file an RFE in bugzilla.redhat.com against the httpd package, which I have now done as bug 69520.