From Bugzilla Helper:
User-Agent: Mozilla/5.0 (compatible; Konqueror/3.2; Linux) (KHTML, like Gecko)
Description of problem:
When the /var/log/messages file is significantly larger than RAM (e.g. a 400M log file on a machine with 256M of RAM), the /etc/log.d/scripts/services/kernel script allocates enough memory to make the machine thrash (in this case about 500M). At 12:00 I noticed that the /etc/log.d/scripts/services/kernel script had been running since ~4AM and was showing no signs of completing.
Presumably, if the messages file were 800M on a machine with 256M of RAM, then the day's logs would take more than a day to process...
Version-Release number of selected component (if applicable):
Steps to Reproduce:
Have iptables rules that block all outbound connections, then have someone log in to your machine and run a port scanner for 8 hours that attempts connections as fast as possible.
Then wait for the cron job to run and use all the memory in the machine plus a lot of swap.
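The reproduction above relies on every blocked connection producing a kernel log line. A minimal sketch of rules that would do that, assuming a standard iptables setup; the log prefix is an arbitrary choice for this example, not taken from the original report:

```shell
# Log, then drop, every new outbound connection. Each attempt writes
# one kernel log line to /var/log/messages, so a fast connect() loop
# (e.g. a port scanner running for hours) grows the file rapidly.
iptables -A OUTPUT -m state --state NEW -j LOG --log-prefix "OUTBOUND BLOCKED: "
iptables -A OUTPUT -m state --state NEW -j DROP
```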
Expected Results: I expect that the cron job won't try to read a 400M log file into memory.
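The expected behavior amounts to streaming the log instead of slurping it. A minimal sketch (not logwatch's actual code) of a line-at-a-time pass whose memory use stays constant regardless of file size:

```shell
#!/bin/sh
# Sketch: scan a messages-style log one line at a time with awk.
# awk reads its input line by line, so memory use does not grow
# with the size of the log file. Here we just count kernel lines.
LOG=${1:-/var/log/messages}
awk '/kernel:/ { n++ } END { print n+0 }' "$LOG"
```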
Hello, I tried to reproduce your bug but was not successful. My logwatch works
correctly. I tested the latest logwatch version (logwatch-6.1.2-1).
Could you please test this logwatch version?
system /bin/cat /var/log/messages 2>/dev/null
>/tmp/logwatch.9RJkbpug/sonicwall failed: 256 at /etc/cron.daily/00-logwatch
Above is an error I received while testing this with a data set that is larger
than the /tmp file system.
I haven't been able to reproduce the original problem; it may have been fixed,
so I'll open a new bugzilla if I can reproduce it.
For the moment, please consider comment #2 as the only remaining issue on this bug.
I can't reproduce your problem in comment 2, so I am closing this bug.