Description of problem:
Since upgrading to 3.0.9-rt26.46.el6rt.x86_64, any message that we want to log to syslog using the iptables -j LOG target is not logged.

Version-Release number of selected component (if applicable):
iptables-1.4.7-4.el6.x86_64
kernel-rt-3.0.9-rt26.46.el6rt.x86_64

Steps to Reproduce:
1. Add an iptables rule before everything else:
   iptables -I INPUT 1 -m limit --limit 100/m --limit-burst 100 -j LOG --log-level 4 --log-ip-options --log-prefix "Test iptables logging: "
2. Verify that the rule got at least one hit:
   iptables -nvx -L INPUT
3. Check whether anything got written to /var/log/messages:
   tail -f /var/log/messages

Actual results:
Nothing from that rule gets logged to /var/log/messages.

Expected results:
Up to 100 lines per minute should be logged.

Additional info:
On kernel-rt-2.6.33.9-rt31.75.el6rt.x86_64 it was working as expected. On that kernel, /etc/syslog.conf was the same as the current /etc/rsyslog.conf.
I am not really sure whether this is a workaround to mitigate the issue or the real solution. From /etc/sysconfig/rsyslog:

# Options to syslogd
# syslogd options are deprecated since rsyslog v3
# if you want to use them, switch to compatibility mode 2 by "-c 2"
SYSLOGD_OPTIONS="-c 4"

So, in order to get the old syslogd behavior, just modify the line above so that it reads:

SYSLOGD_OPTIONS="-c 2"

and restart rsyslogd.
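A minimal sketch of that edit, demonstrated on a temporary copy of the file (the temp file is purely illustrative; on a real box you would edit /etc/sysconfig/rsyslog itself, which requires root, and then restart the service):

```shell
# Work on a temporary copy; on a real system you would edit
# /etc/sysconfig/rsyslog directly and then run: service rsyslog restart
tmpconf=$(mktemp)
echo 'SYSLOGD_OPTIONS="-c 4"' > "$tmpconf"

# Switch rsyslog to sysklogd compatibility mode 2
sed -i 's/^SYSLOGD_OPTIONS=.*/SYSLOGD_OPTIONS="-c 2"/' "$tmpconf"

cat "$tmpconf"   # -> SYSLOGD_OPTIONS="-c 2"
rm -f "$tmpconf"
```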
Hi, sorry, I wasn't aware of such a change in the syslogd behaviour :/ Thanks for pointing me to the change needed to retain compatibility with the setup I was using with the previous kernels.
(In reply to comment #2)

Hi,

Did the suggestion from comment 1 really work? If so, the issue should have been present for some time, not just after upgrading the kernel. Or was rsyslog upgraded as well? (From which version?) What is your current version of the rsyslog package?

Using "-c 2" is a workaround if you used a pre-rsyslog-v3 config file, but you should adjust your configuration for the newer versions. Basically you need to add

$ModLoad imklog # provides kernel logging support

to /etc/rsyslog.conf and keep "-c 4" in SYSLOGD_OPTIONS. See http://www.rsyslog.com/doc/v3compatibility.html and the stock rsyslog.conf for other details.
Thomas, thanks for looking at this issue.

We have enabled the imklog plugin and also tested all of the related config parameters that could have some effect on our tests, always restarting rsyslogd, as suggested in the man page and manuals. To no avail: the messages (which were in /proc/kmsg) never reached /var/log/messages. When I set the compatibility mode, things started working.

I have inspected the kernel side and I am sure that, even though there were a few changes in the ipt_LOG routines, it is in the end a printk("%s\n"). So, no real changes there.

Please keep in mind that we started the tests from a fresh configuration (the file created at install time) and then started adding the options to be tested.
Hi,

I had not yet tested the suggested workaround, as I reverted all the machines to the 2.6.33.9-rt31.75.el6rt.x86_64 kernel; with that one I'm not experiencing this specific issue (just booting the machine into 2.6.33.9 fixes the issue without any additional change needed on the filesystem).

The only upgraded packages were kernel-rt and kernel-rt-firmware. Nothing else was changed.

Currently running rsyslog-4.6.2-12.el6.x86_64. /etc/rsyslog.conf is already a v3 config file and the line:

$ModLoad imklog # provides kernel logging support

is already there. /etc/rsyslog.conf appears to be unchanged since the day I did the initial install.
I can confirm that the recently released 3.0.18-rt34.53.el6rt.x86_64 has the same issue. Setting SYSLOGD_OPTIONS="-c 2" "fixes" this specific issue, but I do not know whether that change could break something else, as I've been using -c 4 for years...
I have to correct myself... "-c 2" appears to fix the issue, but only by logging to /proc/kmsg, and the "fix" works only for a limited time... after a certain uptime has passed, it stops logging completely.

Additionally, even with "-c 2", /var/log/messages still does not get populated with most messages (not just iptables-related ones; even messages about interfaces going into promiscuous mode do not get logged).
Additionally: it has not been mentioned until now, but in dmesg everything appears properly... it's just not getting into /var/log/messages.
Created attachment 568658 [details]
imklog patch

(In reply to comment #7)
> I've to correct myself... "-c 2" appear to fix the issue but only by logging
> to /proc/kmsg

Sorry, I don't get the "logging to /proc/kmsg" part.

> and the "fix" works only for a limited time... after a certain
> uptime has passed it stops logging completly.

Logging what? (kernel messages?) Logging where? Does the daemon stop responding at all? Do these produce any output:

# logger asdf
# echo m > /proc/sysrq-trigger

What "-c 2" basically does, in regard to kernel messages, is that it adds "$modload imklog" to the configuration, i.e. it should have no effect.

> additionally even with "-c 2" /var/log/messages still does not get populated
> by most messages (not just iptables related... even messages related to
> interfaces going into promiscuous mode does not get logged)

On the off chance this might be relevant, some messages can get lost because of rate-limiting. There has recently been a bugfix in the upstream imklog code [1] that may or may not help. Could you please try the attached patch?

[1] rsyslog-4.6.2-imklog-fac-decode.patch
Doing a

tail -f /proc/kmsg

reported all the log lines that were missing from /var/log/messages.

Rate limiting shouldn't interfere in my case, since I'm generally logging fewer than 10 lines per hour.

I'm not sure how to apply that patch; I'll try to set up a test machine in order to apply it and recompile the kernel. Currently I'm back on the 2.6 kernels, as I cannot use 3.x without proper logging working :(

Thank you, best regards
(In reply to comment #10)
> doing a
> tail -f /proc/kmsg
> reported all the log lines that were missing from /var/log/messages.

If you run rsyslog and 'tail -f /proc/kmsg' simultaneously, there's a race and one will get part of the messages and the other will get the rest.

> I'm not sure on how to apply that patch, I'll try to setup a testmachine in
> order to do apply it and recompile the kernel.

It's a patch for rsyslog, no need to recompile the kernel. I'll try to build the package later and put a link to it here.

Still, when the logging stops (even with the partial workaround), does sysrq produce any messages that make it to the logs?
When rsyslog stops adding messages to /var/log/messages, you can still see them being added to dmesg and to /proc/kmsg. It is much as if, after the initial data is all read from /proc/kmsg, imklog no longer notices that new data is being added.

Tomas, I will apply the patch you mentioned and run some tests. If I get positive results I will send the modified RPM pkg to evcz.

Also, as an interesting data point: RHEL 6.2 + kernel-rt-vanilla-3.0.18-rt34.53 (meaning the rt kernel base code with all the RT features disabled) works fine for this test. If I use kernel-rt-3.0.18-rt34.53, I see the problems evcz described. On Fedora 16 (rsyslog-5.8.7-1), the kernel-rt works just fine for this test.
Tomas, I have applied the rsyslog-4.6.2-imklog-fac-decode.patch and saw no positive change. In fact, now I don't see any of those few ipt_LOG lines I was used to seeing before rsyslog "stopped" reading from /proc/kmsg.

Any other ideas? I am testing a few theories in the RT kernel as well, e.g. is the RT kernel returning some different error code to announce there is no more data to be read? But so far, I have found nothing.
Yeah, it's very strange that after boot the logging works properly for a few seconds and then stops...
Evcz, are you booting your kernel with 'earlyprintk' and/or 'force_early_printk'?

I was using both options in my tests. Once I removed them and rebooted the kernel, the test works like a charm. Could you please verify that?
I do not have those options in the grub config:

kernel /vmlinuz-3.0.18-rt34.53.el6rt.x86_64 ro root=UUID=xxxxxxxx-48ec-xxxx-a5b4-xxxx1d38xxxx rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD quiet SYSFONT=latarcyrheb-sun16 rhgb crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM

Should I try to add them? Is there another location where those options might have been set?
The kernel command line in grub.conf is the place for these options; if they are not there, they are not in use. Would you mind booting the box with 'debug ignore_loglevel' added and 'quiet' removed from the command line? Just to test another theory. Thanks again for promptly helping to debug this issue.
Adding 'debug ignore_loglevel' made it work, and everything appears to get logged properly!

14:24:45 up 6 min, 1 user, load average: 0.04, 0.07, 0.05

6 minutes passed and I'm still getting logs into /var/log/messages... looks like you got it :)

(In the past I tried setting different "--log-level" values in the iptables rule, but none of them appeared to work.)
Great! Thanks for testing that theory. What's your default console log level?

The 'ignore_loglevel' parameter just tells the system to log everything, ignoring the level (info, warn, debug, ...) a message has. The 'quiet' parameter should just silence the boot messages but not change the console log level.

That said, I have the impression the issue now is to understand which console log level you are using and why it is not logging the test messages (which have log level 4).
First of all, I can confirm that it works OK even when re-adding 'quiet'. As for the default log level, it should be 4 on both kernels...

[root@pink ~]# uname -r ; cat /proc/sys/kernel/printk
2.6.33.9-rt31.79.el6rt.x86_64
4	4	1	7

[root@black ~]# uname -r ; cat /proc/sys/kernel/printk
3.0.18-rt34.53.el6rt.x86_64
10	4	1	7

I do not remember ever modifying those settings. I'm now going to boot into 3.0.18 without 'debug ignore_loglevel' and will try to see whether changing those log levels makes a difference.
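For reference, the four numbers in /proc/sys/kernel/printk are positional fields. A small sketch that labels them, using the sample value from the 3.0.18 box above rather than reading the live file (on a real system you would simply cat /proc/sys/kernel/printk):

```shell
# The four fields of /proc/sys/kernel/printk, in order:
#   1. console_loglevel          - current threshold for console printing
#   2. default_message_loglevel  - level assigned to printk()s without one
#   3. minimum_console_loglevel  - lowest value console_loglevel may take
#   4. default_console_loglevel  - value used when a default is requested
# Sample value from the 3.0.18 box; live value: cat /proc/sys/kernel/printk
set -- 10 4 1 7
echo "console=$1 default_msg=$2 min_console=$3 default_console=$4"
# -> console=10 default_msg=4 min_console=1 default_console=7
```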
I cannot explain what happened. Now I removed 'debug ignore_loglevel', rebooted, and everything is getting logged properly. I absolutely didn't make any change. I'm not able to reproduce the problem at the moment.

Currently:
[root@black ~]# uname -r ; cat /proc/sys/kernel/printk
3.0.18-rt34.53.el6rt.x86_64
4	4	1	7

I'll try to do a clean install on the box again within a few hours.
Interesting... in my case the issue was the earlyprintk-related parameters that were suppressing the log messages. I have the impression that in your case it could be the console log level getting in the way.
No, OK... now at 18 minutes of uptime the problem reappeared. Going back to 'debug ignore_loglevel' and will leave it running for half an hour.
'debug ignore_loglevel' is present, uptime 10 minutes... logging stopped... I'm out of ideas at the moment :|
Another test case to check whether the logging is working properly:

1. Boot the box into 3.0.18-rt.
2. As soon as it is online, run tcpdump, then stop it immediately with Ctrl+C.
3. tail /var/log/messages will be populated with something like:

Mar 13 15:57:24 black kernel: device eth0 entered promiscuous mode
Mar 13 15:57:25 black kernel: device eth0 left promiscuous mode

4. Wait 20 minutes and do the same... this time the lines are not logged (but they do make their way into dmesg).

PS: running logger from the command line works correctly even after the 20 minutes have passed.
Thanks again for the new data points. I believe I have enough to write a few tests and debug the possible guilty parts. I hope to have news soon.
Let me know if you need me to do any kind of test, as I currently have 3 empty boxes online just for testing purposes.
I am investigating the issue. Most of my initial work was dismissed by this little detail: if you run 'echo 5 > /proc/sysrq-trigger', the log that was not working (or stopped after a while) comes back to life.

This command sets the console log level to 5, meaning that any message with a level lower than 5 will be printed (a lower value means higher priority for the message). That detail was a surprise, because the default console log level is '7', as can be seen in the last value presented in /proc/sys/kernel/printk.

Anyway, in the meantime, this is a simple workaround.
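A small sketch of the console filtering rule being described (per the mainline kernel's documented behaviour, a message reaches the console when its level is numerically lower than console_loglevel; the loop below just models that comparison, it does not touch the kernel):

```shell
# Model of the console filter after 'echo 5 > /proc/sysrq-trigger',
# which sets console_loglevel to 5.
console_loglevel=5

# KERN_ERR=3, KERN_WARNING=4 (the level used by the ipt_LOG rule),
# KERN_NOTICE=5, KERN_INFO=6
for level in 3 4 5 6; do
  if [ "$level" -lt "$console_loglevel" ]; then
    echo "level $level: printed to console"
  else
    echo "level $level: suppressed"
  fi
done
```

Note that with console_loglevel at 5, the test rule's messages (log level 4, KERN_WARNING) pass the filter, which matches the workaround's effect.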
Hi, I tried that "echo" but it didn't work:

[root@gold ~]# uname -r
3.0.18-rt34.53.el6rt.x86_64
[root@gold ~]# cat /proc/sys/kernel/printk
4	4	1	7
[root@gold ~]# echo 5 > /proc/sysrq-trigger
[root@gold ~]# cat /proc/sys/kernel/printk
5	4	1	7

After 20 minutes it is not logging anymore (but messages are appearing correctly in dmesg).
Looks like the next version will be "3.2.14-rt24.22.el6rt.x86_64", so can we stop doing experiments on 3.0.18? :)
I just gave 3.0.25-rt44.57.el6rt.x86_64 a try and can confirm the issue is still present. A lot of messages are not relayed/picked up by syslog... not only iptables messages but other messages as well.
echo 5 > /proc/sysrq-trigger seems to be working on 3.0.25-rt44.57.el6rt.x86_64
Sorry for the delay in answering your questions. Yes, we plan on moving to 3.2.x-rt soon, and that lowers (a bit) the priority of this issue.

Meanwhile, working on another issue, I was reminded by a colleague that it could be a side effect of an earlier implementation of the dmesg restrictions (created to fix a CVE). Could you please verify the contents of /proc/sys/kernel/dmesg_restrict?

# cat /proc/sys/kernel/dmesg_restrict

If it is set to "1", please try setting it to "0" and repeating your test. Thanks again!
Hi, thanks for your reply. Looks like it is already set to 0:

[root@blue ~]# uname -r
3.0.25-rt44.57.el6rt.x86_64
[root@blue ~]# cat /proc/sys/kernel/dmesg_restrict
0

By the way, the workaround you posted above:

echo 5 > /proc/sysrq-trigger

works only if I issue it *after* syslogd has "stopped working". So to get it working "as expected" on 3.0.25-rt44.57.el6rt.x86_64, I boot the box, wait ~20 minutes and then issue "echo 5 > /proc/sysrq-trigger".
Tried 3.0.30-rt50.62.el6rt.x86_64: problem still present.
We will address this BZ in our upcoming MRG 2.2 release, which will contain a 3.2.x-rt kernel.
Thanks, Jiri, for verifying that. The test worked flawlessly for 1h+. Judging from the previously observed behavior, that's a win.
Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

New Contents:
Cause: This issue was originally reported on mrg-2.1 and is not present on mrg-2.2 kernels.
Consequence: The original issue prevented messages from iptables (and probably other kernel subsystems) from being logged into syslog (rsyslog).
Fix: No fix was necessary, as the issue was not observed on mrg-2.2 kernels.
Result: Messages from iptables (namely ipt_LOG) are being recorded into syslog.
Does anyone have an idea what fixed it?
Technical note updated. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

Diffed Contents:
@@ -1,4 +1 @@
-Cause: This issue was originally reported on mrg-2.1 and is not present on mrg-2.2 kernels.
-Consequence: The original issue prevented messages from iptables (and probably other kernel subsystems) from being logged into syslog (rsyslog).
-Fix: No fix was necessary as the issue was not observed on mrg-2.2 kernels
-Result: messages from iptables (namely ipt_LOG) are being recorded into syslog.
+Cause: This issue was originally reported on mrg-2.1 and is not present on mrg-2.2 kernels. Consequence: The original issue prevented messages from iptables (and probably other kernel subsystems) from being logged into syslog (rsyslog). Fix: No fix was necessary as the issue was not observed on mrg-2.2 kernels Result: messages from iptables (namely ipt_LOG) are being recorded into syslog.
kernel-rt-3.0.36-rt57 just got released: is the issue still present in it, so that I should wait for mrg-2.2 to upgrade?
Technical note updated. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

Diffed Contents:
@@ -1 +1,4 @@
-Cause: This issue was originally reported on mrg-2.1 and is not present on mrg-2.2 kernels. Consequence: The original issue prevented messages from iptables (and probably other kernel subsystems) from being logged into syslog (rsyslog). Fix: No fix was necessary as the issue was not observed on mrg-2.2 kernels Result: messages from iptables (namely ipt_LOG) are being recorded into syslog.
+Cause: This issue was originally reported on mrg-2.1 and is not present on mrg-2.2 kernels.
+Consequence: The original issue prevented messages from iptables (and probably other kernel subsystems) from being logged into syslog (rsyslog).
+Fix: No fix was necessary as the issue was not observed on mrg-2.2 kernels
+Result: messages from iptables (namely ipt_LOG) are being recorded into syslog.
Is there any news about this? I'm currently trying 3.0.36-rt57.66.el6rt.x86_64 but am still having this specific issue, and the workaround (echo 5 > /proc/sysrq-trigger) needs to be done very often on certain boxes (every 8 hours); on others it sticks.

Is there any roadmap or planning publicly available?
MRG 2.2 is scheduled to be released September 19th. This release is based on a 3.2 kernel and should fix the syslog issue.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHSA-2012-1282.html