Description of problem:
Upgraded to 3.2.23-rt37.56.el6rt.x86_64 but I'm still seeing the issue described here: https://bugzilla.redhat.com/show_bug.cgi?id=786083, which was said to have been fixed in errata RHSA-2012-1282.

Version-Release number of selected component (if applicable):
3.2.23-rt37.56.el6rt.x86_64

How reproducible:
Every time

Steps to Reproduce:
1. Make a clean OS install
2. Install kernel-rt
3. Reboot into kernel-rt
4. Run tcpdump, let it run for a few seconds, then press Ctrl-C
5. Run tail /var/log/messages

Actual results:
Nothing got logged.

Expected results:
device eth0 entered promiscuous mode
device eth0 left promiscuous mode

Additional info:
The same thing happens with iptables logging, as described in bz786083. Sample iptables test line:

iptables -I INPUT 1 -m limit --limit 100/m --limit-burst 100 -j LOG --log-level 4 --log-ip-options --log-prefix "Test iptables logging: "

Issuing "echo 5 > /proc/sysrq-trigger" forces the kernel to flush everything from dmesg and log it, but it stops right there; nothing new gets logged after that. On the previous kernel version that workaround generally kept working for a few hours.

I'll try to wipe the machine, reinstall it from scratch, and skip the 3.0.x upgrades, as this test was done on an already up-to-date machine with minor changes to the minimal install.
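For context on the workaround mentioned above: SysRq 5 sets the kernel's console log level to 5. The current printk levels can be checked read-only; a minimal sketch (nothing here is specific to kernel-rt):

```shell
# Read-only check of the kernel's printk log levels. The four fields are:
# current console loglevel, default message loglevel, minimum console
# loglevel, and the boot-time default console loglevel.
cat /proc/sys/kernel/printk
# "echo 5 > /proc/sysrq-trigger" (as root) sets the first field to 5;
# per the report above, on the affected kernel this also appeared to
# flush the buffered kernel messages once.
```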
Don't know if this might be useful, but I noticed that when pushing the system hard enough that "kernel: sched: RT throttling activated" is logged, the previously buffered log lines also get written to messages.
The "kernel: sched: RT throttling activated" message means RT tasks have been running uninterruptedly, leaving no time for the non-RT tasks to run. As that may leave the system in an inconsistent state, there is a tunable failsafe that will preempt the RT tasks and let the non-RT tasks run for a given period of time.

The bandwidth limiting of RT tasks is controlled by:

/proc/sys/kernel/sched_rt_period_us
/proc/sys/kernel/sched_rt_runtime_us

The default is a 950000us runtime within a period of 1000000us. So, if RT tasks have been running for more than 950ms in the last 1s, they are throttled so that non-RT tasks get a minimal CPU share. Writing "-1" to /proc/sys/kernel/sched_rt_runtime_us disables the bandwidth limiting.

The other interesting datapoint is that if you are seeing this message, either you have a CPU-hog task running on your system (which could be starving other kernel threads, leading to delays in logging and other tasks) or you have a very old version of _rtctl_ that is unaware of the kernel thread naming change in MRG-2.2. I suggest updating rtctl first and running the tests again, to confirm.
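The tunables above can be inspected from the shell; a read-only sketch (the write that disables the limit requires root):

```shell
# Inspect the RT bandwidth limiting tunables described above.
period=$(cat /proc/sys/kernel/sched_rt_period_us)    # default: 1000000 (1s)
runtime=$(cat /proc/sys/kernel/sched_rt_runtime_us)  # default: 950000 (0.95s)
echo "RT tasks may use ${runtime}us of every ${period}us"
# Disabling bandwidth limiting entirely (requires root; use with care,
# a runaway RT task can then starve the whole system):
#   echo -1 > /proc/sys/kernel/sched_rt_runtime_us
```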
Hi, that "error" got written into the log while I was trying to stress test a machine (Xeon e1230, 4 cores + 4 HT) with 8 realtime tasks each using "100% CPU".

The issue I'm reporting here is that logging to syslog does not work (neither from iptables nor from other tools, e.g. when using tcpdump in promiscuous mode); the only time I saw logging working was during that specific "stress test".

rtctl should already be up to date:
rpm -qa rtctl
rtctl-1.9-5.el6rt.noarch
Tried on another server with a new install, and can confirm that issuing:

iptables -I INPUT 1 -m limit --limit 100/m --limit-burst 100 -j LOG --log-level 4 --log-ip-options --log-prefix "Test iptables logging: "

results in nothing being logged to /var/log/messages.
Could you please run a test? When I ran the iptables command line you suggested, I saw the data in dmesg but noticed it was not reproduced on the console. When I ran the command line below, during the time it took the command to complete, there was output from iptables at the console. The command line was:

cat /proc/consoles
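For reference, /proc/consoles lists the consoles registered with the kernel; a short sketch of reading it (the exact entries and flags vary per system):

```shell
# Each line of /proc/consoles shows a console name, its supported
# operations (R = read, W = write, U = can unblank) and status flags
# such as E (enabled) and C (preferred console), followed by the
# device's major:minor numbers.
cat /proc/consoles
```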
Yes, I can confirm that issuing:

cat /proc/consoles

resulted in data being logged while the task was running. That's similar to what happens when pushing the system until the "RT throttling activated" failsafe kicks in.
Created attachment 617625 [details]
fix printk flush of messages

Thanks to Steven Rostedt for pointing me to this patch sent by Frank Rowand. This patch indeed solves the issue. We are just verifying whether we will be able to release a hotfix kernel containing this change or whether it will be part of the next Errata kernel. ---
It will be released soon as a hotfix kernel.
Sounds great. Let me know if I can do some beta testing; I've got a few spare machines I can experiment on. Thank you, best regards.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHSA-2012-1491.html