Bug 1441552
Summary: kernel BUG at kernel/sched/rt.c:2021!
Product: Red Hat Enterprise Linux 7
Component: kernel-rt
kernel-rt sub component: Other
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: urgent
Priority: urgent
Keywords: Regression
Target Milestone: rc
Target Release: ---
Reporter: Pei Zhang <pezhang>
Assignee: Daniel Bristot de Oliveira <daolivei>
QA Contact: Pei Zhang <pezhang>
CC: bhu, chayang, crwood, daolivei, hhuang, juzhang, lcapitulino, lgoncalv, michen, mtosatti, ovasik, pezhang, williams
Doc Type: No Doc Update
Last Closed: 2017-08-01 19:05:29 UTC
Type: Bug
Bug Blocks: 1175461, 1353018
Description
Pei Zhang
2017-04-12 08:33:43 UTC
Update: Currently, QE hasn't hit this issue any more. The versions we used in daily testing (with multiple hosts, multiple times):

3.10.0-653.rt56.571.el7.x86_64
3.10.0-663.rt56.581.el7.x86_64

With the same steps, QE hit an rt host hang issue. I tested the version below about 3 times and hit this issue 1 time:

kernel-rt-3.10.0-664.rt56.583.el7

But this time I didn't open the console before the hang, so I cannot provide the detailed error info. I'll keep monitoring the console in the next tests.


I have a few questions:

- Do you start ovs before activating the profile? If yes, did you assign fifo priority to the PMD threads?
- Is this always reproduced when activating the profile, or is it reproduced at different times?
- Are you able to reproduce this with tuned-2.8.0-2.el7 or later?


(In reply to Luiz Capitulino from comment #6)
> I have a few questions:
>
> - Do you start ovs before activating the profile? If yes, did you assign
> fifo priority to the PMD threads?

No, I didn't start ovs before activating the tuned profiles. I only installed the ovs/dpdk packages for the next testing.

> - Is this always reproduced when activating the profile or is it reproduced
> at different times?

Currently, I hit this host kernel crash only one time. In my latest kvm-rt related testing, I didn't hit this issue anymore, besides hitting the hang issue [Comment 5] one time.

> - Are you able to reproduce this with tuned-2.8.0-2.el7 or later?

In my daily testing, I already tested with tuned-2.8.0-2.el7/tuned-2.8.0-3.el7.noarch and didn't hit this issue again.

This issue seems to be very occasional, but the crash info looks similar to https://bugzilla.redhat.com/show_bug.cgi?id=1438120#c53

Best Regards,
Pei


Indeed! Bug 1438120 comment 53 is precisely the same issue, and it's nice you got a good vmcore! Would you mind posting that info here too?

So, what seems to be happening here is:

1. Something in user-space is changing testpmd pinning and this code path is executed:

   sched_setaffinity()
    set_cpus_allowed_ptr()
     do_set_cpus_allowed()
      set_cpus_allowed_rt()

2. set_cpus_allowed_rt() checks weight <= 1 and goes to decrement rt_nr_migratory. However, rt_nr_migratory is already zero, so the BUG_ON() triggers.

Now, some important points:

- rt_nr_migratory keeps track of how many RT tasks in the runqueue can migrate to other CPUs. If rt_nr_migratory=0, it means no RT task can migrate
- nr_cpus_allowed=5 for the testpmd thread
- observe in the trace that a few pagefaults are happening in parallel to set_cpus_allowed_rt(), and those code paths are calling migrate_enable()

This points to a race with migrate_enable()/disable(), but I can't see it causing rt_nr_migratory being underflowed.
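[Editorial note: for readers following the analysis, the decrement path being described looks roughly like the sketch below. This is an illustrative approximation of set_cpus_allowed_rt() from kernel/sched/rt.c in kernels of this era, not verbatim from the RHEL-RT tree; the RT tree additionally routes the affinity check through a migrate-disable-aware helper, which is discussed later in this bug.]

static void set_cpus_allowed_rt(struct task_struct *p,
                                const struct cpumask *new_mask)
{
        int weight = cpumask_weight(new_mask);
        struct rq *rq;

        BUG_ON(!rt_task(p));

        /*
         * Only update rt_nr_migratory when the task switches between
         * "can migrate" (weight > 1) and "cannot migrate" (weight <= 1).
         */
        if ((p->nr_cpus_allowed > 1) == (weight > 1))
                return;

        rq = task_rq(p);

        if (weight <= 1) {
                /* Task becomes non-migratable: stop counting it. */
                if (!task_current(rq, p))
                        dequeue_pushable_task(rq, p);
                /*
                 * This is the check reported at rt.c:2021/2022: it fires
                 * when the counter is already zero at decrement time.
                 */
                BUG_ON(!rq->rt.rt_nr_migratory);
                rq->rt.rt_nr_migratory--;
        } else {
                /* Task becomes migratable again: count it. */
                if (!task_current(rq, p))
                        enqueue_pushable_task(rq, p);
                rq->rt.rt_nr_migratory++;
        }

        update_rt_migration(&rq->rt);
}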
Update: I hit the kernel panic issue again, at the first reboot after installing the rt related packages.

1. Install a rhel7.4 host
2. Install kernel-rt and virt related packages, including:
   3.10.0-668.rt56.587.el7.x86_64
   tuned-2.8.0-3.el7.noarch
   rt-tests-1.0-11.el7.x86_64
   qemu-kvm-rhev-2.9.0-5.el7.x86_64
   libvirt-3.2.0-5.el7.x86_64
3. Reboot the host; the host then hangs and kernel panics. There was no crash dump file, so below is the console output.

[ 61.983318] ------------[ cut here ]------------
[ 61.983319] kernel BUG at kernel/sched/rt.c:2022!
[ 61.983322] invalid opcode: 0000 [#1] PREEMPT SMP
[ 61.983343] Modules linked in: fuse btrfs raid6_pq xor vfat msdos fat ext4 mbcache jbd2 xt_CHECKSUM iptable_mangled
[ 61.983353] CPU: 0 PID: 11844 Comm: taskset Not tainted 3.10.0-668.rt56.587.el7.x86_64 #1
[ 61.983354] Hardware name: Dell Inc. PowerEdge R430/0CN7X8, BIOS 2.0.1 04/11/2016
[ 61.983355] task: ffffa11817a650f0 ti: ffffa117eff80000 task.ti: ffffa117eff80000
[ 61.983362] RIP: 0010:[<ffffffff950ce50d>]  [<ffffffff950ce50d>] set_cpus_allowed_rt+0x10d/0x110
[ 61.983363] RSP: 0018:ffffa117eff836e8  EFLAGS: 00010046
[ 61.983364] RAX: 0000000000000000 RBX: ffffa11817a650f0 RCX: 0000000000000000
[ 61.983364] RDX: 0000000000000000 RSI: 00000000000000c0 RDI: 0000000000000000
[ 61.983365] RBP: ffffa117eff83700 R08: 0000000000000001 R09: 0000000000000003
[ 61.983365] R10: ffffa117eff83a08 R11: 0000000000000246 R12: ffffa1181e619080
[ 61.983366] R13: ffffa117eff83a08 R14: 00000000ffffffea R15: ffffa11817a650f0
[ 61.983367] FS:  00007f4998c3d740(0000) GS:ffffa1181e600000(0000) knlGS:0000000000000000
[ 61.983368] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 61.983368] CR2: 00007f499872a030 CR3: 000000084d708000 CR4: 00000000001407f0
[ 61.983369] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 61.983370] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 61.983370] Stack:
[ 61.983371]  ffffa11817a650f0 ffffa117eff83a08 ffffa117eff83a08 ffffa117eff83720
[ 61.983372]  ffffffff950bed00 ffffa1181e619080 ffffa11817a659d0 ffffa117eff83778
[ 61.983373]  ffffffff950bff4b 0000000000000282 ffffa117eff83cc0 ffffa117eff83750
[ 61.983373] Call Trace:
[ 61.983378]  [<ffffffff950bed00>] do_set_cpus_allowed+0x30/0x70
[ 61.983380]  [<ffffffff950bff4b>] set_cpus_allowed_ptr+0x13b/0x1f0
[ 61.983382]  [<ffffffff950c0211>] sched_setaffinity+0x211/0x250
[ 61.983384]  [<ffffffff950ba2ff>] ? migrate_enable+0xdf/0x1a0
[ 61.983389]  [<ffffffff9518eda7>] ? get_page_from_freelist+0x647/0xb00
[ 61.983391]  [<ffffffff950ba2ff>] ? migrate_enable+0xdf/0x1a0
[ 61.983394]  [<ffffffff951aedfe>] ? pagefault_enable+0x1e/0x20
[ 61.983396]  [<ffffffff9518ecb4>] ? get_page_from_freelist+0x554/0xb00
[ 61.983398]  [<ffffffff9518f3f4>] ? __alloc_pages_nodemask+0x194/0xab0
[ 61.983400]  [<ffffffff95194022>] ? __lru_cache_add+0xb2/0xf0
[ 61.983402]  [<ffffffff9518eda7>] ? get_page_from_freelist+0x647/0xb00
[ 61.983403]  [<ffffffff950ba2ff>] ? migrate_enable+0xdf/0x1a0
[ 61.983405]  [<ffffffff951aedfe>] ? pagefault_enable+0x1e/0x20
[ 61.983407]  [<ffffffff9518ecb4>] ? get_page_from_freelist+0x554/0xb00
[ 61.983410]  [<ffffffff950a5a5e>] ? find_task_by_pid_ns+0xe/0x30
[ 61.983412]  [<ffffffff950c02aa>] SYSC_sched_setaffinity+0x5a/0xc0
[ 61.983415]  [<ffffffff950c031e>] SyS_sched_setaffinity+0xe/0x10
[ 61.983420]  [<ffffffff956b9409>] system_call_fastpath+0x16/0x1b
[ 61.983430] Code: 00 00 49 8b 84 24 e8 07 00 00 48 85 c0 74 1c 48 83 e8 01 49 89 84 24 e8 07 00 00 eb 92 41 c7 84
[ 61.983432] RIP  [<ffffffff950ce50d>] set_cpus_allowed_rt+0x10d/0x110
[ 61.983433]  RSP <ffffa117eff836e8>
[ 62.403440] ---[ end trace 0000000000000002 ]---
[ 62.451894] Kernel panic - not syncing: Fatal exception
[ 63.481661] Shutting down cpus with NMI
[ 63.482072] Kernel Offset: 0x14000000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfff)


(In reply to Luiz Capitulino from comment #9)
> Indeed! bug 1438120 comment 53 is precisely the same issue and it's nice you
> got a good vmcore! Would you mind to post that info here too?

OK. When doing ping-pong live migration in the rt environment, setting the ovs pmd threads to fifo:1 in the host, and booting dpdk's testpmd with fifo:1 in the guest, after several migrations (around 5 mostly) the guest will kernel panic, which is the same as this bug.

Key steps:
# chrt -fp 1 $ovs_pmd_threads
# chrt -f 1 /usr/bin/testpmd ...
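[Editorial note: for context, the panicking task in the trace above is taskset itself, crashing inside sched_setaffinity(). A minimal, hypothetical user-space equivalent of the chrt + taskset pinning used in these steps (not part of the original report) would be:]

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
        struct sched_param sp = { .sched_priority = 1 };
        cpu_set_t mask;
        pid_t pid;
        int cpu;

        if (argc < 3) {
                fprintf(stderr, "usage: %s <pid> <cpu>\n", argv[0]);
                return 1;
        }
        pid = atoi(argv[1]);
        cpu = atoi(argv[2]);

        /* chrt -fp 1 <pid>: make the task SCHED_FIFO, priority 1 */
        if (sched_setscheduler(pid, SCHED_FIFO, &sp) < 0)
                perror("sched_setscheduler");

        /*
         * taskset -pc <cpu> <pid>: shrink the affinity mask to one CPU.
         * For an RT task this is the syscall that reaches
         * set_cpus_allowed_rt() in the backtrace above.
         */
        CPU_ZERO(&mask);
        CPU_SET(cpu, &mask);
        if (sched_setaffinity(pid, sizeof(mask), &mask) < 0)
                perror("sched_setaffinity");

        return 0;
}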
In the latest testing, I enabled tracepoints as Marcelo suggested in bug 1438120 comment 58. The link below includes the vmcore and the related trace.txt:

http://fileshare.englab.nay.redhat.com/pub/section2/coredump/var/crash/pezhang/bug1441552/may18/

Versions for this round of testing:
3.10.0-668.rt56.587.el7.x86_64
tuned-2.8.0-3.el7.noarch
dpdk-16.11-4.el7fdp.x86_64
openvswitch-2.6.1-18.git20161206.el7fdp.x86_64
libvirt-3.2.0-5.el7.x86_64

Thanks,
Pei


I didn't have much time to work on this today. I'll be back to this tomorrow. My plan is to try to understand better what's wrong and maybe try to reproduce it using the installation test.

Can you send me your /proc/cmdline and lscpu output? I guess the number of CPUs you're isolating may be making the issue more likely to trig.


(In reply to Luiz Capitulino from comment #16)
> I didn't have much time to work on this today. I'll be back to this
> tomorrow. My plan is to try to understand better what's wrong and maybe try
> to reproduce it using the installation test.
> Can you send me your /proc/cmdline and lscpu output?

# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-669.rt56.588.el7.x86_64 root=/dev/mapper/rhel_dell--per430--09-root ro crashkernel=auto rd.lvm.lv=rhel_dell-per430-09/root rd.lvm.lv=rhel_dell-per430-09/swap console=ttyS0,115200n81 default_hugepagesz=1G iommu=pt intel_iommu=on isolcpus=2,4,6,8,10,12,14,16,18,19,17,16,15 nohz=on nohz_full=2,4,6,8,10,12,14,16,18,19,17,16,15 rcu_nocbs=2,4,6,8,10,12,14,16,18,19,17,16,15 intel_pstate=disable nosoftlockup

# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                20
On-line CPU(s) list:   0-19
Thread(s) per core:    1
Core(s) per socket:    10
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
Stepping:              2
CPU MHz:               2297.299
BogoMIPS:              4594.59
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              25600K
NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18
NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm epb tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm arat pln pts

> I guess the number of CPUs you're isolating may be making the issue more
> likely to trig.

OK.

Thanks,
Pei


Reproduce update:

Same steps as in the Description.

Versions: 3.10.0-638.rt56.553.el7.x86_64

Result: Tried 3 times and rebooted 3 times, didn't hit any issue.

I'll try kernel-rt-3.10.0-647.rt56.564.el7 next.
I was able to create a reproducer for this issue!

By analyzing the vmcore, I was able to find out the sequence of events that leads to the panic. And you were right, it really happens at profile activation time.

Before getting to the details, here's the reproducer:

1. Confirm that you don't have the realtime-virtual-host profile applied. You can check /proc/cmdline for this. If you do have the profile activated, change to another profile and reboot:

# tuned-adm profile desktop
# reboot

2. Once you don't have realtime-virtual-host parameters in /proc/cmdline, run the following script:

#!/bin/bash

sync
cd /usr/lib/tuned/realtime-virtual-host

while true; do
    tuned-adm profile desktop
    rm -f lapic_timer_adv_ns lapic_timer_adv_ns.cpumodel
    echo activating host profile
    tuned-adm profile realtime-virtual-host
    echo done
    sleep 1s
done

This script trigs the issue for me most of the time in less than 10 minutes. However, once or twice I had to restart the machine and run the script again.

Now, the line from the tuned profile that's triggering the issue is this one:

chrt -f 1 taskset -c $1 $QEMU -enable-kvm -device pc-testdev \
    -device isa-debug-exit,iobase=0xf4,iosize=0x4 \
    -display none -serial stdio -device pci-testdev \
    -kernel /usr/share/qemu-kvm/tscdeadline_latency.flat \
    -cpu host | grep latency | cut -f 2 -d ":" > $dir/out

This is from the run-tscdeadline-latency.sh script executed by the realtime-virtual-host profile. That explains why you had to reprovision the machine to reproduce this, and also why I wasn't able to reproduce it before: run-tscdeadline-latency.sh implements caching and should execute only once on a system (the reproducer above removes the caching files, forcing the script to execute on every run).

I still don't know what causes this. I'm building a debug kernel to gather more information.


(In reply to Luiz Capitulino from comment #19)
> I was able to create a reproducer for this issue!
>
> By analyzing the vmcore, I was able to find out the sequence of events that
> leads to the panic. And you were right, it really happens at profile
> activation time.
>
> Before getting to the details, here's the reproducer:
>
> 1. Confirm that you don't have the realtime-virtual-host profile applied.
> You can check /proc/cmdline for this. If you do have the profile activated,
> change to another profile and reboot:
>
> # tuned-adm profile desktop
> # reboot
>
> 2. Once you don't have realtime-virtual-host parameters in /proc/cmdline,
> run the following script:
>
> #!/bin/bash
>
> sync
> cd /usr/lib/tuned/realtime-virtual-host
>
> while true; do
>     tuned-adm profile desktop
>     rm -f lapic_timer_adv_ns lapic_timer_adv_ns.cpumodel
>     echo activating host profile
>     tuned-adm profile realtime-virtual-host
>     echo done
>     sleep 1s
> done
>
> This script trigs the issue for me most of the time in less than 10 minutes.
> However, once or twice I had to restart the machine and run the script again.

Thanks Luiz. I can also reproduce this issue with this script. Now it's much easier to reproduce, and I'll continue testing lower versions to check if it's a regression.

Thanks,
Pei


Hi Luiz,

It seems this is a regression bug. Tested with the versions below using Comment 19; kernel-rt-3.10.0-604.rt56.517.el7 is the first kernel to hit this issue.

kernel-rt-3.10.0-647.rt56.564.el7  fail
kernel-rt-3.10.0-638.rt56.553.el7  fail
kernel-rt-3.10.0-620.rt56.535.el7  fail
kernel-rt-3.10.0-610.rt56.524.el7  fail
kernel-rt-3.10.0-605.rt56.519.el7  fail
kernel-rt-3.10.0-604.rt56.517.el7  fail
kernel-rt-3.10.0-603.rt56.515.el7  work (running about 15 minutes, works well)
kernel-rt-3.10.0-602.rt56.514.el7  work (running about 15 minutes, works well)
kernel-rt-3.10.0-600.rt56.512.el7  work (running 1h, works well)
kernel-rt-3.10.0-514.rt56.420.el7  work (running 1h, works well)

Thanks,
Pei


Thanks Pei. I thought I had a hypothesis for this issue, but if I was right it shouldn't be a regression. I'll take a look at -604.


Actually, my hypothesis matches exactly with your findings!
This upstream commit:

commit 55c041cf3750894df6f304ad0489c777e27402e7
Author: Sebastian Andrzej Siewior <bigeasy>
Date:   Thu Jan 21 15:58:56 2016 +0100

    sched: fixup migrate disable (all tasks were bound to CPU0)

which was backported via bug 1423432, replaces all p->nr_cpus_allowed usages with a function called tsk_nr_cpus_allowed(). This function returns 1 when migration is disabled (meaning, "one CPU allowed").

Now, take a look at what is done in inc_rt_migration(): rt_nr_migratory only gets updated when nr_cpus_allowed > 1. The following case should be possible:

1. rt_nr_migratory=0
2. Migration is disabled in some code path (eg. pagefault)
3. inc_rt_migration() is called. The rt_nr_migratory increment is skipped since tsk_nr_cpus_allowed() will return 1
4. Migration is enabled
5. set_cpus_allowed_rt() is called from the sched_setaffinity() code path
6. set_cpus_allowed_rt() gets weight=1 and tsk_nr_cpus_allowed() returns > 1, since migration is now enabled
7. BUG_ON() triggers

Note that this problem should also be present upstream, since the code is the same. However, upstream doesn't have the BUG_ON() anymore, so the problem is less severe (probably rt_nr_migratory just gets unbalanced).
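[Editorial note: as an illustration of the imbalance described above, the accounting roughly looks like the sketch below. The shapes are paraphrased from this thread and from memory of the -rt patch set; they are not verbatim from the RHEL-RT or upstream sources, and the helper name __migrate_disabled() is assumed here.]

static inline int tsk_nr_cpus_allowed(struct task_struct *p)
{
        /*
         * While a task sits inside a migrate_disable() section it is
         * reported as being allowed on a single CPU, regardless of its
         * real affinity mask.
         */
        return __migrate_disabled(p) ? 1 : p->nr_cpus_allowed;
}

static void inc_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
{
        struct task_struct *p = rt_task_of(rt_se);

        rt_rq = &rq_of_rt_rq(rt_rq)->rt;
        rt_rq->rt_nr_total++;

        /*
         * Enqueued while migration is disabled: tsk_nr_cpus_allowed()
         * returns 1 and this increment is skipped, even though the
         * task's real cpumask allows several CPUs.
         */
        if (tsk_nr_cpus_allowed(p) > 1)
                rt_rq->rt_nr_migratory++;

        update_rt_migration(rt_rq);
}

/*
 * Later, with migration enabled again, sched_setaffinity() shrinks the
 * mask to a single CPU: set_cpus_allowed_rt() sees weight <= 1 while
 * tsk_nr_cpus_allowed() > 1, tries to decrement rt_nr_migratory, and the
 * BUG_ON(!rq->rt.rt_nr_migratory) fires because the matching increment
 * never happened.
 */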
I guess the solution is just to change set_cpus_allowed_rt() to check p->nr_cpus_allowed directly instead of using tsk_nr_cpus_allowed(). But this is a discussion to have upstream. For downstream, I think the best action right now is to revert the series for bug 1423432. I'll find a way to show that the problem exists upstream and post my patch there. Then, once it's accepted or another solution is recommended, we can backport the series for bug 1423432 along with the fix for this issue.


Daniel is taking up this work, so I'm re-assigning this to him.


The patch below is the solution I mentioned in comment 23, although I haven't tested it yet. If this patch solves the issue, then the issue should also exist upstream, although it will be harder to show as it doesn't have the BUG_ON().

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 35d6eac..1a38d3f 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -300,7 +300,7 @@ static void inc_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
 	rt_rq = &rq_of_rt_rq(rt_rq)->rt;
 
 	rt_rq->rt_nr_total++;
-	if (tsk_nr_cpus_allowed(p) > 1)
+	if (p->nr_cpus_allowed > 1)
 		rt_rq->rt_nr_migratory++;
 
 	update_rt_migration(rt_rq);
@@ -317,7 +317,7 @@ static void dec_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
 	rt_rq = &rq_of_rt_rq(rt_rq)->rt;
 
 	rt_rq->rt_nr_total--;
-	if (tsk_nr_cpus_allowed(p) > 1)
+	if (p->nr_cpus_allowed > 1)
 		rt_rq->rt_nr_migratory--;
 
 	update_rt_migration(rt_rq);


Hi Pei,

Could you please try this kernel build?

https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=13311268

-- Daniel


(In reply to Daniel Bristot de Oliveira from comment #29)
> Hi Pei,
>
> Could you please try this kernel build?
>
> https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=13311268
>
> -- Daniel

Hi Daniel,

This build still hits this panic issue. Please see the related vmcore and vmcore-dmesg.txt at the link below:

http://fileshare.englab.nay.redhat.com/pub/section2/coredump/var/crash/pezhang/bug1441552/jun1/

Best Regards,
Pei


Daniel, just something I forgot to mention: I tried the patch from comment 28 a few days ago and it fixed the issue for me.

I think the problem with upstream commit 55c041c is that not all places can have p->nr_cpus_allowed replaced with tsk_nr_cpus_allowed(). inc_rt_migration() and dec_rt_migration() are among those places, and there could be more.


Hi Luiz,

By only using p->nr_cpus_allowed, your patch completely ignores the migrate_disable/enable actions. Although this was used in the past, at that time migrate_disable also used to change the cpumask of tasks, but that is not true anymore. So I do not believe that is the best solution for this bug.


Hi Pei,

Could you please collect data using the following kernel:

https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=13351615

?

It does not aim to fix the problem but to collect data about it. I added some trace_printks and a BUG_ON() in dec_rt_migration(), so we can catch the problem earlier.

Before starting the test, please run the following commands:

# cat /sys/kernel/debug/tracing/available_events | grep setaff > /sys/kernel/debug/tracing/set_event
# sysctl -w kernel.ftrace_dump_on_oops=1

This will generate a VMCORE; please ensure that I can download the vmcore... I couldn't download the last one (403 Forbidden).

Finally, I tried to reproduce the problem locally, but the steps to reproduce did not drive me to the crash. Is there anything else I should do?

Thanks in advance.

-- Daniel


Daniel,

I'll explain my reasoning for the fix below. However, as this issue is a high-severity regression and as we're late in the snapshot phase, I think the best action for this BZ right now is to revert the series for bug 1423432 and redo the backport (along with the fix for this issue) for 7.5 or 7.4.z. Otherwise, we risk shipping 7.4 with this issue.

Regarding the fix, my thinking is: the semantics of tsk_nr_cpus_allowed(p) is "if migration is enabled, the number of CPUs this task can migrate to is nr_cpus_allowed. If migration is disabled, this task can't migrate to another CPU." The problem here is that this semantics doesn't apply to inc_rt_migration() and dec_rt_migration(), since I _guess_ those functions do global accounting of migratable tasks and are not affected by individual tasks being unable to migrate for a short period of time. But of course, I could be wrong.


Luiz, I am considering reverting the other BZ, as we talked about yesterday. I am just trying not to just _guess_ that this is the best thing to do.

-- Daniel


(In reply to Daniel Bristot de Oliveira from comment #33)
> Hi Pei,
>
> Could you please collect data using the following kernel:
>
> https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=13351615
>
> ?
>
> It does not aim to fix the problem but to collect data about the problem.
>
> I added some trace_printks and a BUG_ON() in the dec_rt_migration(), so we
> can get the problem earlier.
>
> Before starting the test, please, run the following commands:
>
> # cat /sys/kernel/debug/tracing/available_events | grep setaff >
> /sys/kernel/debug/tracing/set_event
> # sysctl -w kernel.ftrace_dump_on_oops=1
>
> This will generate a VMCORE, please ensure that I can download the vmcore...
> I couldn't download the last one (403 Forbidden).

Hi Daniel,

The vmcore from this test build is at the link below:

http://fileshare.englab.nay.redhat.com/pub/section2/coredump/var/crash/pezhang/bug1441552/Jun7/

Sorry, I forgot to set the permission when uploading the vmcore in Comment 30. I have updated the permission now, please check.

> Finally, I tried to reproduce the problem locally, but the steps to
> reproduce did not drive me to the crash. Is there anything else I should do?

In Comment 19, Luiz provided a reproducer, could you please try it?
Best Regards,
Pei


Hi Pei,

Thanks for uploading the vmcore. Now I could precisely define the root cause and I have a possible fix. Although it worked fine while reproducing the case as described in comment #19, could you please verify whether the following build works for you as well?

https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=13379519

I am polishing the patch & description now.

Thanks in advance,
-- Daniel


Daniel,

I just tried the reproducer on this kernel and I got a different panic. The trace is below; I have the vmcore, let me know if you need it.

[ 181.978335] ------------[ cut here ]------------
[ 181.978336] kernel BUG at kernel/sched/core.c:3551!
[ 181.978338] invalid opcode: 0000 [#1] PREEMPT SMP
[ 181.978356] Modules linked in: xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ipt_REJECT nf_reject_ipv4 tun bridge stp llc ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter sb_edac edac_core coretemp intel_rapl ipmi_devintf iTCO_wdt iTCO_vendor_support iosf_mbi kvm_intel kvm dcdbas irqbypass crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd ipmi_ssif pcspkr sg ipmi_si ipmi_msghandler mei_me shpchp mei lpc_ich acpi_power_meter wmi nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables xfs libcrc32c sd_mod crc_t10dif crct10dif_generic crct10dif_pclmul crct10dif_common crc32c_intel ixgbe mdio mgag200 i2c_algo_bit drm_kms_helper dca tg3 syscopyarea
[ 181.978361]  sysfillrect sysimgblt ptp fb_sys_fops ttm drm i2c_core ahci libahci libata megaraid_sas pps_core dm_mirror dm_region_hash dm_log dm_mod
[ 181.978362] CPU: 4 PID: 1327 Comm: irq/58-p1p1-TxR Not tainted 3.10.0-675.rt56.597.test.el7.x86_64 #1
[ 181.978363] Hardware name: Dell Inc. PowerEdge R430/03XKDV, BIOS 1.2.6 06/08/2015
[ 181.978364] task: ffff880075549030 ti: ffff880075550000 task.ti: ffff880075550000
[ 181.978369] RIP: 0010:[<ffffffff810ba3b4>]  [<ffffffff810ba3b4>] migrate_disable+0x214/0x220
[ 181.978369] RSP: 0018:ffff880075553e18  EFLAGS: 00010246
[ 181.978370] RAX: 0000000000019080 RBX: ffff880075549030 RCX: 0000000000000020
[ 181.978370] RDX: ffff88046da80000 RSI: 0000000000000120 RDI: 0000000000000000
[ 181.978371] RBP: ffff880075553e20 R08: 0000000000000004 R09: 0000000000000004
[ 181.978371] R10: ffff8800755491b8 R11: ffff8802ebf8e758 R12: ffff88046973e000
[ 181.978372] R13: ffff880075549030 R14: ffff88046973e000 R15: ffffffff8112fc70
[ 181.978373] FS:  0000000000000000(0000) GS:ffff88046da80000(0000) knlGS:0000000000000000
[ 181.978373] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 181.978374] CR2: 00007f3fc7bf3000 CR3: 0000000461eaf000 CR4: 00000000001427e0
[ 181.978374] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 181.978375] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 181.978375] Stack:
[ 181.978376]  ffff88046a07da80 ffff880075553e30 ffffffff81086881 ffff880075553e58
[ 181.978377]  ffffffff8112fc89 ffff88046a07da80 ffff880075549030 ffff880075549030
[ 181.978378]  ffff880075553ec0 ffffffff811300ef ffff880075549030 0000000000000000
[ 181.978378] Call Trace:
[ 181.978383]  [<ffffffff81086881>] __local_bh_disable+0x31/0x40
[ 181.978386]  [<ffffffff8112fc89>] irq_forced_thread_fn+0x19/0x70
[ 181.978388]  [<ffffffff811300ef>] irq_thread+0x12f/0x180
[ 181.978389]  [<ffffffff8112fd20>] ? wake_threads_waitq+0x40/0x40
[ 181.978391]  [<ffffffff8112ffc0>] ? irq_thread_check_affinity+0x30/0x30
[ 181.978394]  [<ffffffff810a9bcf>] kthread+0xcf/0xe0
[ 181.978396]  [<ffffffff810a9b00>] ? kthread_worker_fn+0x170/0x170
[ 181.978399]  [<ffffffff816b9198>] ret_from_fork+0x58/0x90
[ 181.978401]  [<ffffffff810a9b00>] ? kthread_worker_fn+0x170/0x170
[ 181.978410] Code: 8b 53 10 8b 52 18 48 8b 14 d5 e0 42 b4 81 48 8b 8c 10 e8 07 00 00 48 8d 93 a0 06 00 00 31 c0 e8 23 d2 09 00 e9 c5 fe ff ff 0f 0b <0f> 0b 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 55 65 48 8b
[ 181.978412] RIP  [<ffffffff810ba3b4>] migrate_disable+0x214/0x220
[ 181.978412]  RSP <ffff880075553e18>


Hi Daniel,

I hit the same problem as Luiz.

Also, there is another issue: when rebooting the host manually, kdump also generates a crash vmcore during the reboot process, so a vmcore is generated on every reboot.

Please refer to the link below to get the vmcore:

http://fileshare.englab.nay.redhat.com/pub/section2/coredump/var/crash/pezhang/bug1441552/Jun8/

Best Regards,
Pei


Hi Pei,

Here is a kernel without the changes from BZ1423432:

http://file.rdu.redhat.com/~daolivei/BZ1441552/

Could you test it?

We decided to revert because the real fix will require some upstream discussion, so we will leave it for the next cycle.

-- Daniel


(In reply to Daniel Bristot de Oliveira from comment #40)
> Hi Pei,
>
> Here is a kernel without the changes from the BZ1423432.
>
> http://file.rdu.redhat.com/~daolivei/BZ1441552/
>
> Could you test it?

Daniel, this build works well.

- Running the reproducer in Comment 19 for 1 hour: works well.
- Re-installing the host and preparing the rt environment: also works well.

> We decided to revert because the real fix will require some upstream
> discussion, and so we will leave for the next cycle.
> -- Daniel

Thanks,
Pei


Hi,

I posted the explanation of this bug here:

https://bugzilla.redhat.com/show_bug.cgi?id=1423432#c12

As the problem exists upstream and is related to migrate_disable/enable(), which is the subject of the other BZ, I will continue working on that BZ.

Sounds like a plan?

-- Daniel


The suggested patches (commit reverts) were added to kernel-rt-3.10.0-680.rt56.604. Kernel build ongoing.


(In reply to Daniel Bristot de Oliveira from comment #42)
> Hi,
>
> I posted the explanation of this bug here:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1423432#c12
>
> As the problem exists upstream and is related to the
> migrate_disable/enabled(), which is the subject of the other BZ, I will
> continue working on that BZ.
>
> Sounds like a plan?

It absolutely does! Btw, I think it's a good idea to re-open bug 1423432 now that the revert is done.


Update:

Versions:
3.10.0-680.rt56.604.el7.x86_64
tuned-2.8.0-5.el7.noarch

Installed the rt host 5 times; it works well, with no errors.


(In reply to Pei Zhang from comment #47)
> Update:
>
> Versions:
> 3.10.0-680.rt56.604.el7.x86_64
> tuned-2.8.0-5.el7.noarch
>
> Install rt host 5 times, works well, no any error.

So can you close this bz?


Coming a bit late to this, I've also run my test case for a few hours and it worked fine.


Based on Comment 47 and Comment 49, this bug has been fixed. Thanks.

Moving the status of this bug to 'VERIFIED'.


Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2017:2077