Description of problem:
This is a nested VM; the host is a T530 / F23 / 4.4.6-301.fc23 kernel / qemu-kvm-2.4.1-8.fc23. The guest is an f24 daily (dated 9th April). The crash occurred while installing the L2 guest (again with the f24 daily). I've done it a couple of times before and it's been OK, so it's not that repeatable.

Additional info:
reporter: libreport-2.6.4

NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kswapd0:34]
Modules linked in: uinput xt_CHECKSUM ipt_MASQUERADE nf_nat_masquerade_ipv4 tun ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_broute bridge stp llc ebtable_nat ebtable_filter ebtables ip6table_raw ip6table_mangle ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_security ip6table_filter ip6_tables iptable_raw iptable_mangle iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_security snd_hda_codec_generic kvm_intel kvm ppdev irqbypass snd_hda_intel crct10dif_pclmul crc32_pclmul snd_hda_codec ghash_clmulni_intel snd_hda_core snd_hwdep snd_seq snd_seq_device snd_pcm joydev snd_timer virtio_balloon parport_pc acpi_cpufreq snd parport tpm_tis tpm soundcore i2c_piix4 binfmt_misc nfsd auth_rpcgss nfs_acl lockd grace sunrpc virtio_net virtio_console virtio_blk qxl crc32c_intel drm_kms_helper ttm serio_raw drm ata_generic virtio_pci virtio_ring virtio pata_acpi
CPU: 1 PID: 34 Comm: kswapd0 Not tainted 4.5.0-302.fc24.x86_64 #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.8.2-20150714_191134- 04/01/2014
task: ffff88007cbd3d00 ti: ffff880079b5c000 task.ti: ffff880079b5c000
RIP: 0010:[<ffffffff81127618>]  [<ffffffff81127618>] smp_call_function_single+0xd8/0x130
RSP: 0018:ffff880079b5f800  EFLAGS: 00000202
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000830
RDX: 0000000000000003 RSI: 00000000000000fb RDI: 0000000000000830
RBP: ffff880079b5f848 R08: fffffffffffffffe R09: 0000000000000001
R10: 0000000000000001 R11: 0000000000000001 R12: ffffffffa02d5000
R13: 0000000000000000 R14: 0000000000000000 R15: ffff880079ff1910
FS:  0000000000000000(0000) GS:ffff88007fd00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fcfa88ad000 CR3: 0000000001c0a000 CR4: 00000000001426e0
Stack:
 0000000000000001 ffff88007c3ac000 0000000000000000 ffffffffa02d5000
 0000000000000000 0000000000000003 000000006a10b800 0000000000000001
 ffffffffa02d5000 ffff880079b5f890 ffffffff81127a1b 0000000000000001
Call Trace:
 [<ffffffffa02d5000>] ? 0xffffffffa02d5000
 [<ffffffffa02d5000>] ? 0xffffffffa02d5000
 [<ffffffff81127a1b>] smp_call_function_many+0x20b/0x250
 [<ffffffffa02db5c7>] kvm_make_all_cpus_request+0x107/0x160 [kvm]
 [<ffffffffa02db640>] kvm_flush_remote_tlbs+0x20/0x40 [kvm]
 [<ffffffffa02db7cc>] kvm_mmu_notifier_clear_flush_young+0x5c/0x90 [kvm]
 [<ffffffff8121346e>] __mmu_notifier_clear_flush_young+0x5e/0x90
 [<ffffffff811f67eb>] page_referenced_one+0x12b/0x150
 [<ffffffff811f82a6>] rmap_walk+0x256/0x470
 [<ffffffff811f85c4>] page_referenced+0x104/0x1f0
 [<ffffffff811f66c0>] ? page_check_address_transhuge+0x440/0x440
 [<ffffffff811f7c40>] ? page_get_anon_vma+0x1b0/0x1b0
 [<ffffffff811cb657>] shrink_page_list+0x567/0xbe0
 [<ffffffff811cc420>] shrink_inactive_list+0x200/0x500
 [<ffffffff811cd072>] shrink_zone_memcg+0x5a2/0x780
 [<ffffffff81022496>] ? __switch_to_xtra+0x166/0x1b0
 [<ffffffff811cd32d>] shrink_zone+0xdd/0x300
 [<ffffffff811ce5e0>] kswapd+0x500/0x9e0
 [<ffffffff811ce0e0>] ? mem_cgroup_shrink_node_zone+0x170/0x170
 [<ffffffff810c48c8>] kthread+0xd8/0xf0
 [<ffffffff810c47f0>] ? kthread_worker_fn+0x180/0x180
 [<ffffffff817cd43f>] ret_from_fork+0x3f/0x70
 [<ffffffff810c47f0>] ? kthread_worker_fn+0x180/0x180
Code: 00 00 75 70 48 83 c4 38 5b 41 5c 5d c3 48 8d 75 c8 48 89 d1 89 df 4c 89 e2 e8 15 fe ff ff 8b 55 e0 83 e2 01 74 cf f3 90 8b 55 e0 <83> e2 01 75 f6 eb c3 8b 05 73 5a e1 00 85 c0 75 85 80 3d 4e 11
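For anyone trying to reproduce this nested setup, a quick sanity check on the L1 host is whether kvm_intel's nested support is actually enabled (it was off by default on kernels of this era). This is a hedged sketch, not from the report; the path is the standard kvm_intel module parameter on Intel hosts:

```shell
# Check whether nested VMX is enabled for kvm_intel on the L1 host.
# (Hypothetical triage helper, not part of the original report.)
nested_param=/sys/module/kvm_intel/parameters/nested
if [ -r "$nested_param" ]; then
    # Typically prints Y/1 (enabled) or N/0 (disabled)
    echo "nested: $(cat "$nested_param")"
else
    echo "nested: kvm_intel not loaded or parameter unavailable"
fi
```

Inside the L1 guest, `grep -c vmx /proc/cpuinfo` should then report a nonzero count if the vmx flag was passed through (e.g. `-cpu host`); otherwise the L2 install would not be using hardware virtualization at all.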
Created attachment 1146042 [details] File: dmesg
*********** MASS BUG UPDATE **************

We apologize for the inconvenience. There is a large number of bugs to go through and several of them have gone stale. Because of this, we are doing a mass bug update across all of the Fedora 24 kernel bugs.

Fedora 24 has now been rebased to 4.7.4-200.fc24. Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you have moved on to Fedora 25 and are still experiencing this issue, please change the version to Fedora 25. If you experience different issues, please open a new bug report for those.
*********** MASS BUG UPDATE **************

This bug is being closed with INSUFFICIENT_DATA as there has not been a response in 4 weeks. If you are still experiencing this issue, please reopen and attach the relevant data from the latest kernel you are running, along with any data that might have been requested previously.