Description of problem:
kerneloops when creating a RHEL 5.3 VM on a Rawhide x86_64 box

May 5 20:32:37 localhost kernel: device vnet0 entered promiscuous mode
May 5 20:32:37 localhost kernel: br0: port 2(vnet0) entering learning state
May 5 20:32:37 localhost kernel: kvm: 23668: cpu0 unhandled wrmsr: 0xc0010117 data 0
May 5 20:32:37 localhost kernel: kvm: 23668: cpu0 unhandled rdmsr: 0xc0010117
May 5 20:32:37 localhost kernel: kvm: 23668: cpu0 unhandled rdmsr: 0xc0010117
May 5 20:32:37 localhost kernel: ------------[ cut here ]------------
May 5 20:32:37 localhost kernel: kernel BUG at arch/x86/kvm/../../../virt/kvm/kvm_main.c:2117!
May 5 20:32:37 localhost kernel: invalid opcode: 0000 [#5] SMP
May 5 20:32:37 localhost kernel: last sysfs file: /sys/devices/virtual/net/vnet0/address
May 5 20:32:37 localhost kernel: CPU 0
May 5 20:32:37 localhost kernel: Modules linked in: tun vboxnetflt vboxdrv fuse nfsd lockd nfs_acl auth_rpcgss exportfs bnep sco l2cap bluetooth sunrpc bridge stp llc ip6t_REJECT nf_conntrack_ipv6 ip6table_filter ip6_tables ipv6 cpufreq_ondemand powernow_k8 freq_table dm_multipath raid1 kvm_amd kvm uinput nvidia(P) snd_hda_codec_realtek snd_ca0106 snd_rawmidi snd_hda_intel snd_seq_device snd_hda_codec snd_ac97_codec forcedeth ppdev snd_hwdep ac97_bus snd_pcm snd_timer usb_storage firewire_ohci snd firewire_core soundcore k8temp snd_page_alloc crc_itu_t hwmon serio_raw pcspkr pata_amd parport_pc parport wmi joydev ata_generic pata_acpi nouveau drm i2c_algo_bit i2c_core [last unloaded: nf_nat]
May 5 20:32:37 localhost kernel: Pid: 23670, comm: qemu-kvm Tainted: P D 2.6.29.1-111.fc11.x86_64 #1 M750SLI-DS4
May 5 20:32:37 localhost kernel: RIP: 0010:[<ffffffffa09fc1af>]  [<ffffffffa09fc1af>] kvm_handle_fault_on_reboot+0x14/0x18 [kvm]
May 5 20:32:37 localhost kernel: RSP: 0018:ffff8801628cfcd8  EFLAGS: 00010046
May 5 20:32:37 localhost kernel: RAX: ffff88011f8c2000 RBX: ffff88017e480000 RCX: 00000000817bb000
May 5 20:32:37 localhost kernel: RDX: ffff88017e480000 RSI: ffff8801628cfd04 RDI: 00000000817bb000
May 5 20:32:37 localhost kernel: RBP: ffff8801628cfcd8 R08: 0000000000000000 R09: ffff8801628cfd38
May 5 20:32:37 localhost kernel: R10: 00000000000008ca R11: 0000000000002c71 R12: ffff8801628da000
May 5 20:32:37 localhost kernel: R13: ffff88017e480058 R14: ffff88017e480a40 R15: ffff88017e480c40
May 5 20:32:37 localhost kernel: FS:  00007ff04ce2b910(0000) GS:ffffffff817bb000(0000) knlGS:0000000000000000
May 5 20:32:37 localhost kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
May 5 20:32:37 localhost kernel: CR2: 000000000049ecf0 CR3: 00000001606cd000 CR4: 00000000000006e0
May 5 20:32:37 localhost kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
May 5 20:32:37 localhost kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
May 5 20:32:37 localhost kernel: Process qemu-kvm (pid: 23670, threadinfo ffff8801628ce000, task ffff880224489720)
May 5 20:32:37 localhost kernel: Stack:
May 5 20:32:37 localhost kernel:  ffff8801628cfd38 ffffffffa0a25bc2 0000000000008024 ffff88017e480000
May 5 20:32:37 localhost kernel:  ffff8801628cfd28 00000000811b7a05 ffff88017e480000 ffff88017e480000
May 5 20:32:37 localhost kernel:  ffff8801628da000 ffff88017e480058 ffff88017e480a40 ffff88017e480c40
May 5 20:32:37 localhost kernel: Call Trace:
May 5 20:32:37 localhost kernel:  [<ffffffffa0a25bc2>] svm_vcpu_run+0x205/0x43d [kvm_amd]
May 5 20:32:37 localhost kernel:  [<ffffffffa0a057e0>] kvm_arch_vcpu_ioctl_run+0x3f3/0x6c6 [kvm]
May 5 20:32:37 localhost kernel:  [<ffffffffa09fc4ba>] kvm_vcpu_ioctl+0xfb/0x470 [kvm]
May 5 20:32:37 localhost kernel:  [<ffffffff810e229b>] vfs_ioctl+0x22/0x87
May 5 20:32:37 localhost kernel:  [<ffffffff810e2783>] do_vfs_ioctl+0x462/0x4a3
May 5 20:32:37 localhost kernel:  [<ffffffff810e281a>] sys_ioctl+0x56/0x79
May 5 20:32:37 localhost kernel:  [<ffffffff810113ba>] system_call_fastpath+0x16/0x1b
May 5 20:32:37 localhost kernel: Code: 48 89 e5 0f 1f 44 00 00 31 c0 48 c7 86 80 00 00 00 e0 05 a2 a0 c9 c3 55 48 89 e5 0f 1f 44 00 00 80 3d e5 52 02 00 00 74 02 eb fe <0f> 0b eb fe 55 48 89 e5 41 54 53 0f 1f 44 00 00 31 db 49 89 fc
May 5 20:32:37 localhost kernel: RIP  [<ffffffffa09fc1af>] kvm_handle_fault_on_reboot+0x14/0x18 [kvm]
May 5 20:32:37 localhost kernel: RSP <ffff8801628cfcd8>
May 5 20:32:37 localhost kernel: ---[ end trace 920ee28f30d95939 ]---
May 5 20:32:38 localhost avahi-daemon[1934]: Registering new address record for fe80::3006:41ff:fe8d:a6fd on vnet0.*.
May 5 20:32:40 localhost ntpd[2194]: Listening on interface #12 vnet0, fe80::3006:41ff:fe8d:a6fd#123 Enabled
May 5 20:32:42 localhost nm-system-settings: Added default wired connection 'Auto vnet0' for /org/freedesktop/Hal/devices/net_32_06_41_8d_a6_fd
May 5 20:32:43 localhost kerneloops: Submitted 1 kernel oopses to www.kerneloops.org

Version-Release number of selected component (if applicable):
virt-viewer-0.0.3-4.fc11.x86_64
libvirt-python-0.6.2-3.fc11.x86_64
libvirt-0.6.2-3.fc11.x86_64
virt-manager-0.7.0-4.fc11.x86_64
python-virtinst-0.400.3-7.fc11.noarch
qemu-0.10-16.fc11.x86_64

How reproducible:

Steps to Reproduce:
1. Try to create a RHEL 5.3 VM with the virt-manager GUI, using the official RHEL ISO.
2. The kernel oopses when the VM console starts up.

Actual results:
kerneloops

Expected results:
Clean creation of a RHEL VM.

Additional info:
Summary:

- 2.6.29.1-111.fc11.x86_64 on an amd64 machine
- seen while installing RHEL 5.3
- Hitting this BUG():

asmlinkage void kvm_handle_fault_on_reboot(void)
{
        if (kvm_rebooting)
                /* spin while reset goes on */
                while (true)
                        ;
        /* Fault while not rebooting.  We want the trace. */
        BUG();
}
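For context on why the oops RIP lands in kvm_handle_fault_on_reboot while the call trace points at svm_vcpu_run: kvm wraps VMRUN (and the other hardware-virtualization instructions) in an exception fixup that redirects control to this handler if the instruction faults, e.g. because SVM was switched off underneath kvm. With kvm_rebooting false, that path ends in the BUG() above. Below is a rough userspace model of that control flow, using a signal handler in place of the kernel's exception table; the ud2 stand-in and all names are purely illustrative, not the kernel code.

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static volatile sig_atomic_t rebooting;   /* stands in for kvm_rebooting */
static sigjmp_buf fixup;

static void fault_handler(int sig)
{
        (void)sig;
        siglongjmp(fixup, 1);             /* "jump to the fixup section" */
}

static void handle_fault_on_reboot(void)
{
        if (rebooting)
                for (;;)
                        ;                 /* spin while reset goes on */
        fprintf(stderr, "fault while not rebooting: this is the BUG() path\n");
        abort();                          /* stand-in for BUG() */
}

int main(void)
{
        signal(SIGILL, fault_handler);

        if (sigsetjmp(fixup, 1) == 0) {
                /* Stand-in for VMRUN: ud2 always raises #UD here, much like
                 * VMRUN does once SVM has been disabled underneath kvm. */
                __asm__ volatile("ud2");
        } else {
                handle_fault_on_reboot();
        }
        return 0;
}

Run on x86, this aborts with rebooting == 0, which mirrors the situation in the oops: the fault arrives outside a reboot, so the handler deliberately takes the fatal path.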
Possibly relevant thread:
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/25326

Plenty of examples of this on kerneloops.org:
http://www.kerneloops.org/search.php?search=kvm_handle_fault_on_reboot
So the trigger here is the vbox driver. From kerneloops:

Module stats
Oops number: #79008 (107 times)
  vboxdrv    106

kvm_handle_fault_on_reboot should handle these exceptions in a nicer way.
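To make that suggestion concrete, here is a rough sketch of what "nicer" could mean: warn and return an error that the vcpu-run path can hand back to qemu-kvm, so only the affected guest dies instead of the host oopsing. The function name, message, and errno choice are illustrative only; this is not an upstream patch.

#include <errno.h>
#include <stdio.h>

static volatile int kvm_rebooting;        /* 0 here: the fault came out of the blue */

static int handle_fault_on_reboot_nicer(void)
{
        if (kvm_rebooting)
                for (;;)
                        ;                 /* spin while reset goes on */

        /* Fault while not rebooting: most likely another module (vboxdrv in
         * these reports) disabled SVM/VMX underneath kvm.  Warn and fail the
         * vcpu instead of taking the whole host down with BUG(). */
        fprintf(stderr, "kvm: hardware virtualization fault outside of reboot; "
                        "was SVM/VMX disabled by another module?\n");
        return -EIO;
}

int main(void)
{
        /* The vcpu-run ioctl path could propagate this to qemu-kvm, which can
         * then shut the guest down cleanly while the host keeps running. */
        int err = handle_fault_on_reboot_nicer();
        printf("KVM_RUN would return %d to userspace\n", err);
        return 0;
}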
(In reply to comment #3)
> So the trigger here is the vbox driver.

Indeed, I missed this:

  qemu-kvm Tainted: P D

Closing as WONTFIX; if upstream can make the warning go away, that would be good, but working around issues caused by proprietary modules isn't a priority for us in Fedora.