Bug 618486 - KVM guest occasionally hangs during kdump with CPU spinning in qemu-kvm
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.0
Hardware: All
OS: Linux
Priority: medium
Severity: high
Target Milestone: beta
Target Release: ---
Assigned To: Gleb Natapov
QA Contact: Virtualization Bugs
Depends On:
Blocks: 524819 Rhel6KvmTier1
Reported: 2010-07-26 23:32 EDT by CAI Qian
Modified: 2013-12-08 19:50 EST (History)
CC List: 13 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 612421
Environment:
Last Closed: 2010-12-27 04:22:47 EST
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Comment 1 Neil Horman 2010-07-29 11:02:57 EDT
Cai, it sounds like this is fixed; can it be closed?
Comment 2 CAI Qian 2010-07-29 11:05:25 EDT
Yes, the host soft lockup (612421) was fixed; this BZ tracks an additional issue.

Occasionally, one of the guests can still get stuck before the kdump kernel starts, even with the latest bits, while qemu-kvm consumes 100% CPU:

qemu-kvm      S ffff88013bc23080     0  2843      1 0x00000000
ffff8801154c1908 0000000000000082 0000000000000000 ffffffff81095a43
ffff880100000000 0000000181094adb ffff8801154c18a8 00000001000f7a65
ffff880133a27028 ffff8801154c1fd8 0000000000010518 ffff880133a27028
Call Trace:
[<ffffffff81095a43>] ? __hrtimer_start_range_ns+0x1a3/0x430
[<ffffffff81013c8e>] ? apic_timer_interrupt+0xe/0x20
[<ffffffff814cd4d8>] schedule_hrtimeout_range+0xc8/0x160
[<ffffffff81094cf0>] ? hrtimer_wakeup+0x0/0x30
[<ffffffff81095d04>] ? hrtimer_start_range_ns+0x14/0x20
[<ffffffff8117f629>] poll_schedule_timeout+0x39/0x60
[<ffffffff8117fca8>] do_select+0x588/0x6c0
[<ffffffff8117fde0>] ? __pollwait+0x0/0xf0
[<ffffffff8117fed0>] ? pollwake+0x0/0x60
[<ffffffff8117fed0>] ? pollwake+0x0/0x60
[<ffffffff8117fed0>] ? pollwake+0x0/0x60
[<ffffffff8117fed0>] ? pollwake+0x0/0x60
[<ffffffff8117fed0>] ? pollwake+0x0/0x60
[<ffffffff8117fed0>] ? pollwake+0x0/0x60
[<ffffffff8117fed0>] ? pollwake+0x0/0x60
[<ffffffff814cd03e>] ? mutex_lock+0x1e/0x50
[<ffffffff81174e97>] ? pipe_read+0x2a7/0x4e0
[<ffffffff811808ca>] core_sys_select+0x18a/0x2c0
[<ffffffff81091940>] ? autoremove_wake_function+0x0/0x40
[<ffffffff8109be89>] ? ktime_get_ts+0xa9/0xe0
[<ffffffff81180c57>] sys_select+0x47/0x110
[<ffffffff81013172>] system_call_fastpath+0x16/0x1b
qemu-kvm      S ffff880115210038     0  2845      1 0x00000000
ffff8801229bfc68 0000000000000082 ffff880118d9e00e 0000000000000061
ffff880126a9d5a8 ffff880126a9d538 ffff8801229bfc18 ffffffffa02e8115
ffff88011fc24638 ffff8801229bffd8 0000000000010518 ffff88011fc24638
Call Trace:
[<ffffffffa02e8115>] ? __vmx_load_host_state+0xf5/0x110 [kvm_intel]
[<ffffffffa02e813e>] ? vmx_vcpu_put+0xe/0x10 [kvm_intel]
[<ffffffffa0370fab>] ? kvm_arch_vcpu_put+0x1b/0x50 [kvm]
[<ffffffffa036ca25>] kvm_vcpu_block+0x75/0xc0 [kvm]
[<ffffffff81091940>] ? autoremove_wake_function+0x0/0x40
[<ffffffffa037e77d>] kvm_arch_vcpu_ioctl_run+0x45d/0xd90 [kvm]
[<ffffffffa036a1f2>] kvm_vcpu_ioctl+0x522/0x670 [kvm]
[<ffffffff810a5280>] ? do_futex+0x100/0xb00
[<ffffffff8117d732>] vfs_ioctl+0x22/0xa0
[<ffffffff8117dbfa>] do_vfs_ioctl+0x3aa/0x580
[<ffffffff8117de51>] sys_ioctl+0x81/0xa0
[<ffffffff81013172>] system_call_fastpath+0x16/0x1b
qemu-kvm      R  running task        0  2846      1 0x00000000
ffff880118d8bc88 ffffffff814cbc86 000000004c4e2402 ffff880115288089
ffffffff81013ace ffff880118d8bc88 0000000000000005 0000000000000001
ffff8801152c5a98 ffff880118d8bfd8 0000000000010518 ffff8801152c5aa0
Call Trace:
[<ffffffff814cbc86>] ? thread_return+0x4e/0x778
[<ffffffff81013ace>] ? common_interrupt+0xe/0x13
[<ffffffff81013c8e>] ? apic_timer_interrupt+0xe/0x20
[<ffffffffa037e66c>] ? kvm_arch_vcpu_ioctl_run+0x34c/0xd90 [kvm]
[<ffffffffa036a1f2>] ? kvm_vcpu_ioctl+0x522/0x670 [kvm]
[<ffffffff81133ff7>] ? handle_pte_fault+0xf7/0xa40
[<ffffffff81013e2e>] ? call_function_single_interrupt+0xe/0x20
[<ffffffff8117d732>] ? vfs_ioctl+0x22/0xa0
[<ffffffff8117dbfa>] ? do_vfs_ioctl+0x3aa/0x580
[<ffffffff8117de51>] ? sys_ioctl+0x81/0xa0
[<ffffffff81013172>] ? system_call_fastpath+0x16/0x1b
kvm-pit-wq    S ffffe8ffffc11ec8     0  2844      2 0x00000000
ffff88011fe21e30 0000000000000046 0000000000000000 0000000000000000
0000000000000000 ffff8800bcfdb4e0 ffff880028216980 0000000100044733
ffff8801152c5068 ffff88011fe21fd8 0000000000010518 ffff8801152c5068
Call Trace:
[<ffffffff81091c2e>] ? prepare_to_wait+0x4e/0x80
[<ffffffffa0392410>] ? pit_do_work+0x0/0xf0 [kvm]
[<ffffffff8108c33c>] worker_thread+0x1fc/0x2a0
[<ffffffff81091940>] ? autoremove_wake_function+0x0/0x40
[<ffffffff8108c140>] ? worker_thread+0x0/0x2a0
[<ffffffff810915d6>] kthread+0x96/0xa0
[<ffffffff810141ca>] child_rip+0xa/0x20
[<ffffffff81091540>] ? kthread+0x0/0xa0
[<ffffffff810141c0>] ? child_rip+0x0/0x20
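The per-task dumps above show three qemu-kvm threads sleeping in S state and one stuck runnable (R, PID 2846, looping through kvm_vcpu_ioctl). Once such a dump is saved to a file, the spinning thread can be picked out mechanically. A minimal sketch, assuming a hypothetical path /tmp/sysrq-t.txt holding a dump with the same column layout as above:

```shell
# Minimal sketch (path and sample lines hypothetical): save per-task stack
# headers in the format shown above, then grep for tasks in the R (running)
# state -- with the guest hung, the spinning vCPU thread stays runnable
# while its siblings sleep in S.
cat > /tmp/sysrq-t.txt <<'EOF'
qemu-kvm      S ffff88013bc23080     0  2843      1 0x00000000
qemu-kvm      R  running task        0  2846      1 0x00000000
EOF

# Second whitespace-separated column is the task state; keep only R tasks.
grep -E '^\S+\s+R\b' /tmp/sysrq-t.txt
```

The PID in the matching line (2846 here) identifies the thread to inspect further on the host, e.g. with gdb or perf attached to qemu-kvm.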
Comment 3 Neil Horman 2010-11-10 13:16:22 EST
Hmm, this sounds like a KVM issue. Reassigning component.
