Bug 618486 - KVM guest occasionally hangs during kdump with CPU spinning in qemu-kvm
Summary: KVM guest occasionally hangs during kdump with CPU spinning in qemu-kvm
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.0
Hardware: All
OS: Linux
Priority: medium
Severity: high
Target Milestone: beta
Target Release: ---
Assignee: Gleb Natapov
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 524819 Rhel6KvmTier1
 
Reported: 2010-07-27 03:32 UTC by Qian Cai
Modified: 2013-12-09 00:50 UTC
CC List: 13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 612421
Environment:
Last Closed: 2010-12-27 09:22:47 UTC
Target Upstream Version:
Embargoed:



Comment 1 Neil Horman 2010-07-29 15:02:57 UTC
Cai, it sounds like this is fixed. Can it be closed?

Comment 2 Qian Cai 2010-07-29 15:05:25 UTC
Yes, the host soft lockup (bug 612421) was fixed; this BZ tracks an additional issue.

Occasionally, one of the guests can still get stuck before the kdump kernel starts, even
with the latest bits, and qemu-kvm consumes 100% CPU:

qemu-kvm      S ffff88013bc23080     0  2843      1 0x00000000
ffff8801154c1908 0000000000000082 0000000000000000 ffffffff81095a43
ffff880100000000 0000000181094adb ffff8801154c18a8 00000001000f7a65
ffff880133a27028 ffff8801154c1fd8 0000000000010518 ffff880133a27028
Call Trace:
[<ffffffff81095a43>] ? __hrtimer_start_range_ns+0x1a3/0x430
[<ffffffff81013c8e>] ? apic_timer_interrupt+0xe/0x20
[<ffffffff814cd4d8>] schedule_hrtimeout_range+0xc8/0x160
[<ffffffff81094cf0>] ? hrtimer_wakeup+0x0/0x30
[<ffffffff81095d04>] ? hrtimer_start_range_ns+0x14/0x20
[<ffffffff8117f629>] poll_schedule_timeout+0x39/0x60
[<ffffffff8117fca8>] do_select+0x588/0x6c0
[<ffffffff8117fde0>] ? __pollwait+0x0/0xf0
[<ffffffff8117fed0>] ? pollwake+0x0/0x60
[<ffffffff8117fed0>] ? pollwake+0x0/0x60
[<ffffffff8117fed0>] ? pollwake+0x0/0x60
[<ffffffff8117fed0>] ? pollwake+0x0/0x60
[<ffffffff8117fed0>] ? pollwake+0x0/0x60
[<ffffffff8117fed0>] ? pollwake+0x0/0x60
[<ffffffff8117fed0>] ? pollwake+0x0/0x60
[<ffffffff814cd03e>] ? mutex_lock+0x1e/0x50
[<ffffffff81174e97>] ? pipe_read+0x2a7/0x4e0
[<ffffffff811808ca>] core_sys_select+0x18a/0x2c0
[<ffffffff81091940>] ? autoremove_wake_function+0x0/0x40
[<ffffffff8109be89>] ? ktime_get_ts+0xa9/0xe0
[<ffffffff81180c57>] sys_select+0x47/0x110
[<ffffffff81013172>] system_call_fastpath+0x16/0x1b
qemu-kvm      S ffff880115210038     0  2845      1 0x00000000
ffff8801229bfc68 0000000000000082 ffff880118d9e00e 0000000000000061
ffff880126a9d5a8 ffff880126a9d538 ffff8801229bfc18 ffffffffa02e8115
ffff88011fc24638 ffff8801229bffd8 0000000000010518 ffff88011fc24638
Call Trace:
[<ffffffffa02e8115>] ? __vmx_load_host_state+0xf5/0x110 [kvm_intel]
[<ffffffffa02e813e>] ? vmx_vcpu_put+0xe/0x10 [kvm_intel]
[<ffffffffa0370fab>] ? kvm_arch_vcpu_put+0x1b/0x50 [kvm]
[<ffffffffa036ca25>] kvm_vcpu_block+0x75/0xc0 [kvm]
[<ffffffff81091940>] ? autoremove_wake_function+0x0/0x40
[<ffffffffa037e77d>] kvm_arch_vcpu_ioctl_run+0x45d/0xd90 [kvm]
[<ffffffffa036a1f2>] kvm_vcpu_ioctl+0x522/0x670 [kvm]
[<ffffffff810a5280>] ? do_futex+0x100/0xb00
[<ffffffff8117d732>] vfs_ioctl+0x22/0xa0
[<ffffffff8117dbfa>] do_vfs_ioctl+0x3aa/0x580
[<ffffffff8117de51>] sys_ioctl+0x81/0xa0
[<ffffffff81013172>] system_call_fastpath+0x16/0x1b
qemu-kvm      R  running task        0  2846      1 0x00000000
ffff880118d8bc88 ffffffff814cbc86 000000004c4e2402 ffff880115288089
ffffffff81013ace ffff880118d8bc88 0000000000000005 0000000000000001
ffff8801152c5a98 ffff880118d8bfd8 0000000000010518 ffff8801152c5aa0
Call Trace:
[<ffffffff814cbc86>] ? thread_return+0x4e/0x778
[<ffffffff81013ace>] ? common_interrupt+0xe/0x13
[<ffffffff81013c8e>] ? apic_timer_interrupt+0xe/0x20
[<ffffffffa037e66c>] ? kvm_arch_vcpu_ioctl_run+0x34c/0xd90 [kvm]
[<ffffffffa036a1f2>] ? kvm_vcpu_ioctl+0x522/0x670 [kvm]
[<ffffffff81133ff7>] ? handle_pte_fault+0xf7/0xa40
[<ffffffff81013e2e>] ? call_function_single_interrupt+0xe/0x20
[<ffffffff8117d732>] ? vfs_ioctl+0x22/0xa0
[<ffffffff8117dbfa>] ? do_vfs_ioctl+0x3aa/0x580
[<ffffffff8117de51>] ? sys_ioctl+0x81/0xa0
[<ffffffff81013172>] ? system_call_fastpath+0x16/0x1b
kvm-pit-wq    S ffffe8ffffc11ec8     0  2844      2 0x00000000
ffff88011fe21e30 0000000000000046 0000000000000000 0000000000000000
0000000000000000 ffff8800bcfdb4e0 ffff880028216980 0000000100044733
ffff8801152c5068 ffff88011fe21fd8 0000000000010518 ffff8801152c5068
Call Trace:
[<ffffffff81091c2e>] ? prepare_to_wait+0x4e/0x80
[<ffffffffa0392410>] ? pit_do_work+0x0/0xf0 [kvm]
[<ffffffff8108c33c>] worker_thread+0x1fc/0x2a0
[<ffffffff81091940>] ? autoremove_wake_function+0x0/0x40
[<ffffffff8108c140>] ? worker_thread+0x0/0x2a0
[<ffffffff810915d6>] kthread+0x96/0xa0
[<ffffffff810141ca>] child_rip+0xa/0x20
[<ffffffff81091540>] ? kthread+0x0/0xa0
[<ffffffff810141c0>] ? child_rip+0x0/0x20
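
For context on reading the traces above: the two qemu-kvm threads inside sys_ioctl are
vCPU threads driving the KVM_RUN ioctl, the one parked in kvm_vcpu_block is a halted vCPU
waiting for an interrupt, and the running task (pid 2846) is the spinner. Below is a
minimal sketch of the userspace loop behind those stacks, against the standard /dev/kvm
ioctl API. This is an illustrative skeleton with a one-instruction guest, not qemu-kvm's
actual code; error handling is omitted.

#include <fcntl.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR);
    int vm  = ioctl(kvm, KVM_CREATE_VM, 0);

    /* One page of guest memory holding a single real-mode instruction:
     * hlt (0xf4).  Without an in-kernel irqchip, executing it makes
     * KVM_RUN return to userspace with KVM_EXIT_HLT. */
    uint8_t *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    mem[0] = 0xf4;
    struct kvm_userspace_memory_region region = {
        .slot            = 0,
        .guest_phys_addr = 0x1000,
        .memory_size     = 0x1000,
        .userspace_addr  = (uint64_t)(uintptr_t)mem,
    };
    ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
    struct kvm_run *run = mmap(NULL, ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0),
                               PROT_READ | PROT_WRITE, MAP_SHARED, vcpu, 0);

    /* Point real-mode CS:IP at the guest code. */
    struct kvm_sregs sregs;
    ioctl(vcpu, KVM_GET_SREGS, &sregs);
    sregs.cs.base = 0;
    sregs.cs.selector = 0;
    ioctl(vcpu, KVM_SET_SREGS, &sregs);

    struct kvm_regs regs;
    memset(&regs, 0, sizeof(regs));
    regs.rip    = 0x1000;
    regs.rflags = 0x2;          /* reserved bit 1 must be set */
    ioctl(vcpu, KVM_SET_REGS, &regs);

    /* This is the loop the "spinning" vCPU thread executes: enter the
     * guest, handle the exit reason, re-enter.  In the trace it shows up
     * as sys_ioctl -> kvm_vcpu_ioctl -> kvm_arch_vcpu_ioctl_run. */
    for (;;) {
        ioctl(vcpu, KVM_RUN, 0);
        if (run->exit_reason == KVM_EXIT_HLT) {
            puts("guest halted");
            return 0;
        }
        /* other exits (port I/O, MMIO, ...) would be emulated here */
    }
}

With qemu-kvm's default in-kernel irqchip, the hlt above would instead block inside the
kernel in kvm_vcpu_block(), which is exactly where vCPU thread 2845 sits in the trace.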

Comment 3 Neil Horman 2010-11-10 18:16:22 UTC
Hmm, this sounds like a KVM issue. Reassigning the component.

