| Summary: | __packet_get_status unable to handle kernel paging request | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Suqin Huang <shuang> |
| Component: | kernel | Assignee: | Red Hat Kernel Manager <kernel-mgr> |
| Status: | CLOSED DUPLICATE | QA Contact: | Red Hat Kernel QE team <kernel-qe> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 6.0 | CC: | khong, mst, tburke |
| Target Milestone: | rc | Keywords: | TestBlocker |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2011-03-03 08:07:53 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Bug Depends On: | 580951 | | |
| Bug Blocks: | | | |
| Attachments: | debug (attachment 481528) | | |
Description (Suqin Huang, 2011-02-28 08:46:12 UTC)
from the result we tested before, it works in 2.6.32-71.12.1.el6.x86_64

Created attachment 481528 [details]
debug

(In reply to comment #3)
> from the result we tested before, it works in 2.6.32-71.12.1.el6.x86_64

Do you mean it is a regression? Will it happen without vhost loaded?

(In reply to comment #5)
> (In reply to comment #3)
> > from the result we tested before, it works in 2.6.32-71.12.1.el6.x86_64
>
> Do you mean it is a regression?

From the acceptance testing we ran before, it works in 2.6.32-71.12.1.el6.x86_64, but that kernel has since been deleted, so I cannot test it any more. This issue can also be reproduced in 2.6.32-71.14.1.el6.x86_64.

Testing with vhost and trying to get a complete log; will report the result soon.

Can reproduce with vhost=on:
1. cmd:
qemu-kvm -drive file='/usr/images/RHEL-Server-6.0-64-virtio.qcow2',index=0,if=none,id=drive-virtio-disk1,media=disk,cache=none,format=qcow2,aio=native -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk1,id=virtio-disk1 -device virtio-net-pci,netdev=idvx5Ue1,mac=9a:f1:48:07:aa:1f,id=ndev00idvx5Ue1,bus=pci.0,addr=0x3 -netdev tap,id=idvx5Ue1,vhost=on,script='/usr/scripts/qemu-ifup-switch',downscript='no' -m 2048 -smp 2,cores=1,threads=1,sockets=2 -cpu cpu64-rhel6,+sse2,+x2apic -vnc :1 -rtc base=utc,clock=host,driftfix=none -M rhel6.0.0 -boot order=cdn,once=c,menu=off -usbdevice tablet -no-kvm-pit-reinjection -enable-kvm -incoming tcp:0:5200
2. vmcore:
PID: 9495 TASK: ffff88020e9f54e0 CPU: 1 COMMAND: "tcpdump"
#0 [ffff880215949790] machine_kexec at ffffffff8103697b
#1 [ffff8802159497f0] crash_kexec at ffffffff810b9078
#2 [ffff8802159498c0] oops_end at ffffffff814cc900
#3 [ffff8802159498f0] no_context at ffffffff8104652b
#4 [ffff880215949940] __bad_area_nosemaphore at ffffffff810467b5
#5 [ffff880215949990] bad_area_nosemaphore at ffffffff81046883
#6 [ffff8802159499a0] do_page_fault at ffffffff814ce388
#7 [ffff8802159499f0] page_fault at ffffffff814cbc75
[exception RIP: __packet_get_status+58]
RIP: ffffffff814a024a RSP: ffff880215949aa8 RFLAGS: 00010213
RAX: 0000780000001000 RBX: 0000000000001000 RCX: ffff8802141aed80
RDX: 0000000000000000 RSI: 0000000000001000 RDI: 0000000000001000
RBP: ffff880215949ab8 R8: ffff880215948000 R9: 0000000000000000
R10: 0000000000000000 R11: 0000000000000001 R12: 0000000000001000
R13: ffff8802155d7cc4 R14: ffff88021472aec0 R15: 0000000000000000
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
#8 [ffff880215949ac0] packet_lookup_frame at ffffffff814a0288
#9 [ffff880215949ae0] packet_poll at ffffffff814a0d0c
#10 [ffff880215949b10] sock_poll at ffffffff813fb5ca
#11 [ffff880215949b20] do_sys_poll at ffffffff8118274b
#12 [ffff880215949f40] sys_poll at ffffffff81182bcc
#13 [ffff880215949f80] system_call_fastpath at ffffffff81013172
RIP: 00007fad0e30cdf8 RSP: 00007fff3a7d2d50 RFLAGS: 00010286
RAX: 0000000000000007 RBX: ffffffff81013172 RCX: ffffffffffffffff
RDX: 00000000000003e8 RSI: 0000000000000001 RDI: 00007fff3a7d3830
RBP: 00000000000003e8 R8: 0000000000000000 R9: 0000000000000001
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000451980 R14: 00007fff3a7d3830 R15: 0000000001322360
ORIG_RAX: 0000000000000007 CS: 0033 SS: 002b
crash>
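For context on the faulting frame: __packet_get_status() is the helper that packet_poll() reaches via packet_lookup_frame() to read the status word of the current frame header in the mmap()ed AF_PACKET RX ring shared with tcpdump. The sketch below is a simplified rendering based on upstream 2.6.32-era net/packet/af_packet.c, not necessarily the exact RHEL 6 source; it shows that the first real work the function does is dereference memory inside that shared ring, which is why a bad mapping behind the ring surfaces as "unable to handle kernel paging request" at __packet_get_status+58.

```c
/*
 * Simplified sketch of __packet_get_status() from net/packet/af_packet.c
 * (upstream 2.6.32-era code; the RHEL 6 kernel may differ in detail).
 * struct tpacket_hdr / tpacket2_hdr come from <linux/if_packet.h>;
 * struct packet_sock is internal to af_packet.c.
 */
static int __packet_get_status(struct packet_sock *po, void *frame)
{
	union {
		struct tpacket_hdr  *h1;   /* TPACKET_V1 frame header */
		struct tpacket2_hdr *h2;   /* TPACKET_V2 frame header */
		void *raw;
	} h;

	smp_rmb();          /* pairs with the writer's barrier in __packet_set_status() */

	h.raw = frame;      /* frame points into the mmap()ed ring shared with userspace */
	switch (po->tp_version) {
	case TPACKET_V1:
		return h.h1->tp_status;   /* this load faults if the page backing
		                           * `frame` is no longer validly mapped,
		                           * as in this vmcore */
	case TPACKET_V2:
		return h.h2->tp_status;
	default:
		return 0;                 /* unknown ring version */
	}
}
```

Read this way, the oops points at the memory backing the shared packet ring rather than at tcpdump or the packet code itself, which fits the later conclusion in this thread that the problem only appears with vhost=on and is a duplicate of bug 623915.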
I am confused, sorry.
Which host kernel does have a problem?
Which host kernel does not?
You list one qemu command. Since this is during
You say:
>2. can not reproduce in rhel6.1 host
>2.6.32-118.el6.x86_64
so in which host does it reproduce?
2.6.32-71.18.1.el6.x86_64?
Also, does it or does it not reproduce without vhost=on?

(In reply to comment #12)
> I am confused, sorry.
> Which host kernel does have a problem?
> Which host kernel does not?
>
> You list one qemu command. Since this is during
>
> You say:
> >2. can not reproduce in rhel6.1 host
> >2.6.32-118.el6.x86_64
>
> so in which host does it reproduce?
> 2.6.32-71.18.1.el6.x86_64?

Reproduces in 2.6.32-71.18.1.el6.x86_64.

So it's a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=623915 ?
Does it happen without vhost=on or not?

(In reply to comment #15)
> So it's a duplicate of
> https://bugzilla.redhat.com/show_bug.cgi?id=623915

It blocks RHEL6.0Z migration testing; can you clone it to RHEL6.0Z, or change this one to RHEL6.0Z?

> ?
> Does it happen without vhost=on or not?

Repeated 10 times without vhost=on; cannot reproduce.

So this is definitely a duplicate of bug 623915. Marking it as such.

*** This bug has been marked as a duplicate of bug 623915 ***

Re comment 16: please do not enable vhost in 6.0 at all.