Bug 168907 - kernel: Unable to handle kernel paging request
Status: CLOSED NOTABUG
Product: Fedora
Classification: Fedora
Component: kernel
Hardware: x86_64 Linux
Severity: medium
Assigned To: Dave Jones
QA Contact: Brian Brock
Reported: 2005-09-21 02:34 EDT by Bent Terp
Modified: 2015-01-04 17:22 EST

Doc Type: Bug Fix
Last Closed: 2005-09-24 01:45:42 EDT

Attachments: None
Description Bent Terp 2005-09-21 02:34:02 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.10) Gecko/20050720 Fedora/1.0.6-1.1.fc3 Firefox/1.0.6

Description of problem:
Occasional oops entries appear in /var/log/messages. The server keeps running, though the log sometimes recommends a reboot. Hardware is a dual Opteron on a Tyan 2882 motherboard.

Sep 21 04:07:19 gemini kernel: Unable to handle kernel paging request at 0000000000004b30 RIP:
Sep 21 04:07:19 gemini kernel: <ffffffff80163215>{kfree+104}
Sep 21 04:07:19 gemini kernel: PGD 68a9c067 PUD ff99067 PMD 0
Sep 21 04:07:19 gemini kernel: Oops: 0000 [2] SMP
Sep 21 04:07:19 gemini kernel: CPU 0
Sep 21 04:07:19 gemini kernel: Modules linked in: vmnet(U) parport_pc vmmon(U) iptable_nat iptable_mangle ipt_REJECT ipt_sta
te ip_conntrack iptable_filter ip_tables nfsd lockd lp parport md5 ipv6 autofs4 sunrpc pcmcia yenta_socket rsrc_nonstatic pc
mcia_core xfs exportfs dm_mod ohci_hcd i2c_amd8111 i2c_amd756 i2c_core tg3 floppy ext3 jbd 3w_xxxx sata_sil libata sd_mod sc
si_mod
Sep 21 04:07:19 gemini kernel: Pid: 97, comm: kswapd0 Tainted: P   M  2.6.12-1.1376_FC3smp
Sep 21 04:07:19 gemini kernel: RIP: 0010:[<ffffffff80163215>] <ffffffff80163215>{kfree+104}
Sep 21 04:07:19 gemini kernel: RSP: 0018:ffff810037c0fd68  EFLAGS: 00010013
Sep 21 04:07:19 gemini kernel: RAX: ffffffff7fffffff RBX: ffff810052b9ced0 RCX: 0000000000000000
Sep 21 04:07:19 gemini kernel: RDX: ffff8100765f1860 RSI: ffffffff80408260 RDI: 0000000000000003
Sep 21 04:07:19 gemini kernel: RBP: 0000000000000003 R08: 0000000000000003 R09: 0000000000000000
Sep 21 04:07:19 gemini kernel: R10: ffffffff80525ba0 R11: ffffffff801962b0 R12: ffff810052b9ced8
Sep 21 04:07:19 gemini kernel: R13: 000000000000006e R14: 00000000000d29bd R15: 0000000000000080
Sep 21 04:07:19 gemini kernel: FS:  00002aaaaaac1b00(0000) GS:ffffffff804e0c80(0000) knlGS:00000000f7fe76c0
Sep 21 04:07:19 gemini kernel: CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
Sep 21 04:07:20 gemini kernel: CR2: 0000000000004b30 CR3: 000000015d0db000 CR4: 00000000000006e0
Sep 21 04:07:20 gemini kernel: Process kswapd0 (pid: 97, threadinfo ffff810037c0e000, task ffff810105187740)
Sep 21 04:07:20 gemini kernel: Stack: 0000000000000206 ffff810052b9ced0 ffff810052b9ced0 ffffffff80192ce0
Sep 21 04:07:20 gemini kernel:        ffff810053b05a48 ffffffff801933c2 ffff81000384e038 0000000000000021
Sep 21 04:07:20 gemini kernel:        ffff810037ff8ec0 000000000000008f
Sep 21 04:07:20 gemini kernel: Call Trace:<ffffffff80192ce0>{d_free+43} <ffffffff801933c2>{prune_dcache+418}
Sep 21 04:07:20 gemini kernel:        <ffffffff80193909>{shrink_dcache_memory+23} <ffffffff801661c4>{shrink_slab+188}
Sep 21 04:07:20 gemini kernel:        <ffffffff8016767c>{balance_pgdat+583} <ffffffff80167901>{kswapd+313}
Sep 21 04:07:20 gemini kernel:        <ffffffff8014a285>{autoremove_wake_function+0} <ffffffff8014a285>{autoremove_wake_function+0}
Sep 21 04:07:20 gemini kernel:        <ffffffff8012fdca>{schedule_tail+57} <ffffffff8010f497>{child_rip+8}
Sep 21 04:07:20 gemini kernel:        <ffffffff801677c8>{kswapd+0} <ffffffff8010f48f>{child_rip+0}
Sep 21 04:07:20 gemini kernel:
Sep 21 04:07:20 gemini kernel:
Sep 21 04:07:20 gemini kernel: Code: 48 8b 91 30 4b 00 00 76 07 b8 00 00 00 80 eb 0a 48 b8 00 00
Sep 21 04:07:20 gemini kernel: RIP <ffffffff80163215>{kfree+104} RSP <ffff810037c0fd68>
Sep 21 04:07:20 gemini kernel: CR2: 0000000000004b30
Sep 21 04:07:20 gemini kernel:  <3>Debug: sleeping function called from invalid context at include/linux/rwsem.h:43
Sep 21 04:07:20 gemini kernel: in_atomic():0, irqs_disabled():1
Sep 21 04:07:20 gemini kernel:
Sep 21 04:07:20 gemini kernel: Call Trace:<ffffffff8012ff03>{__might_sleep+193} <ffffffff80136842>{profile_task_exit+34}
Sep 21 04:07:20 gemini kernel:        <ffffffff80137e84>{do_exit+34} <ffffffff80202b52>{vgacon_cursor+228}
Sep 21 04:07:20 gemini kernel:        <ffffffff801225f8>{do_page_fault+1904} <ffffffff8010f2e1>{error_exit+0}
Sep 21 04:07:20 gemini kernel:        <ffffffff801962b0>{generic_drop_inode+0} <ffffffff80163215>{kfree+104}
Sep 21 04:07:20 gemini kernel:        <ffffffff80192ce0>{d_free+43} <ffffffff801933c2>{prune_dcache+418}
Sep 21 04:07:20 gemini kernel:        <ffffffff80193909>{shrink_dcache_memory+23} <ffffffff801661c4>{shrink_slab+188}
Sep 21 04:07:20 gemini kernel:        <ffffffff8016767c>{balance_pgdat+583} <ffffffff80167901>{kswapd+313}
Sep 21 04:07:20 gemini kernel:        <ffffffff8014a285>{autoremove_wake_function+0} <ffffffff8014a285>{autoremove_wake_function+0}
Sep 21 04:07:20 gemini kernel:        <ffffffff8012fdca>{schedule_tail+57} <ffffffff8010f497>{child_rip+8}
Sep 21 04:07:20 gemini kernel:        <ffffffff801677c8>{kswapd+0} <ffffffff8010f48f>{child_rip+0}
Sep 21 04:07:20 gemini kernel:
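For orientation: the faulting address in CR2 (0000000000004b30) is small, which in an oops like this usually means a struct field was dereferenced through a NULL (or poisoned) pointer rather than through a random wild pointer — consistent with the Code bytes showing a load at offset 0x4b30. A rough triage heuristic can be sketched as follows; the cutoffs are illustrative, not kernel constants:

```python
def classify_fault_addr(addr: int, page_size: int = 4096) -> str:
    """Rough triage of an oops faulting address (the CR2 value).

    Heuristic only: tiny addresses usually mean a NULL dereference,
    small ones a struct field read through a NULL-ish pointer, and
    huge ones pointer corruption.
    """
    if addr < page_size:
        return "NULL dereference (address within the first page)"
    if addr < 0x10000:  # illustrative cutoff for "NULL + struct offset"
        return "likely NULL pointer plus a struct-field offset"
    return "wild or corrupted pointer"

print(classify_fault_addr(0x4B30))  # the CR2 value from this oops
```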


Version-Release number of selected component (if applicable):
kernel-smp-2.6.12-1.1376_FC3

How reproducible:
Sometimes

Steps to Reproduce:
1. Generate high demands for memory
2. Wait
3. Check log files
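The steps above boil down to forcing enough memory pressure that kswapd starts shrinking the slab caches (the oops is in prune_dcache, the dentry-cache shrinker). A minimal, hedged sketch of step 1, assuming nothing about the original workload — the sizes here are deliberately tiny and illustrative, nowhere near the sustained pressure the report implies:

```python
import os
import tempfile

def churn_dentries_and_memory(n_files: int = 100, chunk_kb: int = 64) -> int:
    """Create and stat many short-lived files (which populates the
    dentry cache) while holding anonymous memory, so that later
    reclaim has both pages and slab objects to shrink.
    """
    hog = []
    with tempfile.TemporaryDirectory() as d:
        for i in range(n_files):
            path = os.path.join(d, f"f{i}")
            with open(path, "w") as f:
                f.write("x")
            os.stat(path)                           # touches the dentry
            hog.append(bytearray(chunk_kb * 1024))  # holds anonymous memory
    return len(hog)

print(churn_dentries_and_memory())
```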

  

Additional info:
Comment 1 Bent Terp 2005-09-21 02:40:47 EDT
This bug bears more than a passing resemblance to bug #138990, so I added Dave as cc:, given that I'm not allowed to merely reopen the old one.
/Benty
Comment 2 Dave Jones 2005-09-24 01:45:42 EDT
Most of the time, when the core VM code that scans lots of lists oopses, it's due to either bad memory or memory corruption. Can you reproduce this without the vmware modules ever being loaded?

You've also hit a machine check exception (note the 'M' flag in the tainted output). This usually indicates a hardware fault of some sort: bad RAM (run memtest overnight to see if it picks anything up), insufficient cooling, or an underpowered PSU.

You may also want to update to the latest errata kernel, though in this case it does sound like a hardware problem of some sort.
Comment 3 Dan Carpenter 2005-10-11 14:09:05 EDT
Was it a hardware problem?

If not, how much memory do you have and what does your /proc/mtrr look like?
