Bug 703583 - Kernel panic in inet_csk_bind_conflict
Summary: Kernel panic in inet_csk_bind_conflict
Keywords:
Status: CLOSED DUPLICATE of bug 590187
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.0
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: high
Target Milestone: rc
Target Release: 6.0
Assignee: Thomas Graf
QA Contact: Network QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-05-10 18:07 UTC by Neal Kim
Modified: 2018-11-14 20:37 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-10-25 10:47:07 UTC
Target Upstream Version:




Links
System            ID      Private  Priority  Status  Summary                             Last Updated
Red Hat Bugzilla  590187  1        None      None    None                                2021-01-20 06:05:38 UTC
Red Hat Bugzilla  703578  0        urgent    CLOSED  Kernel panic in cache_alloc_refill  2021-02-22 00:41:40 UTC

Internal Links: 590187 703578

Description Neal Kim 2011-05-10 18:07:32 UTC
Description of problem:

This is hit very commonly in a test with transparent proxy (TPROXY) under a
load of around 600-700 Mbps, with thousands of connections per second from
around 20,000 unique IP addresses.
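
For reference, a transparent proxy of this kind normally sets IP_TRANSPARENT
on the outgoing TCP socket and bind()s it to the client's (non-local) address
before connecting to the origin server; that bind() is the userspace entry
into the sys_bind -> inet_bind -> inet_csk_get_port -> inet_csk_bind_conflict
path in the trace below. The following is only a minimal sketch of that
general pattern, not the customer's proxy code; the address and port are
placeholders, and it needs CAP_NET_ADMIN to run:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#ifndef IP_TRANSPARENT
#define IP_TRANSPARENT 19   /* from <linux/in.h>, for older userspace headers */
#endif

int main(void)
{
    int one = 1;
    struct sockaddr_in addr;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* Allow binding to an address that is not configured on this host
     * (the spoofed client address a transparent proxy uses). */
    if (setsockopt(fd, IPPROTO_IP, IP_TRANSPARENT, &one, sizeof(one)) < 0) {
        perror("setsockopt(IP_TRANSPARENT)");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(0);            /* 0 = let the kernel pick an ephemeral port */
    inet_pton(AF_INET, "198.51.100.10", &addr.sin_addr);  /* placeholder client IP */

    /* This bind() is what ends up in inet_csk_get_port() and
     * inet_csk_bind_conflict() in the oops below. */
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    close(fd);
    return 0;
}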

s01b01 login: general protection fault: 0000 [#1] SMP
last sysfs file: /sys/kernel/mm/ksm/run
CPU 13
Modules linked in: xt_TPROXY xt_socket nf_conntrack nf_defrag_ipv4
nf_tproxy_core ip6table_mangle ip6_tables iptable_mangle xt_MARK ip_tables
bmnet(P)(U) bmnetpub(U) 8021q garp stp llc bonding ipv6 tcp_westwood dm_mod tun
kvm_intel kvm uinput cdc_ether usbnet mii serio_raw i2c_i801 i2c_core iTCO_wdt
iTCO_vendor_support shpchp ioatdma i7core_edac edac_core mptscsih mptbase bnx2
ixgbe(U) dca mdio ext4 mbcache jbd2

Pid: 24437, comm: webproxy Tainted: P ----------------
2.6.32-71.15.1.el6.x86_64 #1 IBM System x -[7871AC1]-
RIP: 0010:[<ffffffff8144c950>] [<ffffffff8144c950>]
inet_csk_bind_conflict+0x50/0xf0
RSP: 0018:ffff880c21bdbdc8 EFLAGS: 00010286
RAX: ffff880c161d4480 RBX: ffffffff81c9a600 RCX: ffb8756d31e2b65a
RDX: ffb8756d31e2b65a RSI: 0000000000000090 RDI: ffff880581a2e340
RBP: ffff880c21bdbdc8 R08: 000000000100007f R09: 0000000000000000
R10: 0000000000000001 R11: 00000000fe07369e R12: ffffc90016cd99e0
R13: ffff880581a2e340 R14: 0000000000001006 R15: ffff880c63ad45c0
FS: 00007f39732e0f00(0000) GS:ffff8800282e0000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000014c3008 CR3: 0000000588fec000 CR4: 00000000000026e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process webproxy (pid: 24437, threadinfo ffff880c21bda000, task
ffff880c231bcb30)
Stack:
ffff880c21bdbe68 ffffffff8144cba6 ffff880c21bdbe68 ffffffff8147919d
<0> ffff880c21bdbe08 ffffffff814cb656 00000000ffffffff 0000000000001006
<0> ffff880c21bdbe68 ffffffff81401abc 0000000000000000 0100007f00000000
Call Trace:
[<ffffffff8144cba6>] inet_csk_get_port+0x1b6/0x4a0
[<ffffffff8147919d>] ? inet_addr_type+0xad/0x140
[<ffffffff814cb656>] ? _spin_lock_bh+0x16/0x40
[<ffffffff81401abc>] ? lock_sock_nested+0xac/0xc0
[<ffffffff814729ca>] inet_bind+0x10a/0x1f0
[<ffffffff813ff610>] sys_bind+0xd0/0xf0
[<ffffffff813ff6f6>] ? sys_setsockopt+0xc6/0xe0
[<ffffffff81013172>] system_call_fastpath+0x16/0x1b
Code: 48 85 c9 75 26 eb 5c 0f 1f 40 00 8b 77 1c 85 f6 74 59 44 8b 48 1c 45 85
c9 74 50 44 39 ce 74 4b 0f 1f 00 48 85 d2 74 3b 48 89 d1 <48> 8b 11 48 8d 41 e0
48 39 c7 0f 18 0a 74 e9 0f b6 71 fa 40 80
RIP [<ffffffff8144c950>] inet_csk_bind_conflict+0x50/0xf0
RSP <ffff880c21bdbdc8>
---[ end trace f03d1ce078c40c19 ]---
Kernel panic - not syncing: Fatal exception in interrupt
Pid: 24437, comm: webproxy Tainted: P D ----------------
2.6.32-71.15.1.el6.x86_64 #1
Call Trace:
[<ffffffff814c8633>] panic+0x78/0x137
[<ffffffff814cc712>] oops_end+0xf2/0x100
[<ffffffff8101733b>] die+0x5b/0x90
[<ffffffff814cc252>] do_general_protection+0x152/0x160
[<ffffffff814cba25>] general_protection+0x25/0x30
[<ffffffff8144c950>] ? inet_csk_bind_conflict+0x50/0xf0
[<ffffffff8144cba6>] inet_csk_get_port+0x1b6/0x4a0
[<ffffffff8147919d>] ? inet_addr_type+0xad/0x140
[<ffffffff814cb656>] ? _spin_lock_bh+0x16/0x40
[<ffffffff81401abc>] ? lock_sock_nested+0xac/0xc0
[<ffffffff814729ca>] inet_bind+0x10a/0x1f0
[<ffffffff813ff610>] sys_bind+0xd0/0xf0
[<ffffffff813ff6f6>] ? sys_setsockopt+0xc6/0xe0
[<ffffffff81013172>] system_call_fastpath+0x16/0x1b

Version-Release number of selected component (if applicable):

kernel-2.6.32-71.15.1.el6.x86_64


How reproducible:

Very commonly: hit repeatedly in a test with transparent proxy (TPROXY) under a
load of around 600-700 Mbps, with thousands of connections per second from
around 20,000 unique IP addresses.

  
Actual results:

Kernel panic.


Expected results:

No kernel panic.

Comment 2 Neal Kim 2011-05-10 18:18:11 UTC
Hi Neil,

This only happens in the customer's environment. I have been unable to reproduce this panic in our local environment.

Comment 3 RHEL Program Management 2011-05-11 06:00:58 UTC
Since RHEL 6.1 External Beta has begun, and this bug remains
unresolved, it has been rejected, as it is not proposed as an
exception or blocker.

Red Hat invites you to ask your support representative to
propose this request, if appropriate and relevant, in the
next release of Red Hat Enterprise Linux.

Comment 12 Thomas Graf 2011-10-25 10:47:07 UTC

*** This bug has been marked as a duplicate of bug 590187 ***

