Bug 593035 - mount.nfs: page allocation failure. order:4, mode:0xc0d0
Summary: mount.nfs: page allocation failure. order:4, mode:0xc0d0
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 16
Hardware: All
OS: Linux
Priority: low
Severity: high
Target Milestone: ---
Assignee: Steve Dickson
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Duplicates: 728003
Depends On:
Blocks: 730045
 
Reported: 2010-05-17 16:59 UTC by Orion Poplawski
Modified: 2012-08-07 19:22 UTC (History)
CC: 15 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned to: 730045
Environment:
Last Closed: 2012-07-10 23:01:11 UTC
Type: ---
Embargoed:



Description Orion Poplawski 2010-05-17 16:59:42 UTC
Description of problem:

We've been getting a couple of these a day since moving from F12 to F13:

May 17 08:28:36 lynx kernel: mount.nfs: page allocation failure. order:4, mode:0xc0d0
May 17 08:28:36 lynx kernel: Pid: 16550, comm: mount.nfs Not tainted 2.6.33.3-85.fc13.i686 #1
May 17 08:28:36 lynx kernel: Call Trace:
May 17 08:28:36 lynx kernel: [<c076eab3>] ? printk+0xf/0x14
May 17 08:28:36 lynx kernel: [<c049e669>] __alloc_pages_nodemask+0x43f/0x4b4
May 17 08:28:36 lynx kernel: [<c049e6ed>] __get_free_pages+0xf/0x21
May 17 08:28:36 lynx kernel: [<e8973bd2>] nfs_idmap_new+0x24/0xee [nfs]
May 17 08:28:36 lynx kernel: [<e89510a5>] nfs4_set_client+0xc5/0x205 [nfs]
May 17 08:28:36 lynx kernel: [<e895176e>] nfs4_create_server+0xab/0x2e4 [nfs]
May 17 08:28:36 lynx kernel: [<c04c266d>] ? pcpu_alloc_area+0x22c/0x26b
May 17 08:28:36 lynx kernel: [<c04c215d>] ? pcpu_next_pop+0x28/0x2f
May 17 08:28:36 lynx kernel: [<c04c1fee>] ? cpumask_next+0x12/0x14
May 17 08:28:36 lynx kernel: [<c04c31e4>] ? pcpu_alloc+0x6de/0x6f3
May 17 08:28:36 lynx kernel: [<c04daf6c>] ? alloc_vfsmnt+0x81/0x10d
May 17 08:28:36 lynx kernel: [<e8959614>] nfs4_remote_get_sb+0x84/0x192 [nfs]
May 17 08:28:36 lynx kernel: [<c04ca1c7>] vfs_kern_mount+0x81/0x11a
May 17 08:28:36 lynx kernel: [<e895990a>] nfs_do_root_mount+0x50/0x6c [nfs]
May 17 08:28:36 lynx kernel: [<e8959aec>] nfs4_try_mount+0x42/0x8f [nfs]
May 17 08:28:36 lynx kernel: [<e895a801>] nfs_get_sb+0x638/0x83b [nfs]
May 17 08:28:36 lynx kernel: [<c04c215d>] ? pcpu_next_pop+0x28/0x2f
May 17 08:28:36 lynx kernel: [<c04c1fee>] ? cpumask_next+0x12/0x14
May 17 08:28:36 lynx kernel: [<c04c31e4>] ? pcpu_alloc+0x6de/0x6f3
May 17 08:28:36 lynx kernel: [<c04daf6c>] ? alloc_vfsmnt+0x81/0x10d
May 17 08:28:36 lynx kernel: [<c04c0e3b>] ? __kmalloc_track_caller+0x103/0x10f
May 17 08:28:36 lynx kernel: [<c04daf6c>] ? alloc_vfsmnt+0x81/0x10d
May 17 08:28:36 lynx kernel: [<c04c3212>] ? __alloc_percpu+0xa/0xc
May 17 08:28:36 lynx kernel: [<c04ca1c7>] vfs_kern_mount+0x81/0x11a
May 17 08:28:36 lynx kernel: [<c04ca2a5>] do_kern_mount+0x33/0xbd
May 17 08:28:36 lynx kernel: [<c04db93c>] do_mount+0x67e/0x6dd
May 17 08:28:36 lynx kernel: [<c04da362>] ? copy_mount_options+0x73/0xd2
May 17 08:28:36 lynx kernel: [<c04db9fc>] sys_mount+0x61/0x8f
May 17 08:28:36 lynx kernel: [<c0770b3c>] syscall_call+0x7/0xb
May 17 08:28:36 lynx kernel: Mem-Info:
May 17 08:28:36 lynx kernel: DMA per-cpu:
May 17 08:28:36 lynx kernel: CPU    0: hi:    0, btch:   1 usd:   0
May 17 08:28:36 lynx kernel: Normal per-cpu:
May 17 08:28:36 lynx kernel: CPU    0: hi:  186, btch:  31 usd:   0
May 17 08:28:36 lynx kernel: active_anon:21379 inactive_anon:24221 isolated_anon:0
May 17 08:28:36 lynx kernel: active_file:31339 inactive_file:30454 isolated_file:0
May 17 08:28:36 lynx kernel: unevictable:0 dirty:21 writeback:0 unstable:0
May 17 08:28:36 lynx kernel: free:27019 slab_reclaimable:11400 slab_unreclaimable:7251
May 17 08:28:36 lynx kernel: mapped:6888 shmem:13319 pagetables:1427 bounce:0
May 17 08:28:36 lynx kernel: DMA free:2528kB min:76kB low:92kB high:112kB active_anon:8kB inactive_anon:220kB active_file:4436kB inactive_file:1700kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15804kB mlocked:0kB dirty:0kB writeback:0kB mapped:80kB shmem:0kB slab_reclaimable:136kB slab_unreclaimable:196kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
May 17 08:28:36 lynx kernel: lowmem_reserve[]: 0 609 609 609
May 17 08:28:36 lynx kernel: Normal free:105548kB min:3120kB low:3900kB high:4680kB active_anon:85508kB inactive_anon:96664kB active_file:120920kB inactive_file:120116kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:624324kB mlocked:0kB dirty:84kB writeback:0kB mapped:27472kB shmem:53276kB slab_reclaimable:45464kB slab_unreclaimable:28808kB kernel_stack:2096kB pagetables:5708kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
May 17 08:28:36 lynx kernel: lowmem_reserve[]: 0 0 0 0
May 17 08:28:36 lynx kernel: DMA: 18*4kB 25*8kB 9*16kB 10*32kB 8*64kB 6*128kB 2*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 2528kB
May 17 08:28:36 lynx kernel: Normal: 5751*4kB 8468*8kB 909*16kB 8*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 105548kB
May 17 08:28:36 lynx kernel: 80695 total pagecache pages
May 17 08:28:36 lynx kernel: 5583 pages in swap cache
May 17 08:28:36 lynx kernel: Swap cache stats: add 74262, delete 68679, find 54032/57916
May 17 08:28:36 lynx kernel: Free swap  = 811584kB
May 17 08:28:36 lynx kernel: Total swap = 908280kB
May 17 08:28:36 lynx kernel: 161390 pages RAM
May 17 08:28:36 lynx kernel: 0 pages HighMem
May 17 08:28:36 lynx kernel: 4055 pages reserved
May 17 08:28:36 lynx kernel: 73319 pages shared
May 17 08:28:36 lynx kernel: 89652 pages non-shared
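
For context, order:4 asks the buddy allocator for 2^4 = 16 physically contiguous pages (64 kB with 4 kB pages). The "Normal:" free-list line above shows why the request fails even with ~105 MB free: every free block is 32 kB or smaller. A quick sketch (Python, using the free-list line from this report) of that check:

```python
# Sketch: can the buddy allocator's per-order free lists (as printed
# in the oops above) satisfy an order-4 (64 kB) allocation?
# The "Normal:" line from the report, reproduced as data:
normal = ("5751*4kB 8468*8kB 909*16kB 8*32kB 0*64kB 0*128kB "
          "0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB")

PAGE_KB = 4  # i686 page size

def free_blocks_by_order(line):
    """Parse 'count*sizekB' entries into {order: count}."""
    result = {}
    for entry in line.split():
        count, size = entry.split("*")
        size_kb = int(size.rstrip("kB"))
        order = (size_kb // PAGE_KB).bit_length() - 1  # size = 4kB * 2^order
        result[order] = int(count)
    return result

def can_satisfy(order, free):
    # An order-n request can be met by any free block of order >= n
    # (larger blocks are split); nothing smaller helps.
    return any(free.get(o, 0) > 0 for o in range(order, 11))

free = free_blocks_by_order(normal)
total_kb = sum(count * PAGE_KB * (1 << o) for o, count in free.items())
print(total_kb)              # 105548 -- matches the zone's free:105548kB
print(can_satisfy(4, free))  # False -- no free block of 64 kB or larger
```

This is why the failure appears under memory pressure and uptime: total free memory is plentiful, but physical fragmentation leaves no single 64 kB block for the idmap allocation.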

Version-Release number of selected component (if applicable):
2.6.33.3-85.fc13.i686

Comment 1 Orion Poplawski 2010-05-21 16:53:56 UTC
It looks like this is starting to cause NFS mounts to fail, which is a real problem for us since we automount home and data directories.

Comment 2 Orion Poplawski 2010-09-16 19:34:51 UTC
Now showing up on an x86_64 machine with 2 GB RAM.

2.6.35-3.fc14.x86_64

Comment 3 H.J. Lu 2010-10-23 00:03:29 UTC
I also saw it with kernel-2.6.34.7-59.fc13.x86_64 on
an x86_64 machine with 2 GB RAM.

Comment 4 lejeczek 2011-01-06 10:35:44 UTC
F14
2.6.35.10-74.fc14.x86_64

Could this be the NIC driver?
Take a look at this link: http://linux.derkeiler.com/Mailing-Lists/Kernel/2004-01/0055.html
The thread discusses problems that might relate to this one; a pity, though, since it's such an old problem.

On my system the NIC is an RTL8111/8168B with a fairly large MTU. What about yours?

There are other problems with these drivers too:
https://bugzilla.redhat.com/show_bug.cgi?id=538920

So my system fails this way:

[22121.598093] lowmem_reserve[]: 0 0 0 0
[22121.598095] Node 0 DMA: 2*4kB 0*8kB 1*16kB 1*32kB 2*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15928kB
[22121.598099] Node 0 DMA32: 19231*4kB 0*8kB 0*16kB 1*32kB 1*64kB 1*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 77148kB
[22121.598103] Node 0 Normal: 3777*4kB 1*8kB 1*16kB 1*32kB 1*64kB 0*128kB 0*256kB 0*512kB 1*1024kB 0*2048kB 0*4096kB = 16252kB
[22121.598108] 3260411 total pagecache pages
[22121.598108] 0 pages in swap cache
[22121.598109] Swap cache stats: add 0, delete 0, find 0/0
[22121.598110] Free swap  = 18546684kB
[22121.598111] Total swap = 18546684kB
[22121.598924] 4194303 pages RAM
[22121.598924] 77907 pages reserved
[22121.598924] 1721508 pages shared
[22121.598924] 2372956 pages non-shared
[22121.598924] SLUB: Unable to allocate memory on node 0 (gfp=0x20)
[22121.598924]   cache: kmalloc-8192, object size: 8192, buffer size: 8192, default order: 3, min order: 1
[22121.598924]   node 0: slabs: 14, objs: 56, free: 0
[22121.657142] swapper: page allocation failure. order:1, mode:0x4020
[22121.657144] Pid: 0, comm: swapper Not tainted 2.6.35.10-74.fc14.x86_64 #1
[22121.657145] Call Trace:
[22121.657146]  <IRQ>  [<ffffffff810da3ac>] __alloc_pages_nodemask+0x6fe/0x776
[22121.657151]  [<ffffffff8140d535>] ? tcp_v4_rcv+0x4d8/0x68a
[22121.657154]  [<ffffffff81107a34>] alloc_slab_page+0x48/0x4a
[22121.657156]  [<ffffffff811081eb>] new_slab+0x6d/0x1b6
[22121.657157]  [<ffffffff81108dda>] __slab_alloc+0x1fa/0x3b6
[22121.657160]  [<ffffffff813b71cd>] ? __netdev_alloc_skb+0x34/0x51
[22121.657162]  [<ffffffff8110b33e>] __kmalloc_node_track_caller+0xd3/0x135
[22121.657164]  [<ffffffff813b71cd>] ? __netdev_alloc_skb+0x34/0x51
[22121.657166]  [<ffffffff813b6b4d>] __alloc_skb+0x7c/0x13f
[22121.657168]  [<ffffffff813b71cd>] __netdev_alloc_skb+0x34/0x51
[22121.657173]  [<ffffffffa00b9001>] rtl8169_rx_interrupt.clone.35+0x1be/0x4bd [r8169]
[22121.657175]  [<ffffffff813bc47d>] ? __raw_local_irq_save+0x1b/0x21
[22121.657178]  [<ffffffffa00b983e>] rtl8169_poll+0x39/0x19d [r8169]
[22121.657181]  [<ffffffff81021ff6>] ? apic_write+0x16/0x18
[22121.657183]  [<ffffffff813bfc94>] net_rx_action+0xac/0x1bb
[22121.657186]  [<ffffffffa00b8a9c>] ? rtl8169_interrupt+0x29b/0x33f [r8169]
[22121.657188]  [<ffffffff81053a39>] __do_softirq+0xdd/0x199
[22121.657191]  [<ffffffff8100abdc>] call_softirq+0x1c/0x30
[22121.657192]  [<ffffffff8100c338>] do_softirq+0x46/0x82
[22121.657194]  [<ffffffff81053b99>] irq_exit+0x3b/0x7d
[22121.657196]  [<ffffffff8146fb85>] do_IRQ+0x9d/0xb4
[22121.657198]  [<ffffffff8146a093>] ret_from_intr+0x0/0x11
[22121.657199]  <EOI>  [<ffffffffa042e97f>] ? nfs_fattr_init+0x26/0x30 [nfs]
[22121.657209]  [<ffffffff8128f8fd>] ? raw_local_irq_enable+0xd/0x12
[22121.657211]  [<ffffffff8106b5d8>] ? sched_clock_idle_wakeup_event+0x17/0x1b
[22121.657213]  [<ffffffff8129087b>] acpi_idle_enter_simple+0xd7/0x10d
[22121.657215]  [<ffffffff81394201>] cpuidle_idle_call+0x8b/0xe9
[22121.657218]  [<ffffffff81008325>] cpu_idle+0xaa/0xcc
[22121.657220]  [<ffffffff81462a66>] start_secondary+0x24d/0x28e
[22121.657221] Mem-Info:
[22121.657222] Node 0 DMA per-cpu:
[22121.657223] CPU    0: hi:    0, btch:   1 usd:   0
[22121.657224] CPU    1: hi:    0, btch:   1 usd:   0
[22121.657225] CPU    2: hi:    0, btch:   1 usd:   0
[22121.657226] CPU    3: hi:    0, btch:   1 usd:   0
[22121.657227] CPU    4: hi:    0, btch:   1 usd:   0
[22121.657229] CPU    5: hi:    0, btch:   1 usd:   0
[22121.657229] Node 0 DMA32 per-cpu:
[22121.657231] CPU    0: hi:  186, btch:  31 usd: 161
[22121.657232] CPU    1: hi:  186, btch:  31 usd:  45
[22121.657233] CPU    2: hi:  186, btch:  31 usd:  21
[22121.657234] CPU    3: hi:  186, btch:  31 usd: 178
[22121.657235] CPU    4: hi:  186, btch:  31 usd: 172
[22121.657236] CPU    5: hi:  186, btch:  31 usd: 157
[22121.657237] Node 0 Normal per-cpu:
[22121.657238] CPU    0: hi:  186, btch:  31 usd: 168
[22121.657239] CPU    1: hi:  186, btch:  31 usd:  60
[22121.657240] CPU    2: hi:  186, btch:  31 usd: 177
[22121.657241] CPU    3: hi:  186, btch:  31 usd:  33
[22121.657242] CPU    4: hi:  186, btch:  31 usd: 183
[22121.657244] CPU    5: hi:  186, btch:  31 usd: 133
[22121.657246] active_anon:13595 inactive_anon:16420 isolated_anon:0
[22121.657247]  active_file:200792 inactive_file:3059529 isolated_file:0
[22121.657248]  unevictable:0 dirty:154097 writeback:0 unstable:0
[22121.657248]  free:27234 slab_reclaimable:634392 slab_unreclaimable:70650
[22121.657249]  mapped:3424 shmem:66 pagetables:1201 bounce:0
[22121.657250] Node 0 DMA free:15928kB min:12kB low:12kB high:16kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15296kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
[22121.657255] lowmem_reserve[]: 0 3253 16131 16131
[22121.657257] Node 0 DMA32 free:76896kB min:3276kB low:4092kB high:4912kB active_anon:0kB inactive_anon:15640kB active_file:96416kB inactive_file:2414096kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3331812kB mlocked:0kB dirty:108872kB writeback:0kB mapped:12kB shmem:0kB slab_reclaimable:693504kB slab_unreclaimable:12004kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no


The kernel spits out a lot of this.

Comment 5 Orion Poplawski 2011-01-06 15:29:02 UTC
Seeing it with:

02:05.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5705M Gigabit Ethernet (rev 01)
2.6.34.7-66.fc13.i686

and

09:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5756ME Gigabit Ethernet PCI Express
2.6.35.10-74.fc14.x86_64

MTU is 1500

Comment 6 Jeremy Sanders 2011-03-08 11:40:39 UTC
We're seeing this on 2.6.35.11-83.fc14.x86_64 on an Athlon 64 machine with 1.5GB RAM. The ethernet card is a 
00:14.0 Bridge: nVidia Corporation MCP51 Ethernet Controller (rev a1)
using the forcedeth driver. It's using autofs to mount nfs directories.

Comment 7 Orion Poplawski 2011-03-08 17:05:33 UTC
Still seeing this all the time on the original machine. Just saw it once, under memory pressure, on a Dell T3500 with 6 GB RAM:

05:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5761 Gigabit Ethernet PCIe (rev 10)
2.6.34.7-66.fc13.x86_64

Comment 8 Jeremy Sanders 2011-04-11 08:29:38 UTC
For reference, we also have this in a Scientific Linux 6.0 kernel, so RHEL6 probably has the bug:

mount.nfs: page allocation failure. order:4, mode:0xd0
Pid: 3678, comm: mount.nfs Not tainted 2.6.32-71.18.2.el6.x86_64 #1
Call Trace:
 [<ffffffff8111ea06>] __alloc_pages_nodemask+0x706/0x850
 [<ffffffff811560e2>] kmem_getpages+0x62/0x170
 [<ffffffff81156cfa>] fallback_alloc+0x1ba/0x270
 [<ffffffff8115674f>] ? cache_grow+0x2cf/0x320
 [<ffffffff81156a79>] ____cache_alloc_node+0x99/0x160
 [<ffffffff8115728a>] kmem_cache_alloc_notrace+0xfa/0x130
 [<ffffffffa04d8477>] nfs_idmap_new+0x37/0x160 [nfs]
 [<ffffffff8126093a>] ? strlcpy+0x4a/0x60
 [<ffffffffa04a436e>] nfs4_set_client+0xfe/0x2f0 [nfs]
 [<ffffffffa04a45fa>] ? nfs_alloc_server+0x9a/0x130 [nfs]
 [<ffffffffa04a50e7>] nfs4_create_server+0xc7/0x330 [nfs]
 [<ffffffffa04b0210>] nfs4_remote_get_sb+0xa0/0x2c0 [nfs]
 [<ffffffff8117002b>] vfs_kern_mount+0x7b/0x1b0
 [<ffffffffa04b111f>] nfs_do_root_mount+0x7f/0xb0 [nfs]
 [<ffffffffa04b1262>] nfs4_try_mount+0x52/0xd0 [nfs]
 [<ffffffffa04b1ac2>] nfs_get_sb+0x4a2/0x9e0 [nfs]
 [<ffffffff8117002b>] vfs_kern_mount+0x7b/0x1b0
 [<ffffffff811701d2>] do_kern_mount+0x52/0x130
 [<ffffffff8118df57>] do_mount+0x2e7/0x870
 [<ffffffff8118e570>] sys_mount+0x90/0xe0
 [<ffffffff81013172>] system_call_fastpath+0x16/0x1b
Mem-Info:
Node 0 DMA per-cpu:
CPU    0: hi:    0, btch:   1 usd:   0
CPU    1: hi:    0, btch:   1 usd:   0
Node 0 DMA32 per-cpu:
CPU    0: hi:  186, btch:  31 usd: 172
CPU    1: hi:  186, btch:  31 usd:   0
active_anon:53675 inactive_anon:31656 isolated_anon:0
 active_file:92975 inactive_file:191790 isolated_file:0
 unevictable:0 dirty:44 writeback:0 unstable:0
 free:29340 slab_reclaimable:85537 slab_unreclaimable:17882
 mapped:6401 shmem:4549 pagetables:4421 bounce:0
Node 0 DMA free:8400kB min:332kB low:412kB high:496kB active_anon:0kB inactive_anon:4096kB active_file:2828kB inactive_file:268kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15300kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:104kB slab_unreclaimable:4kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 2003 2003 2003
Node 0 DMA32 free:110572kB min:44720kB low:55900kB high:67080kB active_anon:214700kB inactive_anon:122528kB active_file:369072kB inactive_file:765704kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:2051180kB mlocked:0kB dirty:176kB writeback:0kB mapped:25604kB shmem:18196kB slab_reclaimable:341624kB slab_unreclaimable:71524kB kernel_stack:1640kB pagetables:17684kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
Node 0 DMA: 14*4kB 11*8kB 8*16kB 8*32kB 7*64kB 6*128kB 4*256kB 3*512kB 2*1024kB 1*2048kB 0*4096kB = 8400kB
Node 0 DMA32: 15649*4kB 4162*8kB 599*16kB 62*32kB 6*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 111812kB
288826 total pagecache pages
110 pages in swap cache
Swap cache stats: add 734, delete 624, find 431959/431975
Free swap  = 2102048kB
Total swap = 2104504kB
523984 pages RAM
10380 pages reserved
129331 pages shared
390161 pages non-shared

This is the forcedeth driver with a standard MTU on x86-64 (AMD Athlon 64 X2 6000+ processor), on a system with 2 GB of ECC RAM.

Comment 9 Bug Zapper 2011-06-02 13:54:49 UTC
This message is a reminder that Fedora 13 is nearing its end of life.
Approximately 30 (thirty) days from now Fedora will stop maintaining
and issuing updates for Fedora 13.  It is Fedora's policy to close all
bug reports from releases that are no longer maintained.  At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '13'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 13's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that 
we may not be able to fix it before Fedora 13 is end of life.  If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora please change the 'version' of this 
bug to the applicable version.  If you are unable to change the version, 
please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events.  Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 10 Jeremy Sanders 2011-06-03 09:16:17 UTC
Seen in F14.

Comment 11 Jo Shields 2011-07-07 10:11:43 UTC
This is an upstream kernel bug, not specific to Red Hat's kernels; I'm seeing the problem on Debian 6.0. Networking comes from an "Intel Corporation 82574L Gigabit Network Connection" using e1000e.ko (i.e. I don't think it's a network driver problem, given the above reports with Realtek, NVIDIA, and Broadcom networking).

Comment 12 Josh Boyer 2011-08-26 18:32:56 UTC
There were a number of reports of this upstream, and eventually they added the CONFIG_NFS_USE_NEW_IDMAPPER option to help cope with this:

http://www.spinics.net/lists/linux-nfs/msg22248.html

However, we don't have that option enabled in any of the current Fedora kernels and it takes a bit of userspace coordination.

Any sort of fix on F14 is going to be too invasive, so I'm moving this to F16; hopefully we can get the option and the userspace side worked out before Beta.
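
For reference, the userspace coordination mentioned above is the keyring-based upcall: with CONFIG_NFS_USE_NEW_IDMAPPER the kernel no longer keeps the large in-kernel per-mount idmap cache (the order-4 allocation failing in these traces) and instead asks request-key(8) to run the nfsidmap helper. A sketch of the /etc/request-key.conf entry it relies on (the binary path and 600-second key timeout are the usual nfs-utils defaults; check your distribution's nfsidmap man page):

```
# /etc/request-key.conf -- hand nfs_idmap key requests to userspace;
# %k is the key serial, %d the key description; keys expire after 600 s.
create    id_resolver    *    *    /usr/sbin/nfsidmap -t 600 %k %d
```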

Comment 13 Josh Boyer 2011-08-26 19:01:37 UTC
*** Bug 728003 has been marked as a duplicate of this bug. ***

Comment 14 Roderick Johnstone 2011-10-20 13:40:03 UTC
Re comment 12. Did F16 beta get this fix?

Comment 15 Steve Dickson 2011-11-14 15:31:02 UTC
(In reply to comment #14)
> Re comment 12. Did F16 beta get this fix?

No, although I am working on enabling the new idmapper
upstream and in Rawhide. Once those are up and running
I'll look into backporting the changes to F16.

Comment 16 Ray Van Dolson 2012-03-09 18:25:57 UTC
This is impacting me on RHEL6. I'll open an SR and see if there is a RHEL6-specific bug already open.

Comment 17 Dave Jones 2012-03-22 17:07:21 UTC
[mass update]
kernel-3.3.0-4.fc16 has been pushed to the Fedora 16 stable repository.
Please retest with this update.

Comment 20 Dave Jones 2012-07-10 21:50:08 UTC
Orion, are you still seeing this with all the F16 updates applied?

Comment 21 Orion Poplawski 2012-07-10 22:41:25 UTC
I don't see it in any of our logs going back the standard 4 weeks of rotation.

