Looks like struct idmap is about 40k on a 64-bit architecture.
I wonder whether it would be possible to make the transition to the new idmapper on rhel6? That would solve a number of problems.
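If I'm reading the 2.6.32-era fs/nfs/idmap.c right, the bulk of those 40k is two hash tables embedded directly in the struct. Roughly (abridged sketch, constants from memory, worth double-checking against the RHEL6 tree):

#define IDMAP_NAMESZ   128
#define IDMAP_HASH_SZ  128

struct idmap_hashent {
        unsigned long  ih_expires;
        __u32          ih_id;
        size_t         ih_namelen;
        char           ih_name[IDMAP_NAMESZ];  /* ~152 bytes/entry on 64-bit */
};

struct idmap_hashtable {
        __u8                 h_type;
        struct idmap_hashent h_entries[IDMAP_HASH_SZ];  /* ~19k per table */
};

struct idmap {
        /* dentry, wait queue, upcall buffer, two mutexes ... */
        struct idmap_hashtable idmap_user_hash;   /* two embedded tables */
        struct idmap_hashtable idmap_group_hash;  /* => ~40k total */
};

nfs_idmap_new() kzalloc()s one of these per nfs_client, which is where the order-4 GFP_KERNEL request in the trace comes from.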
(In reply to comment #3)
> Looks like struct idmap is about 40k on a 64-bit architecture.
>
> I wonder whether it would be possible to make the transition to the new
> idmapper on rhel6? That would solve a number of problems.
I agree... let's see how enabling the new idmapper in upstream
and rawhide pans out... If it goes well, we should consider enabling
the code in 6.3... imho...
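For reference, the new idmapper drops the embedded tables entirely and resolves names through the keys API, upcalling to nfsidmap(8) and caching results in a kernel keyring, so the big per-client allocation goes away. The lookup shape is roughly this (a sketch of the upstream CONFIG_NFS_USE_NEW_IDMAPPER path; the exact key description format here is from memory, treat it as approximate):

/* Sketch only -- the real code is in fs/nfs/idmap.c behind
 * CONFIG_NFS_USE_NEW_IDMAPPER.  Nothing is embedded in nfs_client;
 * each lookup asks the keys subsystem, which upcalls via
 * /sbin/request-key -> /usr/sbin/nfsidmap and caches the result
 * in a kernel keyring. */
struct key *rkey;

/* descriptions look like "uid:user@domain" / "gid:group@domain",
 * per nfsidmap(8) */
rkey = request_key(&key_type_id_resolver, desc, "");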
I believe this is impacting me as well, although the triggering process is mount.nfs rather than mount.nfs4.
Have opened SR #612178 for the issue with Red Hat Support.
Are there any workarounds for this issue currently?
Description of problem:
While running KT1 tests, a connectathon test failed to mount:

mount.nfs4: Cannot allocate memory

The same mount succeeded both before and after the failure. It doesn't appear that the OOM killer was invoked.

mount.nfs4: page allocation failure. order:4, mode:0xd0
CPU: 1 Tainted: G ---------------- T 2.6.32-218.el6.s390x #1
Process mount.nfs4 (pid: 31853, task: 000000001d2ea990, ksp: 000000000199f390)
000000000199f6e8 000000000199f668 0000000000000002 0000000000000000
000000000199f708 000000000199f680 000000000199f680 00000000004cbb50
000000001fe45fb6 0000000000000000 00000000000000d0 0000000000000000
000000000000000d 000000000000000c 000000000199f6d8 0000000000000000
0000000000000000 00000000001051bc 000000000199f668 000000000199f6a8
Call Trace:
([<00000000001050bc>] show_trace+0xe8/0x138)
 [<00000000002066ce>] __alloc_pages_nodemask+0x80a/0xa40
 [<0000000000243b56>] cache_alloc_refill+0x3e2/0x6d8
 [<00000000002443ca>] kmem_cache_alloc_notrace+0xa6/0xf8
 [<000003c002ca7f86>] nfs_idmap_new+0x52/0x184 [nfs]
 [<000003c002c6984c>] nfs4_init_client+0x9c/0x238 [nfs]
 [<000003c002c6a1d0>] nfs_get_client+0x634/0x798 [nfs]
 [<000003c002c6a3ce>] nfs4_set_client+0x9a/0x134 [nfs]
 [<000003c002c6abde>] nfs4_create_server+0xe6/0x378 [nfs]
 [<000003c002c77fc0>] nfs4_remote_get_sb+0xa4/0x2b8 [nfs]
 [<0000000000258444>] vfs_kern_mount+0x74/0x1bc
 [<000003c002c783fa>] nfs_do_root_mount+0x8a/0xc0 [nfs]
 [<000003c002c78a64>] nfs4_try_mount+0x70/0xec [nfs]
 [<000003c002c78de8>] nfs4_get_sb+0x308/0x3b0 [nfs]
 [<0000000000258444>] vfs_kern_mount+0x74/0x1bc
 [<00000000002585f4>] do_kern_mount+0x54/0x128
 [<0000000000278038>] do_mount+0x2fc/0x96c
 [<000000000027874c>] SyS_mount+0xa4/0xf0
 [<000000000011863c>] sysc_tracego+0xe/0x14
 [<000003fffcf8e442>] 0x3fffcf8e442
Mem-Info:
DMA per-cpu:
CPU 0: hi: 186, btch: 31 usd: 156
CPU 1: hi: 186, btch: 31 usd: 0
active_anon:9665 inactive_anon:15099 isolated_anon:0
active_file:19398 inactive_file:50552 isolated_file:0
unevictable:913 dirty:4780 writeback:0 unstable:0
free:4051 slab_reclaimable:9445 slab_unreclaimable:9702
mapped:8599 shmem:36 pagetables:335 bounce:0
DMA free:16204kB min:2876kB low:3592kB high:4312kB active_anon:38660kB inactive_anon:60396kB active_file:77592kB inactive_file:202208kB unevictable:3652kB isolated(anon):0kB isolated(file):0kB present:517120kB mlocked:0kB dirty:19120kB writeback:0kB mapped:34396kB shmem:144kB slab_reclaimable:37780kB slab_unreclaimable:38808kB kernel_stack:2688kB pagetables:1340kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
DMA: 363*4kB 226*8kB 607*16kB 101*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB = 16204kB
71009 total pagecache pages
71 pages in swap cache
Swap cache stats: add 1047, delete 976, find 155/190
Free swap  = 1013240kB
Total swap = 1015800kB
131072 pages RAM
5212 pages reserved
87845 pages shared
71223 pages non-shared
device eth0 left promiscuous mode

Version-Release number of selected component (if applicable):
2.6.32-218.el6.s390x

How reproducible:
Rarely.

Steps to Reproduce:
1. Run the connectathon test on s390x.

Actual results:
The mount fails.

Expected results:
The mount works every time.

Additional info:
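(Side note, not from the original report: the order:4 follows directly from the ~40k struct idmap discussed in comment #3. kmalloc rounds 40k up to the next power-of-two slab, 64k, which needs 16 contiguous 4k pages, i.e. order 4; mode 0xd0 is plain GFP_KERNEL (__GFP_WAIT|__GFP_IO|__GFP_FS) on 2.6.32. A quick userspace sanity check of that arithmetic:

#include <stdio.h>

#define PAGE_SZ   4096u
#define IDMAP_SZ  (40u * 1024u)  /* assumed figure from comment #3 */

int main(void)
{
        unsigned slab = PAGE_SZ;  /* kmalloc uses power-of-two slabs */
        int order = 0;

        while (slab < IDMAP_SZ)   /* round the request up */
                slab <<= 1;

        for (unsigned pages = slab / PAGE_SZ; pages > 1; pages >>= 1)
                order++;          /* order = log2(pages) */

        printf("kmalloc(%u) -> %u-byte slab -> order-%d allocation\n",
               IDMAP_SZ, slab, order);  /* prints order-4, matching the trace */
        return 0;
}

An order-4 request needs 16 physically contiguous pages, and the buddy dump above shows the DMA zone had nothing free above 32kB -- 0*64kB and up -- so the allocation had nowhere to go even with ~16MB free overall.)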