Description of problem:
I'm not sure whether this is an Eclipse bug (it could also be the i915 kernel/X driver, GNOME Shell, or something else, since I haven't experienced this problem on another system -- though that one has 8GB RAM; I should monitor its RAM usage too). It happens when using Eclipse. I didn't have such problems in F21 and earlier; this is new in F22.

This is the output of "free -m" when Eclipse is started:
              total        used        free      shared  buff/cache   available
Mem:           3879        1281         462         216        2135        2065
Swap:          4095         283        3812

And the VIRT, RES, SHR columns for eclipse (java) in the top output:
3914856 498008 42304

After using it for some time, I noticed that kswapd is using 100% CPU. The free output shows increased memory usage in the "shared" column (kswapd hits 100% when available memory drops to around 200M or lower; this output is from before reaching that level):
              total        used        free      shared  buff/cache   available
Mem:           3879        1812         306        1078        1760         684
Swap:          4095         316        3779

However, the VIRT, RES and SHR columns in top are not much different:
3949548 695852 35076

If I exit Eclipse, the shared memory usage doesn't drop immediately; after a few seconds (say, 10-15 seconds), the "shared" column in 'free -m' drops back to the initial values. If I keep working in Eclipse while available memory reaches very low values, kswapd starts using 100% CPU and the system becomes very laggy.
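To make the growth easier to track while reproducing this, the "shared" column of `free -m` can be read directly from the kernel's Shmem counter in /proc/meminfo. This is a minimal sketch (nothing Eclipse-specific is assumed); run it in a loop, or under `watch`, while Eclipse is open:

```shell
#!/bin/sh
# Sample the kernel's Shmem counter from /proc/meminfo -- this is the value
# behind the "shared" column of `free -m`, reported in kB. Watching it grow
# while Eclipse runs (and drop ~10-15s after exit) matches the symptom above.
shmem_kb=$(awk '/^Shmem:/ {print $2}' /proc/meminfo)
echo "Shmem: ${shmem_kb} kB"
```

For example, `watch -n1 'awk "/^Shmem:/ {print \$2}" /proc/meminfo'` shows the counter once per second.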
In extreme cases, I've seen such reports in the kernel logs:
===========================================================================
hvlap.hvnet kernel: NOHZ: local_softirq_pending 282
hvlap.hvnet kernel: gnome-shell: page allocation failure: order:0, mode:0xa00d4
hvlap.hvnet kernel: CPU: 1 PID: 1590 Comm: gnome-shell Tainted: G OE 4.0.4-303.fc22.x86_64 #1
hvlap.hvnet kernel: Hardware name: LENOVO 7675KC1/7675KC1, BIOS 7NETB2WW (2.12 ) 04/18/2008
hvlap.hvnet kernel: 0000000000000000 00000000ea41e661 ffff8800b95634c8 ffffffff81783124
hvlap.hvnet kernel: 0000000000000000 00000000000a00d4 ffff8800b9563558 ffffffff811a5d8e
hvlap.hvnet kernel: ffff88013bfeab08 0000000000000000 ffffffff00000000 ffff88013bfeab00
hvlap.hvnet kernel: Call Trace:
hvlap.hvnet kernel: [<ffffffff81783124>] dump_stack+0x45/0x57
hvlap.hvnet kernel: [<ffffffff811a5d8e>] warn_alloc_failed+0xfe/0x170
hvlap.hvnet kernel: [<ffffffff811a9ce7>] __alloc_pages_nodemask+0x537/0xa10
hvlap.hvnet kernel: [<ffffffff811a8ebf>] ? get_page_from_freelist+0x2bf/0xab0
hvlap.hvnet kernel: [<ffffffff811f4798>] alloc_pages_vma+0xb8/0x230
hvlap.hvnet kernel: [<ffffffff8119f39e>] ? find_get_entry+0x1e/0xf0
hvlap.hvnet kernel: [<ffffffff811e54ad>] read_swap_cache_async+0xed/0x160
hvlap.hvnet kernel: [<ffffffff811e5635>] swapin_readahead+0x115/0x1e0
hvlap.hvnet kernel: [<ffffffff811ba92d>] shmem_swapin+0x6d/0xc0
hvlap.hvnet kernel: [<ffffffff81102e1d>] ? internal_add_timer+0x8d/0xc0
hvlap.hvnet kernel: [<ffffffff813a1a42>] ? radix_tree_lookup_slot+0x22/0x50
hvlap.hvnet kernel: [<ffffffff8119f39e>] ? find_get_entry+0x1e/0xf0
hvlap.hvnet kernel: [<ffffffff8119fcdd>] ? pagecache_get_page+0x2d/0x1e0
hvlap.hvnet kernel: [<ffffffff8119f39e>] ? find_get_entry+0x1e/0xf0
hvlap.hvnet kernel: [<ffffffff811bb539>] shmem_getpage_gfp+0x569/0x860
hvlap.hvnet kernel: [<ffffffff811bb8d0>] shmem_read_mapping_page_gfp+0x40/0x80
hvlap.hvnet kernel: [<ffffffffa019169f>] i915_gem_object_get_pages_gtt+0x30f/0x410 [i915]
hvlap.hvnet kernel: [<ffffffffa018bd57>] i915_gem_object_get_pages+0x57/0xc0 [i915]
hvlap.hvnet kernel: [<ffffffffa0191b8b>] i915_gem_object_pin_view+0x31b/0x8d0 [i915]
hvlap.hvnet kernel: [<ffffffffa018388f>] i915_gem_execbuffer_reserve_vma.isra.16+0x6f/0x100 [i915]
hvlap.hvnet kernel: [<ffffffffa0183c40>] i915_gem_execbuffer_reserve+0x320/0x390 [i915]
hvlap.hvnet kernel: [<ffffffffa0184728>] i915_gem_do_execbuffer.isra.22+0x718/0x1100 [i915]
hvlap.hvnet kernel: [<ffffffff81786fd6>] ? mutex_lock_interruptible+0x16/0x50
hvlap.hvnet kernel: [<ffffffffa0097c0d>] ? drm_gem_object_lookup+0x3d/0xb0 [drm]
hvlap.hvnet kernel: [<ffffffff811fee09>] ? __kmalloc+0x1d9/0x2a0
hvlap.hvnet kernel: [<ffffffffa0186342>] i915_gem_execbuffer2+0xb2/0x2b0 [i915]
hvlap.hvnet kernel: [<ffffffffa0098a5b>] drm_ioctl+0x1db/0x640 [drm]
hvlap.hvnet kernel: [<ffffffffa0186290>] ? i915_gem_execbuffer+0x440/0x440 [i915]
hvlap.hvnet kernel: [<ffffffff810ce9c6>] ? __dequeue_entity+0x26/0x40
hvlap.hvnet kernel: [<ffffffff81232046>] do_vfs_ioctl+0x2c6/0x4d0
hvlap.hvnet kernel: [<ffffffff81063d01>] ? __do_page_fault+0x161/0x440
hvlap.hvnet kernel: [<ffffffff81784f8c>] ? __schedule+0x2fc/0x970
hvlap.hvnet kernel: [<ffffffff812322d1>] SyS_ioctl+0x81/0xa0
hvlap.hvnet kernel: [<ffffffff81789749>] system_call_fastpath+0x12/0x17
hvlap.hvnet kernel: Mem-Info:
hvlap.hvnet kernel: Node 0 DMA per-cpu:
hvlap.hvnet kernel: CPU 0: hi: 0, btch: 1 usd: 0
hvlap.hvnet kernel: CPU 1: hi: 0, btch: 1 usd: 0
hvlap.hvnet kernel: Node 0 DMA32 per-cpu:
hvlap.hvnet kernel: CPU 0: hi: 186, btch: 31 usd: 0
hvlap.hvnet kernel: CPU 1: hi: 186, btch: 31 usd: 34
hvlap.hvnet kernel: Node 0 Normal per-cpu:
hvlap.hvnet kernel: CPU 0: hi: 186, btch: 31 usd: 0
hvlap.hvnet kernel: CPU 1: hi: 186, btch: 31 usd: 0
hvlap.hvnet kernel: active_anon:403373 inactive_anon:403935 isolated_anon:0 active_file:34828 inactive_file:30606 isolated_file:0 unevictable:4 dirty:0 writeback:0 unstable:0 free:69181 slab_reclaimable:10043 slab_unreclaimable:12956 mapped:74917 shmem:393914 pagetables:13246 bounce:0 free_cma:0
hvlap.hvnet kernel: Node 0 DMA free:12100kB min:268kB low:332kB high:400kB active_anon:708kB inactive_anon:3064kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15984kB managed:15900kB mlocked:0kB dirty:0kB writeback:0kB mapped:708kB shmem:3772kB slab_reclaimable:0kB slab_unreclaimable:12kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:24512 all_unreclaimable? yes
hvlap.hvnet kernel: lowmem_reserve[]: 0 2966 3861 3861
hvlap.hvnet kernel: Node 0 DMA32 free:51620kB min:51712kB low:64640kB high:77568kB active_anon:1421864kB inactive_anon:1421020kB active_file:1064kB inactive_file:128kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3119808kB managed:3040436kB mlocked:0kB dirty:0kB writeback:0kB mapped:150784kB shmem:1570788kB slab_reclaimable:28568kB slab_unreclaimable:38624kB kernel_stack:5792kB pagetables:39540kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:17377912 all_unreclaimable? yes
hvlap.hvnet kernel: lowmem_reserve[]: 0 0 894 894
hvlap.hvnet kernel: Node 0 Normal free:213004kB min:15600kB low:19500kB high:23400kB active_anon:190920kB inactive_anon:191656kB active_file:138248kB inactive_file:122296kB unevictable:16kB isolated(anon):0kB isolated(file):0kB present:983040kB managed:916416kB mlocked:16kB dirty:0kB writeback:0kB mapped:148176kB shmem:1096kB slab_reclaimable:11604kB slab_unreclaimable:13188kB kernel_stack:2624kB pagetables:13444kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
hvlap.hvnet kernel: lowmem_reserve[]: 0 0 0 0
hvlap.hvnet kernel: Node 0 DMA: 3*4kB (UM) 3*8kB (UEM) 4*16kB (UM) 3*32kB (UM) 4*64kB (UEM) 1*128kB (M) 3*256kB (UEM) 1*512kB (M) 2*1024kB (UM) 2*2048kB (MR) 1*4096kB (M) = 12100kB
hvlap.hvnet kernel: Node 0 DMA32: 6913*4kB (UEM) 1404*8kB (EM) 412*16kB (EM) 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB (R) 1*4096kB (R) = 51620kB
hvlap.hvnet kernel: Node 0 Normal: 159*4kB (UEM) 1186*8kB (UEM) 1704*16kB (UEM) 656*32kB (UEM) 360*64kB (UEM) 140*128kB (UEM) 80*256kB (EM) 46*512kB (EM) 28*1024kB (EM) 8*2048kB (M) 6*4096kB (MR) = 213004kB
hvlap.hvnet kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
hvlap.hvnet kernel: 462424 total pagecache pages
hvlap.hvnet kernel: 3044 pages in swap cache
hvlap.hvnet kernel: Swap cache stats: add 161570, delete 158526, find 56455/73712
hvlap.hvnet kernel: Free swap = 4007424kB
hvlap.hvnet kernel: Total swap = 4194300kB
hvlap.hvnet kernel: 1029708 pages RAM
hvlap.hvnet kernel: 0 pages HighMem/MovableOnly
hvlap.hvnet kernel: 36520 pages reserved
hvlap.hvnet kernel: 0 pages hwpoisoned
===========================================================================

Version-Release number of selected component (if applicable):
Kernel: 0.5-300.fc22.x86_64
eclipse-platform-4.4.2-6.fc22.x86_64
gnome-shell-3.16.2-1.fc22.x86_64

How reproducible:
100%, after using eclipse for some time.
1. I checked the other system. On that system, the value in the "shared" column doesn't increase like it does on mine; it stays around 200-300M. So I guess the issue might be related to the Intel graphics driver.

2. While I was thinking about switching back to F21 because of the Eclipse problems, I realized that I can run Eclipse in GTK2 mode. Since I started using GTK2 mode, the above problem has not happened anymore.

3. I always felt that F22 was much slower on my system than F21 when using Eclipse. Now I can confirm that this was also because of the GTK3 SWT backend. Eclipse now runs normally; with GTK3, Eclipse's CPU usage was always high (even if I just switched to the Eclipse window and moved the cursor slightly) and it ran with a lot of lag. With GTK2, everything is OK now, and my laptop doesn't run hot anymore.

4. There are a number of bugs in the current GTK3 backend. Mylyn has a GTK3 bug: you can't see the status (e.g. unread, new) of issues. There are also coloring issues. Sometimes the Ctrl/Shift/Alt/etc. keys stop working in the editor until you switch editors or, in some cases, open a dialog box (e.g. Preferences) and close it again (IIRC this may have happened with GTK2 too, but it was very rare; with GTK3 it happens a lot).

Considering all these issues, I've switched to the GTK2 backend on all my systems. You might consider switching back to GTK2 as the default backend too.
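For anyone who wants to try the same workaround: SWT honors the SWT_GTK3 environment variable, so the GTK2 backend can be selected per launch. A sketch (the `eclipse` launcher path is just an example for your install):

```shell
#!/bin/sh
# Force the GTK2 SWT backend for this launch only. SWT_GTK3=0 is the switch
# SWT checks at startup; "eclipse" stands in for your actual launcher path.
export SWT_GTK3=0
echo "SWT_GTK3=$SWT_GTK3"
# eclipse &
```

A persistent alternative is adding `--launcher.GTK_version` and `2` as two lines in eclipse.ini, before `-vmargs`.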
This is a weird issue. There are plans to update the F22 Eclipse package to the 4.5 release, as it fixes many issues with GTK3. Switching back to GTK2 is not really an option, since it comes at a costly price (regular crashes) when using the SWT Browser component.
Fortunately, the problem seems to be fixed with the latest updates, including the Eclipse Mars release (I installed all the updates together, so I don't know which one was actually responsible for the fix).