Description of problem: I am running 3.7.0-0.rc4.git1.2.fc19.i686.PAE and am seeing kswapd0 take up about 97 to 99 percent of a CPU. There are two CPUs on the machine. This happens while running yum and rsync. I have a 10 GiB swap partition, but according to top it isn't actually being used. (Eventually a small amount was used.) When yum stops or pauses for confirmation, kswapd0's CPU usage drops to almost nothing. When it restarts, its CPU usage climbs back up within about a minute.

top - 08:40:03 up 40 min,  7 users,  load average: 2,12, 1,68, 2,52
Tasks: 200 total,   3 running, 196 sleeping,   0 stopped,   1 zombie
%Cpu0  :  5,6 us, 81,6 sy,  0,0 ni,  1,3 id, 10,5 wa,  0,0 hi,  1,0 si,  0,0 st
%Cpu1  : 14,5 us, 39,4 sy,  0,0 ni,  0,0 id, 44,8 wa,  0,0 hi,  1,3 si,  0,0 st
KiB Mem:   2065708 total,  1988992 used,    76716 free,    34848 buffers
KiB Swap: 10482612 total,     1464 used, 10481148 free,  1220156 cached
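A rough way to quantify how busy kswapd0 actually is between samples (these are my own diagnostic commands, not part of the original report) is to read its accumulated CPU time from /proc and take a delta over an interval:

```shell
# cpu_ticks <pid>: accumulated utime+stime in clock ticks (usually
# 100 per second) from fields 14 and 15 of /proc/<pid>/stat.
cpu_ticks() { awk '{print $14 + $15}' "/proc/$1/stat"; }

# Watch kswapd0 for 5 seconds; pgrep -o picks the oldest match.
# (kswapd0 is a kernel thread and is not visible inside a PID
# namespace, hence the guard.)
pid=$(pgrep -o kswapd0 || true)
if [ -n "$pid" ]; then
    t1=$(cpu_ticks "$pid"); sleep 5; t2=$(cpu_ticks "$pid")
    echo "kswapd0: $((t2 - t1)) ticks in 5s (500 = one CPU fully busy)"
fi
```

At a typical 100 Hz tick rate, roughly 490 ticks over 5 seconds would match the 97-99% figure reported above.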
I found this in my logs that might be related:

Nov  9 23:21:41 bruno kernel: [ 1516.032006] BUG: soft lockup - CPU#0 stuck for 22s! [kswapd0:30]

That was with 3.7.0-0.rc4.git0.1.fc19.i686.PAE. This morning I am trying 3.7.0-0.rc4.git2.2.fc19.i686.PAE, and so far I am not seeing kswapd0 go wild while running rpm. I'll test yum again in a bit. The system is an Athlon MP with two processors. My file systems are ext4 on top of LUKS-encrypted devices on top of software RAID 1 devices. (/boot isn't encrypted.)
Eventually I did see kswapd0 hit 90+% usage while running yum. I'm going to try using 3.6.6-4 for a bit and make sure kswapd0 isn't doing that with 3.6 kernels.
I am still seeing this with 3.7.0-0.rc4.git3.2.fc19.i686.PAE.
I am not seeing as much slowdown in the system (at least so far) as I did with some of the earlier kernels, where yum updates ground to a halt, so there may have been some other problem that was really giving me grief. It still seems odd, though, that kswapd0 goes CPU bound for an extended period when it doesn't look like there is significant swapping going on. I haven't noticed that happening with 3.6 kernels.
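One way to check the "no significant swapping" observation above (again my own commands, not from the report): the counters in /proc/vmstat are cumulative, so the delta of pswpout over an interval gives the actual swap-out rate while kswapd0 is busy.

```shell
# Pages swapped out over a 5-second window; a near-zero delta
# alongside a CPU-bound kswapd0 would match the behaviour described.
before=$(awk '/^pswpout/ {print $2}' /proc/vmstat)
sleep 5
after=$(awk '/^pswpout/ {print $2}' /proc/vmstat)
echo "pages swapped out in 5s: $((after - before))"
```

The pgscan_kswapd* and pgsteal* lines in the same file show whether kswapd0 is scanning and reclaiming pages at all, which can distinguish useless spinning from heavy cache reclaim that never touches swap.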
This is probably a duplicate of bug 866988.

*** This bug has been marked as a duplicate of bug 866988 ***