Red Hat Bugzilla – Bug 173143
fully-synced raid 1 increases idle load avg without eating cpu
Last modified: 2007-11-30 17:11:17 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.8) Gecko/20051103 Fedora/1.5-0.5.0.rc1 Firefox/1.5
Description of problem:
I noticed that RAID 1 resyncing was slow on kernel 1.1657_FC5 after trying 1660 and 1663, which both froze shortly after zebra started up. I'd have collected stack traces for a bug report, but today's 1665 fixed the freeze, so I won't bother.
The symptom was that RAID resyncing was pinned at the minimum speed set in /proc/sys/dev/raid/speed_limit_min, even on a completely idle system (other than the RAID syncing, of course). Bumping that limit up would speed resyncing up accordingly.
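For context, the md layer throttles resync between those two sysctls: it runs near speed_limit_max when the array is otherwise idle and backs off toward speed_limit_min when other I/O competes; the bug was that it stayed at the floor even on an idle box. A minimal sketch for inspecting the knobs (the 50000 value and variable names are illustrative, not from this report):

```shell
# Show the md resync throttle knobs, in KB/s per device.
raid_dir=/proc/sys/dev/raid
if [ -d "$raid_dir" ]; then
    for knob in speed_limit_min speed_limit_max; do
        printf '%s = %s KB/s\n' "$knob" "$(cat "$raid_dir/$knob")"
    done
else
    echo "no md sysctls on this system"
fi
# Raising the floor (as root) speeds up a resync pinned at the
# minimum, e.g.:
#   echo 50000 > /proc/sys/dev/raid/speed_limit_min
checked=yes
```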
I haven't had a chance to test resyncing on 1665 yet, but I suspect something has changed because, unlike on 1657, the load average is now stuck above N, where N is the number of active RAID 1 devices, all of them fully synced. I couldn't find any oopses in /var/log/messages, and the fans on the affected notebooks are not spinning up, so this higher-than-expected load average is clearly wrong. Nothing is actually eating CPU; it appears to be incorrect accounting. One of the affected boxes, an Athlon64 notebook tracking x86_64 rawhide, has 8 RAID 1 devices, and its load is stuck slightly above 8. An i686 notebook tracking i386 rawhide has 5 RAID 1 devices, and its load is stuck slightly above 5. Neither has any other active RAID devices, so I can't tell whether the problem is exclusive to RAID 1.
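The "load up by N without CPU use" pattern is consistent with how Linux computes load average: tasks in uninterruptible ("D") sleep count toward it even when they burn no CPU, so one stuck kernel thread per array would inflate it by exactly N. A quick sketch for checking that correlation (variable names are illustrative):

```shell
# Compare the 1-minute load average against the number of tasks in
# uninterruptible ("D") sleep; such tasks are counted into the Linux
# load average even though they consume no CPU.
load1=$(cut -d' ' -f1 /proc/loadavg)
dstate=$(ps -eo stat= | grep -c '^D')
echo "1-minute load: ${load1}, tasks in D state: ${dstate}"
```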
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Boot the kernel
Actual Results: Load avg is off by the number of active RAID 1 devices
Expected Results: It should go back to normal
Sorry, filed against wrong component.
I have confirmation that it's the number of active RAID devices that causes the
increased load, and I've found another side effect of this problem: it
prevents swsusp from working. The problem is still present on 1688_FC5,
unfortunately. The symptom is that, when swsusp tries to stop all tasks, it
fails after a few seconds, complains that the RAID-controlling processes
won't stop, and the system resumes activity instead of going to sleep :-(
This was fixed a while before the 2.6.15 release.