Red Hat Bugzilla – Bug 121485
(VM)filesystem access slow, bad inode cache management?
Last modified: 2007-11-30 17:10:41 EST
The problem I've had on 5 different boxes was slow filesystem I/O.
stracing rsync showed lstat64 system calls issued by rsync processes
would take forever to complete (quite often 5-10 seconds); then a
burst of progress was made, and then it came to a halt again. Hard
disk LEDs confirmed this access pattern, except for one of the boxes
that had two disks in RAID 1 resyncing, whose disk access was
constant. Very odd.
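The stall pattern above can be observed without strace: a minimal sketch that walks a tree and times each lstat() call, flagging the ones that stall. The function name and threshold are hypothetical, standing in for the per-file metadata lookups rsync and updatedb perform internally:

```python
import os
import time

def time_lstats(root, slow_threshold=1.0):
    """Walk `root`, timing each os.lstat() call; return a list of
    (path, seconds) pairs for calls slower than `slow_threshold`."""
    slow = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            start = time.monotonic()
            try:
                os.lstat(path)
            except OSError:
                continue  # entry vanished mid-walk
            elapsed = time.monotonic() - start
            if elapsed >= slow_threshold:
                slow.append((path, elapsed))
    return slow
```

On a healthy system with a warm inode cache this returns an empty list; under the contention described here it would report individual lstat() calls in the multi-second range.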
It might have to do with the system running out of real memory, while
still having plenty of mostly-unused swap. In all cases, there was
competition for disk access between rsync (sometimes more than one)
and either prelink or updatedb. In at least one case, after updatedb
completed rsync became very fast.
I noticed this, too, today.
updatedb ran in bursts, stopping in an lstat64() call for several
seconds. vmstat showed almost the entire processor time spent in
iowait. iostat showed no IO access to the disks (apart from some
smallish writes/reads now and then). The system was very sluggish
during that time; even small programs requiring disk access (like
"sudo kill <foo>") took several seconds to run.
After killing updatedb, everything went back to normal.
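The iowait figure vmstat reports comes from the aggregate "cpu" line in /proc/stat, where iowait is the fifth numeric field (see proc(5)). A minimal sketch of computing the iowait fraction from such a line (the function name is hypothetical):

```python
def iowait_fraction(stat_line):
    """Given the aggregate 'cpu' line from /proc/stat, return the
    fraction of CPU ticks spent in iowait.

    Field order per proc(5):
    cpu user nice system idle iowait irq softirq ..."""
    fields = stat_line.split()
    if fields[0] != "cpu":
        raise ValueError("expected the aggregate 'cpu' line")
    ticks = [int(x) for x in fields[1:]]
    return ticks[4] / sum(ticks)  # index 4 = iowait
```

Sampling this twice and differencing the tick counts gives the same per-interval percentage vmstat prints; a value near 1.0 with little actual disk throughput (as iostat showed here) is the signature of processes blocked on slow metadata I/O.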
kernel is kernel-2.6.5-1.339.i686 on an AMD Duron
This should be fixed in the 349 kernel on