Large dd operations under 2.4.* cause the system to become almost totally
unusable (e.g. netscape takes several minutes to load, ls -lR is extremely
slow, etc.). I don't think this is a VM issue, as vmstat shows minimal swap
activity. Disk starvation is not absolute, as it was prior to 2.2.16, but it's
pretty bad, and this is an easy DoS that any user can trigger. I'd consider this a
security issue since it's really difficult for root to intervene locally or
remotely; of course, on 2.4, dd can also create really big files.
It's not an elevator bug: I'm pretty sure that this is a VM balancing bug, and
yes, it's easily reproducible and obviously needs fixing.
2.4.2-0.1.49 is *much* better overall, but I can still effectively starve the
entire system of disk access with a large dd operation. I can't do this on 2.2;
even with a dd running, ls -lR in the same directory runs, albeit in spurts. The
same test under 2.4 results in ls not doing very much.
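The test described above can be sketched as a short script (file names, sizes,
and the /tmp location are arbitrary assumptions; scale the dd size up to
reproduce the starvation on a real 2.4 box):

```shell
#!/bin/bash
# Hedged reproduction sketch: a large sequential write via dd, with a
# concurrent metadata-heavy ls -lR in the same directory. Under the
# affected 2.4 kernels the ls stalls badly until the dd completes.
mkdir -p /tmp/ddtest
cd /tmp/ddtest

# Large sequential write in the background (increase count to taste):
dd if=/dev/zero of=bigfile bs=1M count=256 &
DD_PID=$!

# While it runs, time an unrelated directory listing in the same place:
time ls -lR . > /dev/null

wait "$DD_PID"
```

On 2.2 the ls completes in spurts while the dd runs; on the broken 2.4
kernels it makes almost no progress.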
Followup: it's not just an IDE issue. I also see disk starvation under 2.4.2-2
on an all-SCSI setup.
1) could you try 2.4.3-5 from rawhide?
2) could you use elvtune to change the defaults of the elevator?
(or use a recent snapshot of powertweak (www.powertweak.org) for that)
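For reference, elvtune on 2.4 is invoked roughly as below (it needs root and
a real block device; /dev/hda and the latency values are illustrative
assumptions, not recommended settings):

```shell
# Show the current elevator settings for a device:
elvtune /dev/hda

# Lower the read/write latency bounds, trading throughput for fairness
# (smaller values mean requests can be passed over fewer times before
# they must be serviced):
elvtune -r 1024 -w 2048 /dev/hda
```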
2.4.3-5 seems no different with respect to the elevator, and it seems to have other issues.
What are suggested fair, low-latency values for elvtune? I couldn't improve
things beyond "slightly less starvation..."
Upgrading to 2.4.5-0.2.9 didn't improve I/O fairness.
2.4.3-12 also appears to be broken.
2.4.5-10 is also broken with respect to dd usage. However, interactivity is
somewhat better when operating on large tarballs than under earlier RH kernels.
Performance seems to be subjectively better in recent (2.4.7, 2.4.8-ac) kernels.
*** Bug 42355 has been marked as a duplicate of this bug. ***