FC3 httpd, when sending a test 5GB file on x86-64, uses many small sendfile calls:

...
sendfile(14, 16, [3144408424], 2136806040) = 49232
poll([{fd=14, events=POLLOUT, revents=POLLOUT}], 1, 120000) = 1
sendfile(14, 16, [3144457656], 2136756808) = 49232
poll([{fd=14, events=POLLOUT, revents=POLLOUT}], 1, 120000) = 1
sendfile(14, 16, [3144506888], 2136707576) = 33304
poll([{fd=14, events=POLLOUT, revents=POLLOUT}], 1, 120000) = 1
...

Why doesn't it just use a single sendfile call for the whole file? Or is the same process handling multiple connections at once? vsftpd seems to use much bigger chunks (> 1GB).

Reproducible: Always
It's just an arbitrary limit, which must be < 2GB to be safe, and it's set to a stupidly low value; I've been meaning to change it to 1GB.
Ah, no, I was thinking of something else. httpd *is* trying to sendfile() the whole file in one go, as the arguments in the strace output show. But the socket is non-blocking, of course, so this is expected behaviour: sendfile() returns short each time it would block.
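A minimal sketch of the pattern visible in the strace above (this is not httpd's actual code, and the function name is hypothetical): sendfile() on a non-blocking socket returns short whenever the socket buffer fills, so the caller advances the offset and poll()s for POLLOUT before retrying.

```c
#include <sys/sendfile.h>
#include <poll.h>
#include <errno.h>

/* sock_fd is assumed to be non-blocking; file_fd is a regular file. */
static int send_whole_file(int sock_fd, int file_fd, off_t offset, size_t count)
{
    while (count > 0) {
        ssize_t n = sendfile(sock_fd, file_fd, &offset, count);
        if (n > 0) {
            count -= (size_t)n;          /* short write: retry the remainder */
        } else if (n == 0) {
            return -1;                   /* unexpected EOF on the input file */
        } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
            /* socket buffer full: wait for writability, like the
             * 120-second poll() calls in the trace above */
            struct pollfd pfd = { .fd = sock_fd, .events = POLLOUT };
            if (poll(&pfd, 1, 120000) <= 0)
                return -1;               /* timeout or poll error */
        } else if (errno != EINTR) {
            return -1;                   /* real error */
        }
    }
    return 0;
}
```

Note that sendfile() updates the offset it is passed, which is why the traced calls show steadily increasing offsets and decreasing byte counts rather than repeated identical arguments.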