When /etc/passwd is huge (e.g. 100,000 users or more)*, tar extractions can be excruciatingly slow. Although tar caches the last successful uid->username or username->uid lookup, it does not cache lookups for nonexistent names. So if /etc/passwd is huge and a tar file contains many files owned by a user that does not exist on your system, extraction takes forever: tar scans the entire /etc/passwd file for every file it extracts.

The following patch makes tar extractions go about 100 times faster in these cases, by also caching nonexistent user names, group names, UIDs, and GIDs:

http://www.engin.umich.edu/caen/systems/Linux/code/patches/tar-1.12-faster.patch

(You should probably revert the "tar-1.12-namecache.patch" in your Red Hat 6.0 tar RPM before applying this patch.)

I have sent this patch to tar-bugs.mit.edu, but if it does not become part of the standard tar distribution, it would be a good idea to include it, or something similar, in the next release of Red Hat Linux.

Thank you very much,

Chris Wing
wingc.edu

* Yes, I know that Linux only supports 65,536 users standard. I fixed that, too:
http://www.engin.umich.edu/caen/systems/Linux/highuids
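To illustrate the idea, here is a minimal sketch in C of negative-result caching for the name->uid direction. This is not the actual patch; the helper name and the fixed-size buffer are my own invention, and a real implementation would handle arbitrary-length names and cover group names, UIDs, and GIDs the same way. The point is simply that a failed getpwnam() result is remembered, so repeated lookups of the same missing name return immediately instead of rescanning /etc/passwd each time:

    #include <pwd.h>
    #include <string.h>
    #include <sys/types.h>

    /* Cache the most recent lookup, including failures, so that
       repeated lookups of the same nonexistent name do not trigger
       another scan of /etc/passwd. */
    static char  cached_name[64];
    static int   cached_valid;   /* nonzero once cached_name is set   */
    static int   cached_found;   /* nonzero if the cached name exists */
    static uid_t cached_uid;

    /* Returns nonzero and stores the uid in *uid if NAME exists. */
    int uname_to_uid_cached(const char *name, uid_t *uid)
    {
        struct passwd *pw;

        if (cached_valid && strcmp(name, cached_name) == 0) {
            if (cached_found)
                *uid = cached_uid;
            return cached_found;
        }

        strncpy(cached_name, name, sizeof cached_name - 1);
        cached_name[sizeof cached_name - 1] = '\0';
        cached_valid = 1;

        pw = getpwnam(name);
        cached_found = (pw != NULL);
        if (cached_found) {
            cached_uid = pw->pw_uid;
            *uid = cached_uid;
        }
        return cached_found;
    }

Because tar archives typically contain many files owned by the same few users, even a one-entry cache like this eliminates nearly all of the redundant scans once misses are cached alongside hits.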
Out of curiosity, does using nscd help at all?
I haven't tried using nscd. Does nscd do any good if you aren't using NIS? In any case, I heard back from the maintainers of tar: the latest version of GNU tar (1.13) contains a fix similar to mine. Perhaps you should upgrade tar to 1.13 in the next version of Red Hat Linux.
nscd might help somewhat with huge passwd files; using db-based passwd/group files would help more. tar-1.13 will be in the next rawhide release.
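For reference, db-based lookups on glibc systems are configured in /etc/nsswitch.conf; something like the following (a sketch, assuming the nss_db module is installed) tells the resolver to consult the indexed db files before falling back to the flat files:

    passwd: db files
    group:  db files

The db files are typically rebuilt from the flat files by running make in /var/db, though this varies by setup. The win over flat files is that a db lookup is an indexed fetch rather than a linear scan, which is what makes it faster than nscd alone on very large passwd files.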