When /etc/passwd is huge (e.g. 100,000 users or more)*, tar
extractions can be excruciatingly slow. This is because,
although tar caches the last successful uid->username lookup
or username->uid lookup, it does not cache lookups for
nonexistent users. Thus, if /etc/passwd is huge and you have
a tar file containing many files owned by a user that is not
on your system, it will take forever to extract. (It will
scan through the entire /etc/passwd file for each file it
extracts.)
The following patch makes tar extractions go about 100 times
faster in these cases, by caching non-existent user names,
group names, UIDs, and GIDs:
(you should probably revert the "tar-1.12-namecache.patch"
in your Red Hat 6.0 tar RPM before applying this patch)
I have sent this patch to email@example.com, but if it
does not become part of the standard tar distribution, it
would be a good idea if it or something similar was included
in the next release of Red Hat Linux.
Thank you very much,
* - Yes, I know that Linux only supports 65,536 users as
standard. I fixed that, too:
Out of curiosity, does using nscd help at all?
I haven't tried using nscd. Does nscd do any good if you aren't using
NIS?
In any case, I heard back from the maintainers of 'tar'-- the latest
version of GNU tar (1.13) contains a fix similar to mine.
Perhaps you should upgrade tar to v1.13 in the next version of Red Hat
Linux.
nscd might help some on huge passwd files; using db-based
passwd/group files would help more.
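For the db-based approach, the usual setup on Red Hat systems of that
era was glibc's nss_db backend: build Berkeley DB copies of the flat
files and list the db service ahead of files in /etc/nsswitch.conf.
A sketch, assuming the stock /var/db/Makefile is present:

```shell
# Build db-format copies of passwd/group from the flat files
# (Red Hat ships a Makefile for this in /var/db):
cd /var/db && make

# Then in /etc/nsswitch.conf, consult the db files first:
#   passwd: db files
#   group:  db files
```

Lookups then become hashed database probes instead of linear scans,
which helps hits as well as misses.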
tar-1.13 will be in the next rawhide release.