If up2date is interrupted or otherwise ends up with a corrupt ".hdr" file,
it gets permanent indigestion. Every further use of it blows up (with
diagnostics going to the console and no apparent information to the user -
which is also bad) when the gzip library fails to handle the .hdr file.
It ought, I imagine, to at least catch the gzip error and purge the
relevant cache entry, and preferably then re-fetch it and retry. Even just
purging it, giving a sane error, and letting the user rerun would help.
Raised to security since it seems I can cause this simply by hijacking
DNS or being a hostile package supplier. If you think about the 'break
into mirror, install broken .hdr file and cripple every user of the
mirror' case, it's not pretty...
Uh oh... is Bug #121090 a case of this?
Bug #121090 does look like a dup of this.
The easy fix is probably to catch the exception and notify the user
of the problem -- *including* the filename that caused it to barf! --
rather than trying to figure out a safe way to delete a file in
response to an exception, or to try to recover silently.
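That "easy fix" - catch the exception and name the file, without trying to
delete anything - is a few lines. Again a sketch only, with a hypothetical
function name and error-reporting convention, not the real up2date code:

```python
import gzip

def load_header_or_report(path):
    """Load a .hdr file; if gzip chokes, exit with a message that
    *includes the offending filename* instead of a raw traceback."""
    try:
        with gzip.open(path, "rb") as f:
            return f.read()
    except (OSError, EOFError) as e:
        # No silent recovery, no deletion: just tell the user which
        # file is bad so they can purge it and rerun.
        raise SystemExit(
            "error loading header %r (%s); delete it and rerun" % (path, e))
```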
Granted I've not run into this since somewhere around FC2-ish, but
then tracking Rawhide tends to develop automated avoidance routines
like periodically purging the .hdr cache manually.
Reproduced this with yum - still present in FC3.
Need an example of a corrupt header. Changes made some time in FC3
should eliminate getting stuck on gzip errors with bogus headers
(in up2date; no idea about yum).
Current versions should give an error about failing to load the
header and then automatically redownload it. RHEL4/FC3
should do this.
If you see otherwise, get me a copy of the traceback
and the bogus header (or if need be, a tar of all of
*** Bug 121090 has been marked as a duplicate of this bug. ***