Regarding -C usage: if the -C is forgotten, yum will "helpfully" remove the cached expired metadata and /then/ go looking for new content. If you don't have network connectivity this can suck a big one. Would it be possible to only remove the cached metadata once new content can actually be reached?
This is closely related to "don't eat the metadata if it's corrupt", and I've been looking into it on and off for a while now. This is roughly what happens:

1. Yum separately requests repomd.xml, primary.sqlite, and possibly filelists.sqlite and updateinfo.xml.
2. urlgrabber gathers the information it needs and then truncates the destination file with open(self.filename, 'wb'), if it's not in append mode.
3. urlgrabber hits the network and does its thing.

Now, failing as you suggest is only useful for repomd.xml; in the other cases we already know the current data isn't valid against the current repomd.xml. So hacking urlgrabber itself to "fail correctly" is possible, but probably a bad idea ... and it doesn't solve the much more common problem, where repomd.xml comes through fine but the other files don't match it.

Fixing this properly basically means treating all of the above as an atomic unit. I'd initially looked at downloading them all to temp. files and then mass-renaming them if they all worked (roughly the first sketch below), but that doesn't work out too well because they all come down at different times. So at the moment I'm looking at keeping backups of a few versions of each file and, if we hit a problem, backing out to a consistent set (roughly the second sketch below) ... but:

1. While this will be really nice when it works, it's not as easy as just removing an unlink.
2. No matter how much I've tested it, I'm not dying to put this change in before the RHEL-5.2 rebase.

If you really care, though, we/I can probably hack the repomd.xml backup/restore in with a tiny amount of code that I'd be happy to have in soon ... or you could just wait a release :)
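To make the "temp. files plus mass rename" idea concrete, here is a minimal sketch. This is not yum's code: the URLs, cache paths and helper name are made up, and urllib is standing in for urlgrabber.

import os
import shutil
import tempfile
import urllib.request   # stand-in for urlgrabber in this sketch

# Hypothetical (url, cache path) pairs for one repo's metadata set.
METADATA = [
    ("http://example.com/repodata/repomd.xml",       "/var/cache/yum/repo/repomd.xml"),
    ("http://example.com/repodata/primary.sqlite",   "/var/cache/yum/repo/primary.sqlite"),
    ("http://example.com/repodata/filelists.sqlite", "/var/cache/yum/repo/filelists.sqlite"),
]

def fetch_metadata_atomically(items):
    # Download every file to a temp. name first; only when *all* of them
    # arrive do we rename them over the cached copies.  A failure part-way
    # through leaves the existing cache untouched.
    staged = []
    try:
        for url, dest in items:
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest))
            os.close(fd)
            staged.append((tmp, dest))
            urllib.request.urlretrieve(url, tmp)   # the network fetch
    except Exception:
        for tmp, _dest in staged:
            os.unlink(tmp)                         # throw away the partial set
        raise
    for tmp, dest in staged:
        shutil.move(tmp, dest)                     # swap the whole set in

The problem, as noted above, is that yum never has the whole set in hand at one point like this: repomd.xml has to come down and be parsed before the other files are even requested, so there's no single place to do the mass rename.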
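And roughly what the backup/back-out idea looks like. Again this is only a sketch under assumptions of mine: the rotation scheme, the function names and the number of generations are illustrative, not what will actually land in yum.

import os
import shutil

BACKUP_GENERATIONS = 2          # how many old copies to keep; the number is a guess

def backup(path):
    # Rotate path -> path.1 -> path.2 before a download overwrites it,
    # so an older known-good copy survives a bad fetch.
    if not os.path.exists(path):
        return
    for n in range(BACKUP_GENERATIONS, 1, -1):
        older = "%s.%d" % (path, n - 1)
        if os.path.exists(older):
            shutil.copy2(older, "%s.%d" % (path, n))
    shutil.copy2(path, path + ".1")

def restore_set(paths, generation=1):
    # Back out *every* file in the set to the same generation, so that
    # repomd.xml and the files it references stay consistent with each
    # other rather than being rolled back piecemeal.
    for path in paths:
        old = "%s.%d" % (path, generation)
        if os.path.exists(old):
            shutil.copy2(old, path)

The point of restore_set() is the "correct set" part: if primary.sqlite turns out not to match the freshly downloaded repomd.xml, everything gets rolled back together, so yum -C still has a consistent cache to work from.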
This should hit soon after the 3.2.9 release today.