Bug 405201 - Don't remove cached metadata files until network connectivity is ensured
Summary: Don't remove cached metadata files until network connectivity is ensured
Keywords:
Status: CLOSED RAWHIDE
Alias: None
Product: Fedora
Classification: Fedora
Component: yum
Version: rawhide
Hardware: All
OS: Linux
Priority: low
Severity: low
Target Milestone: ---
Assignee: James Antill
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2007-11-29 22:43 UTC by Jesse Keating
Modified: 2014-01-21 23:00 UTC
CC: 5 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2008-01-24 19:55:38 UTC
Type: ---
Embargoed:



Description Jesse Keating 2007-11-29 22:43:03 UTC
In workflows that rely on -C, if the -C is forgotten once, yum will "helpfully"
remove the expired cached metadata, and /then/ go looking for new stuff.  If you
don't have network connectivity this can suck a big one.  Would it be possible
to only remove the cached metadata if new content can be reached?

Comment 1 James Antill 2007-11-29 23:27:38 UTC
 This is very related to the "don't eat the metadata if it's corrupt" issue, and
I've been looking into it on and off for a while now.
 This is roughly what happens:

1. Yum separately requests repomd.xml and primary.sqlite, and maybe
filelists.sqlite and updateinfo.xml.
2. urlgrabber gathers everything it needs, and then truncates the target file
with open(self.filename, 'wb') if it's not in append mode.
3. urlgrabber hits the network and does its thing.
...now failing as you suggest is only useful for repomd.xml; in the other cases
we already know the cached data isn't valid against the current repomd.xml.
 So hacking urlgrabber itself to "fail correctly" is possible, but probably a
bad idea ... and it doesn't solve the much more "normal" problem, where
repomd.xml comes through fine but the other files don't match it.
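 To make the failure mode concrete, here is a minimal sketch of that
truncate-before-fetch pattern (simplified; urlgrabber's real code differs, and
urllib here is only standing in for its transport):

    import urllib.request

    def fetch_unsafe(url, filename):
        # Step 2: the cached copy is destroyed the moment the file is
        # opened for writing -- before any network traffic happens.
        with open(filename, 'wb') as out:
            # Step 3: only now does the network get involved; if this
            # raises (no connectivity), the old metadata is already gone.
            with urllib.request.urlopen(url) as resp:
                out.write(resp.read())

If the fetch raises, the cache is left truncated even though nothing new was
downloaded, which is exactly the forgotten -C failure you describe above.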

 Fixing this probably means treating all of the above as an atomic unit. I'd
initially looked at downloading them all to temp. files and then doing a mass
rename if they all worked ... but that doesn't work out too well due to them
all coming down at different times.
 So atm. I'm looking at keeping backups of a few versions of each, and then if
we hit a problem trying to back out to a correct set (roughly sketched after
the list below) ... but:

1. While this will be really nice when it works ... it's not as easy as just
removing an unlink.
2. No matter how much I've tested it, I'm not dying to put this change in
right before the RHEL-5.2 rebase.
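 Roughly, that back-out idea looks like this (a sketch only: the metadata file
names are real, but backup_set/rollback_set and the ".old" suffix are
hypothetical, and real code would keep several versions, not one):

    import os
    import shutil

    METADATA = ['repomd.xml', 'primary.sqlite', 'filelists.sqlite',
                'updateinfo.xml']

    def backup_set(cachedir):
        # Copy the current, known-consistent set aside before refreshing.
        for name in METADATA:
            path = os.path.join(cachedir, name)
            if os.path.exists(path):
                shutil.copy2(path, path + '.old')

    def rollback_set(cachedir):
        # On any failure, restore the whole set so the cache never mixes
        # files from two different repomd.xml generations.
        for name in METADATA:
            path = os.path.join(cachedir, name)
            if os.path.exists(path + '.old'):
                os.rename(path + '.old', path)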

...if you really care though, we/I can probably hack the repomd.xml
backup/restore in with a tiny amount of code that I'd be happy to have in
soon ... or you could just wait a release :)
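 For reference, that "tiny amount of code" for repomd.xml alone could look
something like this (hypothetical sketch; fetch() stands in for whatever
actually downloads the file):

    import os

    def update_repomd(cachedir, fetch):
        # fetch(path) is assumed to download a fresh repomd.xml to path,
        # raising on any network error.
        path = os.path.join(cachedir, 'repomd.xml')
        have_old = os.path.exists(path)
        if have_old:
            os.rename(path, path + '.old')      # back up instead of unlinking
        try:
            fetch(path)
        except Exception:
            if have_old:
                os.rename(path + '.old', path)  # no network: keep the old copy
            raise
        if have_old:
            os.unlink(path + '.old')            # success: drop the backup

The point is that the unlink only happens once a new copy has actually been
fetched, so losing the network never costs you the cache.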


Comment 2 James Antill 2008-01-24 19:55:38 UTC
 This should hit soon after the 3.2.9 release today.


