Bug 405201 - Don't remove cached metadata files until network connectivity is ensured
Status: CLOSED RAWHIDE
Product: Fedora
Classification: Fedora
Component: yum
Version: rawhide
Hardware: All
OS: Linux
Priority: low
Severity: low
Assigned To: James Antill
QA Contact: Fedora Extras Quality Assurance
Reported: 2007-11-29 17:43 EST by Jesse Keating
Modified: 2014-01-21 18:00 EST
CC: 5 users

Doc Type: Bug Fix
Last Closed: 2008-01-24 14:55:38 EST


Attachments: None
Description Jesse Keating 2007-11-29 17:43:03 EST
In the case of cache-only (-C) usage, if the -C is forgotten, yum will
"helpfully" remove the expired cached metadata, and /then/ go looking for new
content.  If you don't have network connectivity, you're left with no usable
metadata at all.  Would it be possible to only remove the cached metadata if
new content can be reached?
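 A minimal sketch of the behaviour being asked for, in modern Python
(refresh_metadata() and the paths are hypothetical, and yum itself routes
downloads through urlgrabber): fetch to a temporary file and only replace the
cached copy once the download has fully succeeded.

  import os
  import tempfile
  import urllib.request

  def refresh_metadata(url, cache_path):
      # Write the new copy to a temp file; the old cache stays untouched.
      fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(cache_path))
      try:
          with os.fdopen(fd, 'wb') as tmp, urllib.request.urlopen(url) as resp:
              tmp.write(resp.read())
      except OSError:
          os.unlink(tmp_path)  # network failed: keep the stale cache
          raise
      os.replace(tmp_path, cache_path)  # swap in only after a full download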
Comment 1 James Antill 2007-11-29 18:27:38 EST
 This is closely related to "don't eat the metadata if it's corrupt", and I've
been looking into it on and off for a while now.
 This is roughly what happens (a toy sketch follows the list):

1. Yum separately requests repomd.xml and primary.sqlite, and maybe
filelists.sqlite and updateinfo.xml
2. urlgrabber gets all the information it needs, and then truncates the file
with open(self.filename, 'wb'), if it's not in append mode.
3. urlgrabber hits the network and does its thing.
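 A toy model of why step 2 is the dangerous part (download() here is
hypothetical; the real logic lives inside urlgrabber):

  def grab(url, filename):
      # The cached copy is destroyed here, before the transfer is known good.
      fo = open(filename, 'wb')
      try:
          data = download(url)  # hypothetical fetch; raises on network error
      except IOError:
          fo.close()
          raise  # filename is now truncated: the old metadata is gone
      fo.write(data)
      fo.close()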

...now, failing as you suggest is only useful for repomd.xml; in the other
cases we know the current data isn't valid against the current repomd.xml.
 So hacking urlgrabber itself to "fail correctly" is possible, but probably a
bad idea ... and it doesn't solve the much more "normal" problem, where
repomd.xml comes through fine but the other files don't match it.

 Fixing this properly means treating all of the above as an atomic unit. I'd
initially looked at downloading them all to temp. files and then doing a mass
rename if they all worked ... but that doesn't work out too well because they
all come down at different times.
 So at the moment I'm looking at keeping backups of a few versions of each,
and then, if we hit a problem, trying to back out to a correct set ... but:

1. While this will be really nice when it works ... it's not as easy as just
removing an unlink.
2. No matter how much I've tested etc., I'm not dying to put this change in
before the RHEL-5.2 rebase.

...if you really care though, we/I can probably hack the repomd.xml
backup/restore in with a tiny amount of code that I'd be happy to have in
soon ... or you could just wait a release :)
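 That tiny backup/restore hack might look roughly like this (fetch_repomd(),
repo.grab() and the paths are hypothetical):

  import os
  import shutil

  def fetch_repomd(repo):
      path = os.path.join(repo.cachedir, 'repomd.xml')
      backup = path + '.old'
      if os.path.exists(path):
          shutil.copy2(path, backup)  # keep the last known-good copy
      try:
          repo.grab('repodata/repomd.xml', path)  # hypothetical downloader
      except IOError:
          if os.path.exists(backup):
              os.rename(backup, path)  # restore the old repomd.xml
          raise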
Comment 2 James Antill 2008-01-24 14:55:38 EST
 This should hit soon after the 3.2.9 release today.
