Bug 177094
| Summary: | _sqlite.OperationalError: database is locked | | |
| --- | --- | --- | --- |
| Product: | [Fedora] Fedora | Reporter: | Ralf Corsepius <rc040203> |
| Component: | yum | Assignee: | Jeremy Katz <katzj> |
| Status: | CLOSED NOTABUG | QA Contact: | |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 4 | CC: | katzj |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2006-01-07 21:53:42 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Ralf Corsepius
2006-01-06 07:39:15 UTC
Comment 1 (Jeremy Katz):
1. Was there another yum session running?
2. Your cache directory isn't on NFS, is it?

Comment 2 (Ralf Corsepius):
(In reply to comment #1)
> 1. Was there another yum session running?
No. I am not sure what exactly caused this, but I assume it was caused by a kernel crash that happened in the middle of a "yum update" while I was investigating PR176997; cf. https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=176997#c2
> 2. Your cache directory isn't on NFS, is it?
It is, but I am using it mutually exclusively between hosts (only one yum at a time runs in the network sharing yum's cache).

Comment 3 (Jeremy Katz):
1. Okay, chances are the db is not locked but severely broken. I'll work on a patch to catch the traceback, but I don't think yum should do anything other than gracefully tell you that something else has locked the db.
2. Have you been successful in the past with the sqlite db files on NFS?

Comment 4 (Ralf Corsepius):
(In reply to comment #3)
> 1. Okay, chances are the db is not locked but severely broken.
Well, removing the *.sqlite dbs helped.
> 2. Have you been successful in the past with the sqlite db files on NFS?
To some extent, yes. Concurrent yums have never worked, but sharing yum/*/packages did. I have also been experiencing unnecessary downloads of metadata files. Recently the situation seems to have worsened, but I don't think these problems are actually related to sharing caches; rather, the newly added timeout handling interferes with mirror selection.

One of the problems: after a "yum update" has downloaded its metadata and then failed to update the system because a repository (or the set of repositories) is inconsistent, yum remains unusable until the timeouts expire. This typically happens when Extras or Livna introduce broken package dependencies (i.e. almost every day) and when mirrors are subject to mass updates. (This situation took place in the last couple of hours: openoffice was pushed today, and geomview broke Extras yesterday.)
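The graceful handling described in comment #3 would look roughly like the sketch below. This is only an illustration, not yum's actual patch: the cache path, function name, and messages are invented for the example, and the standard sqlite3 module stands in for the older _sqlite binding seen in the traceback.

```python
import sqlite3
import sys

# Hypothetical cache file, used only for this illustration.
CACHE_DB = "/var/cache/yum/extras/primary.xml.gz.sqlite"

def open_metadata_db(path):
    """Open a cached metadata db, reporting locks and corruption readably."""
    try:
        conn = sqlite3.connect(path, timeout=5)  # wait briefly for a lock holder
        conn.execute("SELECT name FROM sqlite_master LIMIT 1")  # force a real read
        return conn
    except sqlite3.OperationalError as e:
        if "locked" in str(e):
            sys.exit("%s is locked; another yum process may be using it." % path)
        sys.exit("Cannot open %s: %s" % (path, e))
    except sqlite3.DatabaseError as e:
        # A crash in the middle of an update can leave a truncated file that is
        # "not a database"; removing the cached *.sqlite files and retrying recovers.
        sys.exit("%s looks damaged (%s); remove the cached *.sqlite files and retry."
                 % (path, e))

if __name__ == "__main__":
    open_metadata_db(CACHE_DB)
```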
Comment 5 (Jeremy Katz):
With regard to 2: then set metadata_timeout = 0 in your yum.conf under [main], or set it to some suitably low number (it is a time in seconds), and the timeout will expire quickly. The metadata_timeout is just like any other cache: it can get stale or damaged, and the repository being inconsistent isn't really yum's fault. So if you don't like the metadata_timeout, or you think the value is too long, you can adjust it. This is even mentioned in the yum.conf man page.

Comment 6 (Ralf Corsepius):
(In reply to comment #5)
> With regard to 2: then set metadata_timeout = 0
For completeness: it's metadata_expire ;)
> The metadata_timeout is just like any other cache: it can get stale or
> damaged, and the repository being inconsistent isn't really yum's fault.
I am facing several problems related to "metadata_expire"; however, so far I've only investigated one. For local repositories, the typical way yum is used is "having built a package for use inside my local network, push it to all machines on the network ASAP", i.e. I probably want metadata_expire=0 for local repos. The problem here is that the default behavior of yum changed with the recent yum update. Before, metadata_expire=0 had implicitly been used; now a global default of 1800 is used, which caused quite some confusion on my side. Therefore, I'd prefer (and propose) that the global metadata_expire (yum.conf) be set to 0 (or left unset) and that a nonzero metadata_expire be set on a per-repo basis in /etc/yum.repos.d/*.repo.
> This is even mentioned in the yum.conf man page.
I obviously missed this ;)

Comment 7 (Jeremy Katz):
If you read the man page you'll note that metadata_expire can be set per-repo. Just set it there.

Comment 8 (Ralf Corsepius):
(In reply to comment #7)
> Just set it there.
That's what I did, but the problem is: your update changed the default being used during the lifetime of the distribution and broke existing installations. That's something which should not have happened.

Comment 9 (Jeremy Katz):
No, the update added the feature. It's a NEW feature and it hardly broke anything. All it does is make a user wait AT MOST 30 minutes before being able to get fresh metadata, OR they can clean the metadata and reset the expiration time. I'm closing this because what you're complaining about now is not a bug.

Comment 10 (Ralf Corsepius):
(In reply to comment #9)
> No, the update added the feature.
Grrrr - the update changed the behavior.
> It's a NEW feature and it hardly broke anything.
It changed the behavior and broke the behavior in situations like mine.
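For reference, the per-repo setting discussed in comments #5 through #7 is a single line in a repo definition. A minimal sketch follows, assuming a hypothetical repo id, name, and baseurl; only the metadata_expire line matters here.

```ini
# /etc/yum.repos.d/local.repo  (repo id, name, and baseurl are made-up examples)
[local]
name=Local packages
baseurl=file:///srv/repo/local
enabled=1
gpgcheck=0
# Seconds before cached metadata is considered stale; 0 forces a refresh on
# every run. This overrides the global default (1800 at the time of this bug)
# set under [main] in /etc/yum.conf.
metadata_expire=0
```

With this in place, the 1800-second global default under [main] still applies to every other repository.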