Red Hat Bugzilla – Bug 177094
_sqlite.OperationalError: database is locked
Last modified: 2014-01-21 17:53:27 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20050923 Fedora/1.7.12-1.5.1
Description of problem:
# yum update
Setting up Update Process
Setting up repositories
Traceback (most recent call last):
File "/usr/bin/yum", line 29, in ?
File "/usr/share/yum-cli/yummain.py", line 92, in main
result, resultmsgs = do()
File "/usr/share/yum-cli/cli.py", line 471, in doCommands
File "/usr/share/yum-cli/cli.py", line 949, in updatePkgs
File "/usr/share/yum-cli/cli.py", line 75, in doRepoSetup
File "__init__.py", line 260, in doSackSetup
File "repos.py", line 287, in populateSack
File "sqlitecache.py", line 96, in getPrimary
File "sqlitecache.py", line 89, in _getbase
File "sqlitecache.py", line 322, in updateSqliteCache
File "/usr/src/build/539311-i386/install//usr/lib/python2.4/site-packages/sqlite/main.py", line 244, in execute
_sqlite.OperationalError: database is locked
Version-Release number of selected component (if applicable):
1. was there another yum session running?
2. your cache directory isn't on nfs, is it?
(In reply to comment #1)
> 1. was there another yum session running?
I am not sure what exactly caused this, but I assume it was caused by a
kernel crash that happened in the middle of a "yum update" while I was investigating PR176997.
> 2. your cache directory isn't on nfs, is it?
It is, but I am using it mutually exclusively between hosts (only one yum
runs at a time among the hosts sharing yum's cache).
1. okay - chances are the db is not locked but severely broken.
I'll work on a patch to catch the traceback but I don't think yum should do
anything other than gracefully tell you that something else has locked the db.
2. have you been successful in the past with the sqlite db files on nfs?
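The graceful handling proposed above amounts to catching the low-level lock error and reporting it in plain language. A minimal sketch, using the modern sqlite3 module as a stand-in for the old pysqlite bindings (the function name and message are hypothetical, not yum's actual code):

```python
import sqlite3

def open_cache_db(path):
    """Open a yum sqlite cache file, turning a low-level 'database is
    locked' error into a friendly message instead of a raw traceback.
    Sketch only; yum's real code paths differ."""
    try:
        conn = sqlite3.connect(path, timeout=5)
        # Force an actual read so a lock held by another process
        # surfaces here rather than later, mid-transaction.
        conn.execute("SELECT name FROM sqlite_master LIMIT 1")
        return conn
    except sqlite3.OperationalError as e:
        if "locked" in str(e):
            raise SystemExit("Cannot open %s: something else has locked "
                             "the db. Is another yum running?" % path)
        raise
```

Note that sqlite's file locking is known to be unreliable over NFS, which is why the corruption rather than a genuine lock is the likelier explanation here.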
(In reply to comment #3)
> 1. okay - chances are the db is not locked but severely broken.
Well, ... removing the *.sqlite dbs helped.
> 2. have you been successful in the past with the sqlite db files on nfs?
To some extent, yes. Concurrent yums have never worked, but sharing
yum/*/packages did. Also, I have been experiencing unnecessary downloads.
In the recent past the situation seems to have worsened, but I don't think
these problems are actually related to sharing caches, but rather to the newly
added timeout handling interfering with mirror selection.
One of the problems: after a "yum update" has downloaded its metadata and then
failed to update the system because a repository (or the set of repositories)
is inconsistent, yum remains unusable until the timeouts expire.
This typically happens when Extras or Livna introduce broken package
dependencies (i.e. almost every day) and when mirrors are subject to mass
updates. (This situation took place in the last couple of hours: openoffice was
pushed today, and geomview broke Extras yesterday)
With regard to 2: set metadata_timeout = 0 in your yum.conf under [main],
or set it to some suitably low number (it is a time in seconds), and the
timeout will expire quickly.
The metadata_timeout is just like any other cache: it can get stale or damaged,
and the repository being inconsistent isn't really yum's fault.
So if you don't like the metadata_timeout, or you think the value is too long,
you can adjust it.
This is even mentioned in the yum.conf man page.
(In reply to comment #5)
> with regard to 2. Then set metadata_timeout = 0
For completeness: It's metadata_expire ;)
> The metadata_timeout is just like any other cache, it can get stale or
> damaged and the repository being inconsistent isn't really yum's fault.
I am facing several problems related to "metadata_expire"; however, so far I've
only investigated one:
For local repositories, the typical situation yum is applied to is "having
built a package for use inside my local network, push it to all machines on
my network ASAP".
I.e., I probably want metadata_expire=0 for local repos.
The problem here is that yum's default behavior changed with the recent update.
Before, metadata_expire=0 was implicitly used; now a global default of 1800 is
used, which caused quite some confusion on my side.
Therefore, I'd prefer (and propose) that the global metadata_expire (yum.conf)
be set to 0 (or left unset), and that a nonzero metadata_expire be set on a
per-repo basis in /etc/yum.repos.d/*.repo.
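The expiry behavior being discussed boils down to a timestamp comparison against the metadata_expire window; a rough sketch (names are hypothetical, not yum's internals):

```python
import time

DEFAULT_METADATA_EXPIRE = 1800  # seconds; the new global default discussed above

def metadata_is_stale(cache_mtime, now=None, expire=DEFAULT_METADATA_EXPIRE):
    """Return True when cached repo metadata is older than the expire
    window. expire=0 means 'always stale', i.e. re-fetch on every run,
    which matches the old implicit behavior."""
    if now is None:
        now = time.time()
    return (now - cache_mtime) >= expire
```

With expire=0 the check is always true, so every yum invocation re-downloads metadata; with 1800 a freshly built local package can stay invisible for up to 30 minutes.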
> This is even mentioned in the yum.conf man page.
I obviously missed this ;)
If you read the manpage you'll note that metadata_expire can be set per-repo.
just set it there.
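For illustration, a per-repo override might look like this (the repo id "localrepo" and the baseurl are made-up examples, not values from this report):

```ini
# /etc/yum.repos.d/localrepo.repo  (illustrative)
[localrepo]
name=Local packages
baseurl=file:///srv/localrepo
enabled=1
# Always treat this repo's metadata as expired, so freshly
# built packages show up immediately:
metadata_expire=0
```

This keeps the 1800-second global default for remote mirrors while making the local repo refresh on every run.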
(In reply to comment #7)
> just set it there.
That's what I did, but ...
The problem is: your update changed the default being used during the lifetime
of the distribution and broke existing installations.
That's something that should not have happened.
No, the update added a feature.
It's a NEW feature, and it hardly broke anything.
All it does is make a user wait AT MOST 30 minutes before being able to get
fresh metadata OR they can clean the metadata and reset the expiration time.
I'm closing this b/c what you're complaining about now is not a bug.
(In reply to comment #9)
> no the update added the feature.
Grrrr - The update changed the behavior.
> it's a NEW feature and it hardly broke anything.
It changed the behavior and broke things in situations like mine.