Bug 177094 - _sqlite.OperationalError: database is locked
Status: CLOSED NOTABUG
Product: Fedora
Classification: Fedora
Component: yum
Version: 4
Hardware/OS: All Linux
Priority: medium
Severity: medium
Assigned To: Jeremy Katz
Reported: 2006-01-06 02:39 EST by Ralf Corsepius
Modified: 2014-01-21 17:53 EST (History)

Doc Type: Bug Fix
Last Closed: 2006-01-07 16:53:42 EST
Description Ralf Corsepius 2006-01-06 02:39:15 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20050923 Fedora/1.7.12-1.5.1

Description of problem:
# yum update
Setting up Update Process
Setting up repositories
...
Traceback (most recent call last):
  File "/usr/bin/yum", line 29, in ?
    yummain.main(sys.argv[1:])
  File "/usr/share/yum-cli/yummain.py", line 92, in main
    result, resultmsgs = do()
  File "/usr/share/yum-cli/cli.py", line 471, in doCommands
    return self.updatePkgs()
  File "/usr/share/yum-cli/cli.py", line 949, in updatePkgs
    self.doRepoSetup()
  File "/usr/share/yum-cli/cli.py", line 75, in doRepoSetup
    self.doSackSetup(thisrepo=thisrepo)
  File "__init__.py", line 260, in doSackSetup
  File "repos.py", line 287, in populateSack
  File "sqlitecache.py", line 96, in getPrimary
  File "sqlitecache.py", line 89, in _getbase
  File "sqlitecache.py", line 322, in updateSqliteCache
  File "/usr/src/build/539311-i386/install//usr/lib/python2.4/site-packages/sqlite/main.py", line 244, in execute
_sqlite.OperationalError: database is locked


Version-Release number of selected component (if applicable):
yum-2.4.1-1.fc4

How reproducible:
Didn't try


Additional info:
Comment 1 Seth Vidal 2006-01-06 02:43:05 EST
1. was there another yum session running?
2. your cache directory isn't on nfs, is it?
Comment 2 Ralf Corsepius 2006-01-06 02:58:41 EST
(In reply to comment #1)
> 1. was there another yum session running?
No. 

I am not sure what exactly caused this, but I assume it was caused by a kernel
crash that happened in the middle of a "yum update" while investigating PR176997.

cf. https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=176997#c2

> 2. your cache directory isn't on nfs, is it?
It is, but I am using it mutually exclusively between hosts (only one yum runs
at a time on the network sharing yum's cache).
Comment 3 Seth Vidal 2006-01-06 03:06:52 EST
1. okay - chances are the db is not locked but severely broken.

I'll work on a patch to catch the traceback but I don't think yum should do
anything other than gracefully tell you that something else has locked the db.

2. have you been successful in the past with the sqlite db files on nfs?
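The graceful handling Seth describes can be sketched in plain Python with the stdlib sqlite3 module. This is a hypothetical illustration, not yum's actual patch; the function name and message wording are made up:

```python
import sqlite3

def query_cache(path, sql="SELECT 1"):
    """Run a query against an sqlite cache file.

    Returns (rows, None) on success, or (None, message) when the database
    is locked or damaged, instead of letting OperationalError escape as a
    raw traceback the way yum 2.4.1 did.
    """
    try:
        conn = sqlite3.connect(path, timeout=2.0)  # wait up to 2s on a lock
        try:
            rows = conn.execute(sql).fetchall()
        finally:
            conn.close()
        return rows, None
    except sqlite3.OperationalError as exc:
        # "database is locked" can also surface when the file is corrupt,
        # e.g. after a crash mid-write; suggest removing the cache instead
        # of dumping a traceback at the user.
        return None, "cache database unusable (%s); remove it and retry" % exc
```

Removing the cache files and letting yum rebuild them (as Ralf ends up doing in comment 4) is the recovery path such a message would point to.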
Comment 4 Ralf Corsepius 2006-01-06 03:42:34 EST
(In reply to comment #3)
> 1. okay - chances are the db is not locked but severely broken.
Well, ... removing the *.sqlite dbs helped.

> 2. have you been successful in the past with the sqlite db files on nfs?
To some extent, yes. Concurrent yum runs have never worked, but sharing
yum/*/packages did. Also, I have been experiencing unnecessary downloads of
metadata files.

Recently, however, the situation seems to have worsened, but I don't think
these problems are actually related to sharing caches; rather, the newly added
timeout logic interferes with mirror selection.

One of the problems: after a "yum update" has downloaded its metadata and then
failed to update the system because a repository (or the set of repositories)
is inconsistent, yum remains unusable until the timeouts expire.

This typically happens when Extras or Livna introduce broken package
dependencies (i.e. almost every day) and when mirrors are subject to mass
updates. (This situation took place in the last couple of hours: openoffice was
pushed today, and geomview broke Extras yesterday)
Comment 5 Seth Vidal 2006-01-06 03:46:50 EST
With regard to 2: then set metadata_timeout = 0 in your yum.conf under [main],
or set it to some suitably low number (it is a time in seconds), and the
timeout will expire quickly.

The metadata_timeout is just like any other cache: it can get stale or damaged,
and the repository being inconsistent isn't really yum's fault.

So if you don't like the metadata_timeout, or you think the value is too long,
you can adjust it.

This is even mentioned in the yum.conf man page.
Comment 6 Ralf Corsepius 2006-01-07 02:16:59 EST
(In reply to comment #5)
> with regard to 2. Then set metadata_timeout = 0
For completeness: It's metadata_expire ;)

> The metadata_timeout is just like any other cache, it can get stale or 
> damaged and the repository being inconsistent isn't really yum's fault.

I am facing several problems related to "metadata_expire"; however, so far
I've only investigated one:

For local repositories, the typical situation in which yum is applied is:
having built a package for use inside my local network, push it to all
machines on my network ASAP.

i.e. I probably want metadata_expire=0 for local repos.

The problem here is that yum's default behavior changed with the recent yum
update. Before, metadata_expire=0 was implicitly used; now a global default of
1800 is applied, which caused quite some confusion on my side.

Therefore, I'd prefer (and propose) that the global metadata_expire (yum.conf)
default to 0 (or be unset), and that a non-zero metadata_expire be set on a
per-repo basis in /etc/yum.repos.d/*.repo.

> This is even mentioned in the yum.conf man page.
I obviously missed this ;)
Comment 7 Seth Vidal 2006-01-07 09:12:47 EST
If you read the manpage you'll note that metadata_expire can be set per-repo.

just set it there.
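
The per-repo setting Seth refers to would look roughly like the following stanza. This is a hypothetical example (the repo id, name, and baseurl are made up); it keeps one local repository always fresh while leaving the global [main] default untouched:

```ini
# /etc/yum.repos.d/local.repo -- hypothetical local repository
[local]
name=Local packages
baseurl=file:///srv/repo/local
enabled=1
gpgcheck=0
# Expire cached metadata immediately for this repo only; other
# repos keep the global default (1800 seconds at the time).
metadata_expire=0
```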

Comment 8 Ralf Corsepius 2006-01-07 12:30:56 EST
(In reply to comment #7)
> just set it there.
That's what I did, but ...

The problem is: your update changed the default being used during the lifetime
of the distribution, and broke existing installations.

That's something which should not have happened.

Comment 9 Seth Vidal 2006-01-07 16:53:42 EST
no the update added the feature.

it's a NEW feature and it hardly broke anything.

All it does is make a user wait AT MOST 30 minutes before being able to get
fresh metadata OR they can clean the metadata and reset the expiration time.

I'm closing this b/c what you're complaining about now is not a bug.
Comment 10 Ralf Corsepius 2006-01-07 21:30:30 EST
(In reply to comment #9)
> no the update added the feature.
Grrrr - The update changed the behavior.

> it's a NEW feature and it hardly broke anything.
It changed the behavior, and that broke things in situations like mine.
