Bug 566514 - yum is much slower than equivalent rpm operations
Status: CLOSED RAWHIDE
Product: Fedora
Classification: Fedora
Component: yum
Version: 12
Hardware: All
OS: Linux
Priority: low
Severity: medium
Assigned To: James Antill
QA Contact: Fedora Extras Quality Assurance
Depends On:
Blocks:
Reported: 2010-02-18 12:12 EST by Michal Schmidt
Modified: 2014-01-21 18:13 EST (History)
CC: 7 users

Doc Type: Bug Fix
Last Closed: 2010-09-29 17:46:15 EDT


Attachments
- measurement script (863 bytes, application/x-sh), 2010-02-18 12:12 EST, Michal Schmidt
- graph of pread()s done by yum remove (76.37 KB, image/png), 2010-02-19 07:45 EST, Michal Schmidt
- graph of pread()s done by bare-rpm-python-erase.py (296.08 KB, image/png), 2010-02-19 11:00 EST, Michal Schmidt
- Perl script to generate pread() charts (1.76 KB, text/plain), 2010-02-19 11:05 EST, Michal Schmidt
- updated measuring script (1.24 KB, application/x-sh), 2010-09-30 11:31 EDT, Michal Schmidt

Description Michal Schmidt 2010-02-18 12:12:38 EST
Created attachment 394950 [details]
measurement script

Description of problem:
A comment under James Antill's blog post (http://illiterat.livejournal.com/7834.html?thread=23194#t23194) says:
  yum startup time for `yum localinstall' is terribly slow compared
  to simple rpm -ivh / rpm -Uvh. Similar problem with `rpm -e',
  compared to `yum remove'. It often takes less time to remove some
  package with `rpm -e' than it takes yum to start.

So I tested how much slower yum really is compared to rpm when installing/removing a single package. It turns out the commenter was right: yum is slower than rpm, and especially with cold caches the difference is really terrible.

Version-Release number of selected component (if applicable):
rpm-4.7.2-1.fc12.x86_64
yum-3.2.25-1.fc12.noarch

How reproducible:
always

Steps to Reproduce:
Run the attached script, e.g. like this:
./bench-yum.sh /tmp/abrt-desktop-1.0.6-1.fc12.x86_64.rpm
(You can test with any non-installed package for which you already have all the dependencies installed. I specifically chose abrt-desktop because it's just a metapackage with no files, so its installation should be very quick.)
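The attached script is not reproduced here, but the measurement approach can be sketched as follows. This is a hypothetical Python equivalent, not the actual bench-yum.sh (which is shell); the rpm/yum command lines are the ones used in the report.

```python
# Hypothetical sketch of the benchmark: drop the kernel page cache so
# each run starts cold, then time one rpm or yum invocation.
import subprocess
import time

def drop_caches():
    """Flush the page cache so each run starts cold (needs root)."""
    subprocess.call(["sync"])
    try:
        with open("/proc/sys/vm/drop_caches", "w") as f:
            f.write("3\n")
    except OSError:
        pass  # not root: the run will be warm-cache instead

def bench(cmd):
    """Run cmd once after dropping caches; return wall-clock seconds."""
    drop_caches()
    start = time.time()
    subprocess.call(cmd)
    return time.time() - start

# Usage, mirroring the report:
#   bench(["rpm", "-ivh", "/tmp/abrt-desktop-1.0.6-1.fc12.x86_64.rpm"])
#   bench(["yum", "--disableplugin=*", "-y", "localinstall",
#          "/tmp/abrt-desktop-1.0.6-1.fc12.x86_64.rpm"])
```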
  
Actual results:
--------------------------------------------
Test 1: installation using plain rpm:
Measuring command: rpm -ivh /tmp/abrt-desktop-1.0.6-1.fc12.x86_64.rpm
0.54user 0.14system 0:04.91elapsed 14%CPU (0avgtext+0avgdata 33104maxresident)k
22928inputs+4040outputs (69major+3286minor)pagefaults 0swaps
--------------------------------------------
Test 2: removal using plain rpm:
Measuring command: rpm -ev abrt-desktop
0.45user 0.07system 0:02.98elapsed 17%CPU (0avgtext+0avgdata 31808maxresident)k
8832inputs+1720outputs (53major+2824minor)pagefaults 0swaps
--------------------------------------------
Test 3: installation using yum:
Measuring command: yum --disableplugin=* -y localinstall /tmp/abrt-desktop-1.0.6-1.fc12.x86_64.rpm
15.77user 5.24system 3:27.66elapsed 10%CPU (0avgtext+0avgdata 456576maxresident)k
490502inputs+9144outputs (126major+44023minor)pagefaults 0swaps
--------------------------------------------
Test 4: removal using yum:
Measuring command: yum --disableplugin=* -y remove abrt-desktop
14.68user 5.57system 2:33.36elapsed 13%CPU (0avgtext+0avgdata 273056maxresident)k
367502inputs+7712outputs (126major+31639minor)pagefaults 0swaps

So installation of the local package using yum is more than 40 times slower (wall time) than when using rpm. Removal of the package using yum is 50 times slower than using rpm. During the test the disk is busy all the time. Notice the huge difference in I/O counts.
No wonder people still use rpm -i ... and rpm -e ... instead of using yum for everything.

Expected results:
I'd expect yum to take no more than twice the time needed by rpm for the comparable operation.
Comment 1 James Antill 2010-02-18 12:22:45 EST
Try 3.2.26 from rawhide, which has the rpmdb caching code in it.

Although the above looks weird, if I'm reading it right ... is the yum removal time really 2.5 minutes? Is that repeatable? What is it doing?

From a cold cache, install will be slower, because yum doesn't know that it "probably" doesn't need to init the repos ... running with --disablerepo='*' would be fairer. We could maybe look at having yum do that; the hard bit is probably going to be turning them back on if deps are required.
Also, we do honour remote obsoletes atm ... so disabling the repos would break that.
Comment 2 Michal Schmidt 2010-02-18 12:35:42 EST
I am going to upgrade to F13 in a couple of days, then I'll rerun the test.

And yes, you're reading it right, it really is 2.5 minutes; the test is nicely repeatable.
Looking at strace...  a lot of time is spent reading all of /var/lib/rpm/Packages in 4k chunks in an extremely stupid seeky pattern.
Comment 3 James Antill 2010-02-18 13:07:04 EST
Ahh, you meant a really cold cache ... not just "yum clean all". I'm somewhat surprised that rpm doesn't need to look at the same information; certainly being able to load it in order would be nice (there is a yum plugin to hack around this; it basically does cat rpmdb > /dev/null).
 Panu, can you think of a way we could load the pkgtup data in order? Or, in general, what does rpm do here so it doesn't have to?

Alas, we need to load all the pkg data from the rpmdb, at least so we can get the caching breaker data. I've got a change for upstream, though, so we don't do that until after the prompt, which seems noticeably faster even with a warm cache (the full "yum -y remove" will be the same speed as 3.2.26, though).
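The "cat rpmdb > /dev/null" plugin hack mentioned above works by warming the page cache: once the db files have been read sequentially, the later random 4k pread()s hit RAM instead of seeking on disk. A minimal sketch of the idea (hypothetical, not the actual plugin):

```python
# Hypothetical sketch of the preload hack: read every file under
# /var/lib/rpm sequentially so subsequent random reads are cached.
import glob

def preload_rpmdb(dbdir="/var/lib/rpm"):
    """Sequentially read each db file; return total bytes read."""
    total = 0
    for path in sorted(glob.glob(dbdir + "/*")):
        try:
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(1 << 20)  # 1 MiB sequential reads
                    if not chunk:
                        break
                    total += len(chunk)
        except (IOError, OSError):
            pass  # skip subdirectories and unreadable files
    return total
```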
Comment 4 seth vidal 2010-02-18 14:01:54 EST
Michal,
 Do me a favor and run your yum timing test like this:

yum -d3 -y remove ....  | grep  'time:'

and report the lines it kicks back. I'm curious where, individually, all the time is spent.
Comment 5 Michal Schmidt 2010-02-18 15:38:28 EST
Seth,

Here's the result:
# echo 3 > /proc/sys/vm/drop_caches; yum -d3 -y remove --disableplugin=\* abrt-desktop | grep 'time:'
Config time: 1.917
rpmdb time: 4.306
Depsolve time: 12.087
rpm_check_debug time: 0.133
Transaction Test time: 0.626
Transaction time: 78.273

While observing it I noticed that a piece of the process is unaccounted for. After "rpmdb time" was printed it took about a whole minute before "Depsolve time" was printed.
Comment 6 seth vidal 2010-02-18 15:51:27 EST
if you can capture the rest of the output there I'm curious what else it claims is going on.
Comment 7 James Antill 2010-02-18 15:52:23 EST
The minute between rpmdb and Depsolve is probably the thing I just changed:

http://yum.baseurl.org/gitweb?p=yum.git;a=commitdiff;h=47c944baed557d823a8f973a07463410d1e464bd

...as I said, the above won't eliminate the time, but it'll move it to after the y/n prompt, and it'll probably be accounted for in the above stats then.

 It's very interesting that the Transaction itself (almost all rpm code) takes roughly 30x longer than the full rpm -e.
Comment 8 James Antill 2010-02-18 16:12:37 EST
 Note that if you pipe the yum -d 9 output into something like:

perl -ne 'BEGIN {$t = time} print time - $t, ": $_"'

...it'll give timestamps per line.
Comment 9 Michal Schmidt 2010-02-18 16:59:26 EST
Here's a result with -d9 and timestamped lines using James's Perl script:

6: Not loading "blacklist" plugin, as it is disabled
7: Not loading "dellsysidplugin2" plugin, as it is disabled
8: Not loading "whiteout" plugin, as it is disabled
8: Config time: 2.138
8: Yum Version: 3.2.25
8: COMMAND: yum -d9 -y remove --disableplugin=* abrt-desktop 
8: Installroot: /
8: Ext Commands:
8: 
8:    abrt-desktop
8: Reading Local RPMDB
12: rpmdb time: 3.941
12: Setting up Remove Process
59: Resolving Dependencies
59: --> Running transaction check
59: ---> Package abrt-desktop.x86_64 0:1.0.6-1.fc12 set to be erased
59: Checking deps for abrt-desktop.x86_64 0-1.0.6-1.fc12 - e
71: --> Finished Dependency Resolution
71: Dependency Process ending
71: Depsolve time: 12.104
71: 
71: Dependencies Resolved
71: 
71: ================================================================================
71:  Package             Arch          Version               Repository        Size
71: ================================================================================
71: Removing:
71:  abrt-desktop        x86_64        1.0.6-1.fc12          installed         0.0 
71: 
71: Transaction Summary
71: ================================================================================
71: Remove        1 Package(s)
71: Reinstall     0 Package(s)
71: Downgrade     0 Package(s)
71: 
71: Downloading Packages:
71: Running rpm_check_debug
71: Member: abrt-desktop.x86_64 0-1.0.6-1.fc12 - e
71: Removing Package abrt-desktop-1.0.6-1.fc12.x86_64
71: rpm_check_debug time: 0.006
71: Running Transaction Test
71: Member: abrt-desktop.x86_64 0-1.0.6-1.fc12 - e
71: Removing Package abrt-desktop-1.0.6-1.fc12.x86_64
72: Finished Transaction Test
72: Transaction Test Succeeded
72: Transaction Test time: 0.528
72: Member: abrt-desktop.x86_64 0-1.0.6-1.fc12 - e
72: Removing Package abrt-desktop-1.0.6-1.fc12.x86_64
72: Running Transaction
148: 
  Erasing        : abrt-desktop-1.0.6-1.fc12.x86_64                         1/1 
148: Transaction time: 75.988
148: 
148: Removed:
148:   abrt-desktop.x86_64 0:1.0.6-1.fc12                                            
148: 
148: Complete!
Comment 10 Panu Matilainen 2010-02-19 07:09:38 EST
(In reply to comment #3)
> Ahh, you meant a really cold cache ... not just "yum clean all". I'm somewhat
> surprised that rpm doesn't need to look at the same information, certainly
> being able to load it in order would be nice (there is a yum plugin to hack
> around this; basically does cat rpmdb > /dev/null).
>  Panu can you think of a way we could load the pkgtup data in order? Or in
> general what rpm does here so it doesn't have to?

Let's turn this around: why does yum need to load data for every single package in the rpmdb here?

> Alas we need to load all the pkg data from the rpmdb, at least so we can get
> the caching breaker data.

...and this doesn't really answer my question, as I haven't got the slightest idea what that means.

(In reply to comment #7)
>  It's very interesting that the Transaction itself (almost all rpm code) takes
> roughly 30x longer than the full: rpm -e.    

Just between the "almost all rpm code" Running Transaction to Complete, when installing a single trivial package (telnet-0.17-45.fc12.x86_64.rpm here), yum is among other things doing:
- 12661 open() calls to yumdb paths
- 12610 getdents() calls
- 18967 fstat() calls
- 45943 pread() calls

For comparison, the same numbers for rpm itself from execve() to exit_group() (ie the entire operation, not just the transaction) are:
- 147 open() calls
- 4 getdents() calls
- 94 fstat() calls
- 28 pread() calls
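Counts like these can come straight from strace -c; when you also want to filter by path (e.g. count only yumdb accesses), a small post-processor over strace -o output works too. A hedged sketch follows; the regex and timestamp handling are assumptions about the trace format, not how the numbers above were produced.

```python
# Tally calls per syscall name from strace output lines, optionally
# counting only calls whose arguments mention a given path.
import re
from collections import Counter

# Optional leading `strace -tt` timestamp, then "name(args...".
SYSCALL_RE = re.compile(r'(?:[\d:.\s]+)?(\w+)\((.*)')

def count_syscalls(lines, path_filter=None):
    counts = Counter()
    for line in lines:
        m = SYSCALL_RE.match(line)
        if not m:
            continue
        name, args = m.groups()
        if path_filter and path_filter not in args:
            continue
        counts[name] += 1
    return counts
```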
Comment 11 Michal Schmidt 2010-02-19 07:45:08 EST
Created attachment 395090 [details]
graph of pread()s done by yum remove

For fun I used the strace output to draw the seeking pattern of yum accessing the files in /var/lib/rpm. It is dominated by reading /var/lib/rpm/Packages all over the place.

There is an empty space on the graph between 120 s and 215 s; it is spent on open()/getdents()/[f]stat() calls on a huge number of directories in /var/lib/yum/yumdb/.
Comment 12 James Antill 2010-02-19 07:55:39 EST
> Lets turn this around: why does yum need to load data for every single package
> in the rpmdb here?

 For install/update there are currently two places (that I know of) that need to load all packages:

1. Generating the updates data in rpmUtils.updates loads the pkgtups for the rpmdb and the pkgsacks. This is obviously horrible performance-wise, but it'd be a complete rewrite to change it ... and with obsoletes processing I'm not sure it'd be faster anyway.

2. The new rpmdb caching requires a "what state/version/whatever is the rpmdb" so we can know when we have to invalidate our cache (either because we screwed up, or someone altered the rpmdb from outside yum).
 We do: "yum version nogroups installed" (although you'd need to "yum clean rpmdb" to make sure it's doing it). You might think this is a little over the top, but it pretty much guarantees we don't introduce any problems due to the caching ... and we use the same "rpmdb version" for other features ... so it's "free" for varying definitions of that word :).
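The "rpmdb version" idea above (hash something derived from every installed package, so any modification made outside yum yields a different value and invalidates the cache) can be sketched roughly like this. Illustrative only; yum's actual algorithm and output format may differ.

```python
# Illustrative cache-invalidation marker: hash all installed package
# tuples so any rpmdb change produces a different version string.
import hashlib

def rpmdb_version(pkgtups):
    """pkgtups: iterable of (name, arch, epoch, version, release)."""
    tups = sorted(str(t) for t in pkgtups)
    h = hashlib.sha1()
    for t in tups:
        h.update(t.encode("utf-8"))
    return "%d:%s" % (len(tups), h.hexdigest())
```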

...for removals we don't need to do #1, but now we still need to do #2. To load the packages we do, basically:

         for hdr in mi:

...and, as said, that's pretty "seeky" ... I'm not sure you can do anything, but I thought I'd ask.
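Fleshed out, the loop above is rpm-python's match-iterator walk: each iteration forces a full header read out of /var/lib/rpm/Packages, which is where the seeky pread() pattern comes from. A sketch (the helper name is made up):

```python
# Collect (name, arch, epoch, version, release) tuples from an iterator
# of header-like mappings; with rpm-python, every iteration step is a
# header load from the Packages db file.
def load_pkgtups(mi):
    """mi: iterable of header-like mappings (rpm's ts.dbMatch() in yum)."""
    return [(h['name'], h['arch'], h['epoch'], h['version'], h['release'])
            for h in mi]

# Driven by rpm-python it would look roughly like:
#   import rpm
#   ts = rpm.TransactionSet()
#   pkgtups = load_pkgtups(ts.dbMatch())  # one header read per package
```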

> Just between the "almost all rpm code" Running Transaction to Complete, when
> installing a single trivial package (telnet-0.17-45.fc12.x86_64.rpm here), yum
> is among other things doing
> - 12661 open() calls to yumdb paths

 Ahh, sorry, that's the yumdb init code's fault as it does:

        glb = '%s/*/*/' % self.conf.db_path
        pkgdirs = glob.glob(glb)

...to preload all the package paths. I thought we'd removed that. I think it's there for sync_with_rpmdb, but it could be done at that point instead of on init (feel free to just set "pkgdirs = []" if you want to test we haven't screwed something else up).
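One way to apply James's suggestion here (keep the same glob pattern, but defer the expensive directory scan until something actually needs the package list instead of running it at init time) is a lazy property. A sketch, not yum's actual code:

```python
# Sketch of deferring the yumdb directory scan: glob only on first use.
import glob

class LazyYumDB(object):
    def __init__(self, db_path):
        self.db_path = db_path
        self._pkgdirs = None  # was populated eagerly with glob.glob()

    @property
    def pkgdirs(self):
        if self._pkgdirs is None:  # first use: do the expensive scan
            self._pkgdirs = glob.glob('%s/*/*/' % self.db_path)
        return self._pkgdirs
```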
Comment 13 seth vidal 2010-02-19 09:29:51 EST
  okay using this script:
http://skvidal.fedorapeople.org/misc/bare-rpm-python-erase.py

which doesn't use yum at all - just rpm-python - I get the following results when I run these commands:
echo 3 > /proc/sys/vm/drop_caches
time strace -o /tmp/simple-py-erase.txt python bare-rpm-python-erase.py
real	0m38.326s
user	0m1.506s
sys	0m2.915s

373 open()
10507 pread()
4 getdents()
221 fstat()

but it took 38s to remove one pkg (zsh).


any thoughts?
Comment 14 Michal Schmidt 2010-02-19 10:23:42 EST
Seth,
I see you're removing the 'zsh' package in your test. This package contains about 900 files, so it takes longer to remove than 'abrt-desktop' or 'telnet'.

38 seconds might be considered long, but this case should be discussed in a different BZ, because it does not demonstrate any difference between rpm and yum.
At least on my system 'bare-rpm-python-erase.py' takes about the same time as 'rpm -e zsh' (1m13s vs. 1m11s - it's a laptop with encrypted disk).
Comment 15 seth vidal 2010-02-19 10:32:38 EST
When I don't drop the cache first it takes 3s to remove zsh.

so there's 35s of _something_ there.

That's why I'm pointing it out.

I suspect that a good portion of that 35s is loading python.
Comment 16 Michal Schmidt 2010-02-19 10:47:06 EST
If I comment out the last line of your script:

#ts.run(f.callback, '')

... the script finishes in 4-5 seconds. That should be a closer estimate of the Python overhead.
Comment 17 Panu Matilainen 2010-02-19 10:49:16 EST
(In reply to comment #13)
> but it took 38s to remove one pkg. (zsh) 
> 
> any thoughts?    

Yeah: it's a lot and it sucks... The telnet test-case happened to be pretty much
a best case behavior as it doesn't trigger any basename-matches in the
database, whereas zsh (and almost any average package) causes rpm to rumble
through the db for basename matches for fingerprinting. OTOH the
telnet-testcase was mostly for pointing out yum overhead over rpm, which this
bug is about.

The difference between the "bare python" erase script and 'rpm -e' is not
really measurable; I get ~17s for both consistently.

There's a long saga about rpm's inefficient db reading in bug 536818. Haven't
yet really looked into it but it seems likely that some improvement could be
made just by tuning the db configuration.
Comment 18 Michal Schmidt 2010-02-19 11:00:46 EST
Created attachment 395132 [details]
graph of pread()s done by bare-rpm-python-erase.py

FWIW, here's a chart of pread() accesses into files under /var/lib/rpm from a bare-rpm-python-erase.py run. When 'rpm -e zsh' is used instead, the chart looks almost the same. Obviously rpm accesses /var/lib/rpm/Packages in an inefficient manner here.
Comment 19 Michal Schmidt 2010-02-19 11:05:55 EST
Created attachment 395135 [details]
Perl script to generate pread() charts

Here's the hacky Perl script I used to generate the charts. It expects the output of strace -tt in /tmp/yum.trace and it has many known deficiencies ;-)
Comment 20 James Antill 2010-02-19 15:05:32 EST
At the risk of having "he spent all his life removing seconds from other people's" on my tombstone...

Ok, so this does appear to be all yum's fault ... sorry, Panu!

Here are some stats from a local (pretty underpowered) VM, all stats. are taken after dropping the kernel cache.

rpm -e zziplib
  3.18s user 3.06s system 65% cpu 9.554 total

yum-3.2.25-7.fc13 -y rm zziplib
  18.44s user 5.72s system 26% cpu 1:32.19 total
  And it takes roughly 32s to get to the user prompt

yum-upstream rm zziplib -y
  14.63s user 5.56s system 25% cpu 1:19.58 total
  And it takes roughly 9s to get to the user prompt

yum-local rm zziplib -y
  10.97s user 4.04s system 25% cpu 58.867 total

yum-local-no-history rm zziplib -y
  5.35s user 1.70s system 24% cpu 29.161 total

...the difference between upstream and local is an in-progress patch to "cache" the checksum data for all the packages. The difference between that and the last one is that I turned history_record=false in yum.conf and also didn't do any of the rpmdb caching after the transaction (i.e. no rpmdb version calc).
Comment 21 Jan Kratochvil 2010-07-28 16:06:39 EDT
The longest part of a yum install/update is the final "hang", while strace shows:
stat("/var/lib/yum/yumdb/...
open("/var/lib/yum/yumdb/...
83M	/var/lib/yum/yumdb/
I am curious why yum maintains its yumdb database as a directory structure when yum already uses sqlite (and rpm uses db4 etc.).
Comment 22 James Antill 2010-07-28 16:55:39 EDT
It's very possible that there are some optimizations available for yumdb read/write cases. If you have data, it'd be interesting. Note that 3.2.28 has some improvements (probably want to test before/after running: hardlink -c /var/lib/yum/yumdb).
3.2.28 also has some other optimizations to get it closer to rpm performance (esp. for remove), I might have time to do some comparisons again soon.

As for why not sqlite/etc. see:

http://yum.baseurl.org/wiki/YumDB
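As the YumDB page describes the design, yumdb stores one attribute per plain file under a per-package directory, so a lookup is a single small open()+read() with no database library involved. A minimal sketch of such a read; the directory layout shown is an assumption.

```python
# Sketch of a yumdb attribute read: one attribute == one small file.
import os

def yumdb_get(db_path, pkgdir, attr):
    """pkgdir like 'z/<csum>-zsh-4.3.10-5-x86_64' (layout assumed),
    attr like 'from_repo' or 'checksum_data'."""
    with open(os.path.join(db_path, pkgdir, attr)) as f:
        return f.read()
```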
Comment 23 Jan Kratochvil 2010-07-28 17:46:59 EDT
(In reply to comment #22)
> If you have data, it'd be interesting.

i7-920 6GB RAM RAID1 disks.
sync; echo 3 > /proc/sys/vm/drop_caches
time yum -y install hardlink
real	2m7.581s

#!/usr/bin/stap
global last
probe syscall.open {
    filename = user_string($filename)
    if (substr(filename, 0, 19) == "/var/lib/yum/yumdb/") {
        now = gettimeofday_s()
        if (last != now) {
            last = now
            printf("yumdb access %d\n", last)
        }
    }
}

shows that 75 seconds before and 5 seconds after have been spent just on yumdb.
I already use metadata_expire=never and a cron.daily yum makecache, as it would take several more minutes otherwise.

BTW rpm -i takes 0m3.191s.


> Note that 3.2.28 has some improvements

yum-3.2.27-4.fc13.noarch here.


> (probably want to test before/after running: hardlink -c /var/lib/yum/yumdb).

This is a workaround; you should open a yum bug if it is required and not done automatically.


> http://yum.baseurl.org/wiki/YumDB    

There is no reason not to use a database any webmaster can use.
Comment 24 Michal Schmidt 2010-07-29 09:57:39 EDT
(In reply to comment #22)
> It's very possible that there are some optimizations available for yumdb
> read/write cases. If you have data, it'd be interesting. Note that 3.2.28 has
> some improvements (probably want to test before/after running: hardlink -c
> /var/lib/yum/yumdb).
> 3.2.28 also has some other optimizations to get it closer to rpm performance
> (esp. for remove), I might have time to do some comparisons again soon.

I am running Rawhide and the latest yum from git (yum-3-2-27-270-gc9ff70d). It still reads all the /var/lib/yum/yumdb/*/*/checksum_data files (about 1700 files in my case) at the end of the transaction (doing "yum remove telnet").
Comment 25 Penelope Fudd 2010-08-11 15:55:16 EDT
I'm running Fedora12, and just now I'm installing kdegames with yumex.  
After 5 minutes of updating various caches from the network, it determined that there are 4 packages to install:
  gnugo-3.8-2.fc12.i686.rpm (1.1 M)
  kdegames-4.4.5-1.fc12.i686.rpm (45 M)
  kdegames-libs-4.4.5-1.fc12.i686.rpm (1.1 M)
  kdegames-minimal-4.4.5-1.fc12.i686.rpm (14 M)
Downloading took a minute.
Yum is on the 'rpm_check_debug' step for *35 minutes* and counting.

I'm sorry, but yum is a steaming pile of inefficient code.

I want to go from typing 'yumex' to a fully populated, fast, and responsive screen in less than 10 seconds.  We have the technology (indexing and caching).  What we lack is the will.
Comment 26 seth vidal 2010-08-11 16:32:08 EDT
rpm_check_debug is what was being output at the time?

I ask b/c that code is EXCLUSIVELY in the ts.check() routine.

If it is running for more than a second or two, then you most likely have a locked up rpmdb.

Kill all rpmdb-accessing processes and rebuild the rpmdb.

If you have the will to fix berkeley db lockups please do so.
Comment 27 Penelope Fudd 2010-08-11 16:43:33 EDT
Actually, I found that the output in yumex differs from the output in the terminal where yumex was started; it was actually on the transaction test.

And I found out why it was hanging; I had a stale sshfs mount in /mnt.  Yum was locked up tighter than a drum; lsof hung when looking at that process, only kill -9 got rid of it.  'umount -l /mnt/asdf' fixed it.

The startup speed of yum/yumex still isn't anything to crow about, but at least I can get things done.
Comment 28 seth vidal 2010-08-11 16:50:44 EDT
transaction test is ALSO in rpm.

it would happen if you ran yum, yumex, apt or rpm.

And the problem you had with a stale mount causing rpm to hang has:

1. been fixed in recent rpms
2. is entirely internal to rpm.

If you have specific startup speed concerns please post the output of:

yum -d3 yourcommand | grep 'time:'

that will help nail down specific problem areas.
Comment 29 James Antill 2010-09-29 17:46:15 EDT
I've fixed the "bugs" pointed out by comment #24 ... thanks Michal, etc. Upstream git has the code, as does rawhide (and my repos.f.o rebuilds of rawhide).

The nature of them was that we were dropping all of our caches at various points, just to be "safe" ... but with the introduction of yumdb/etc. that became very expensive, and nobody really tested doing small installs/removes (or cared that much about a few extra seconds, if they did).

We now have overhead of ~50% to 100% (depending on whether you enable history) over rpm ... feel free to test yourselves. I updated the YumBenchmark wiki page.

As with all perf. stuff, we could always do better and it's possible we'll regress ... but this bug seems done now, so I'm going to close. As always anyone can open another bug, if they find another issue.
Comment 30 Michal Schmidt 2010-09-30 11:31:45 EDT
Created attachment 450794 [details]
updated measuring script

yum-3.2.28-8.fc15 is definitely a big improvement! The long wait at the end of the transaction is gone. Thank you James & Seth.

Here are some results I took with rpm-4.8.1-5.fc15.x86_64, yum-3.2.28-8.fc15. "history_record=no" was set in yum.conf. These results are not directly comparable with the ones I posted in the past, because they were from a different machine.

# sh bench-yum.sh telnet-0.17-47.fc14.x86_64.rpm 
--------------------------------------------
Test 1: installation using plain rpm:
rpm -i telnet-0.17-47.fc14.x86_64.rpm
Elapsed time: 6.65 s
--------------------------------------------
Test 2: removal using plain rpm:
rpm -e telnet
Elapsed time: 6.51 s
--------------------------------------------
Test 3: installation using yum; will take longer than usual because we changed rpmdb outside of yum:
yum -q --disableplugin=* --disablerepo=* -y --nogpg install telnet-0.17-47.fc14.x86_64.rpm
Elapsed time: 38.65 s
--------------------------------------------
Test 4: removal using yum:
yum -q --disableplugin=* --disablerepo=* -y remove telnet
Elapsed time: 11.58 s
--------------------------------------------
Test 5: again installation using yum:
yum -q --disableplugin=* --disablerepo=* -y --nogpg install telnet-0.17-47.fc14.x86_64.rpm
Elapsed time: 13.79 s
--------------------------------------------
Test 6: again removal using yum:
yum -q --disableplugin=* --disablerepo=* -y remove telnet
Elapsed time: 11.85 s

So for the simple (very few files, no scriptlets) package, the overhead of yum vs. rpm was slightly more than 100%. But I'd say it is acceptable.

Another test with an even simpler package (no files at all):

# sh bench-yum.sh abrt-desktop-1.1.13-2.fc15.x86_64.rpm 
--------------------------------------------
Test 1: installation using plain rpm:
rpm -i abrt-desktop-1.1.13-2.fc15.x86_64.rpm
Elapsed time: 2.20 s
--------------------------------------------
Test 2: removal using plain rpm:
rpm -e abrt-desktop
Elapsed time: 1.16 s
--------------------------------------------
Test 3: installation using yum; will take longer than usual because we changed rpmdb outside of yum:
yum -q --disableplugin=* --disablerepo=* -y --nogpg install abrt-desktop-1.1.13-2.fc15.x86_64.rpm
Elapsed time: 36.18 s
--------------------------------------------
Test 4: removal using yum:
yum -q --disableplugin=* --disablerepo=* -y remove abrt-desktop
Elapsed time: 10.56 s
--------------------------------------------
Test 5: again installation using yum:
yum -q --disableplugin=* --disablerepo=* -y --nogpg install abrt-desktop-1.1.13-2.fc15.x86_64.rpm
Elapsed time: 11.24 s
--------------------------------------------
Test 6: again removal using yum:
yum -q --disableplugin=* --disablerepo=* -y remove abrt-desktop
Elapsed time: 10.63 s

Plain rpm can install/remove such a package very quickly. Most of the time in the yum case seems to be spent on the seeky pread()ing of /var/lib/rpm/Packages which plain rpm somehow avoids here. Anyway, this is not a typical case.
I guess if rpm bug 536818 gets fixed, yum will benefit.
