Red Hat Bugzilla – Bug 811544
yum and rpm very slow on some SSDs
Last modified: 2014-01-21 18:21:52 EST
yum was taking several hours to perform simple updates and installations, so I tried rebuilding the RPM database, etc., without anything solving the problem. Since the slowness occurred mostly while "Running Transaction Check" and "Running Transaction Test" were displayed, I realized that this is the time that yum is writing to disk.
I have a 180 GB Corsair Force GT SSD and decided to focus on optimizing that to see if I could solve the problem. The problem was only solved when I reduced swappiness to 1 and switched the I/O scheduler to noop. I did this by creating an executable /etc/rc.d/rc.local with the following contents:
echo 1 > /proc/sys/vm/swappiness
echo noop > /sys/block/sda/queue/scheduler
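For reference, whether the tweaks took effect after a reboot can be checked by reading the same files back; a minimal sketch, assuming the SSD is sda as in the rc.local above:

```shell
#!/bin/sh
# Sketch: report the current swappiness and I/O scheduler settings,
# so the effect of the rc.local tweaks can be verified after a reboot.
# Assumes the SSD is sda, as in the rc.local above.
[ -r /proc/sys/vm/swappiness ] && \
    echo "swappiness: $(cat /proc/sys/vm/swappiness)"
# The active scheduler is the one in square brackets,
# e.g. "[noop] deadline cfq".
[ -r /sys/block/sda/queue/scheduler ] && \
    echo "scheduler:  $(cat /sys/block/sda/queue/scheduler)"
exit 0
```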
Maybe this has something to do with the scheduler? Interestingly, all other applications were working very quickly, so this appears to be related to something that yum / rpm does when the disk is accessed.
Version-Release number of selected component (if applicable):
Loaded plugins: fastestmirror, langpacks, presto, refresh-packagekit
Installed: 16/i386 1736:48961917980528609eb5cbfa06d1ceef2d3423c2
Steps to Reproduce:
1. Install Fedora 16 onto a Corsair Force GT SSD
2. I think this happened after the first yum update
3. Try to install or update any package
yum appears to hang when "Running Transaction Check" is displayed; when it finally reaches "Running Transaction Test" it is just as slow.
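One crude way to check whether slow synchronous small writes are to blame (an assumption on my part; the probe below is a sketch and not anything yum itself runs) is to time a batch of flushed 4 KiB writes with dd:

```shell
# Hypothetical probe: 100 x 4 KiB writes, each forced to disk (oflag=dsync),
# roughly the fsync-heavy access pattern of a package transaction.
# If this takes many seconds, synchronous writes on this disk/scheduler
# combination are slow; on a working setup it should finish quickly.
dd if=/dev/zero of=./syncprobe.tmp bs=4k count=100 oflag=dsync
rm -f ./syncprobe.tmp
```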
If the extreme slowness occurred on fresh install where no updates had yet been applied, it's likely to be related to bug 752897, sort of addressed in this update:
For many users that update has made a world of difference; on my own systems I don't really notice anything at all. Where exactly this difference in behavior comes from is somewhere in the kernel (and hardware) land: in my tests (see bug 752897) a dumb test-case went from 2m 39s to 7m 53s between kernel 2.6.39 and 3.2.3. Given how much stuff changes in every kernel version, I've no idea what might have caused it. Might as well be changes in the I/O scheduler (on some hardware or .. something)
Not wanting to undo my fixes, and seeing that you already have a patch that would probably have fixed my problem, I may as well close this bug.
*** This bug has been marked as a duplicate of bug 752897 ***