Red Hat Bugzilla – Bug 812486
deltarpm rebuild is slower than download speed
Last modified: 2013-07-31 21:06:51 EDT
Nowadays, 10Mbit+ speeds are quite usual, even for residential or mobile connections; in these situations, the deltarpm rebuild phase takes more time than downloading the full packages. For example, on my machine (an i5 with an SSD disk) deltarpm averages around 1.1MB/s, while the download speed I get from home is around 2MB/s.
This is even more noticeable on slower hardware with a fast connection; with an Atom netbook I am not able to go past 550KB/s when using deltarpm, while the network card is still capable of delivering 2MB/s.
I am not sure what could be a good solution to this problem; deltarpm is still valuable when the bandwidth is really limited, or when network traffic might be billed per-volume (e.g. with mobile connections, which are usually not this fast anyway). While the latter is easy to detect (e.g. NetworkManager knows this), the former might be trickier.
Here I'm working on an HP Z600 with two hexa-core Xeons, 24GB of RAM and two Samsung 840 Pros in RAID0, on a nearly 1Gbit/s Internet connection.
Seeing my deltarpms rebuilding at 1.1MB/s makes me wanna cry! Isn't there some way to speed this rebuild up, or at least parallelize it? I haven't looked into how complicated the rebuild process is, but my feeling says that 1.1MB/s is a bit slow for such modern hardware; unrar on a single CPU is able to push out something like 40-50MB/s.
The reason deltarpm is so heavy on CPU usage is that it has to rebuild the rpm exactly as it is, using the same compression as was originally used. This means that on slower hardware it takes a lot of work to recompress the rpm so the signatures match.
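The gap between compression algorithms can be seen with a quick stdlib comparison. This is only an illustrative sketch, not what rpm actually does: the 2MiB synthetic payload and the xz preset 6 are assumptions, but the asymmetry it demonstrates (xz compression being far slower than gzip) is the effect described above.

```python
import gzip
import lzma
import os
import time

# Synthetic payload standing in for an uncompressed rpm cpio archive:
# half incompressible, half highly repetitive.
payload = os.urandom(1 << 20) + b"A" * (1 << 20)

for name, compress in [("gzip", gzip.compress),
                       ("xz", lambda d: lzma.compress(d, preset=6))]:
    t0 = time.perf_counter()
    blob = compress(payload)
    dt = time.perf_counter() - t0
    print(f"{name}: {len(blob)} bytes in {dt:.3f}s")
```

On typical hardware the xz line takes several times longer than the gzip line, which is why the "compress so the signatures match" step dominates the rebuild on slow CPUs.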
Sascha, not sure what version of Fedora you're running, but we did some work on making deltarpm download and rebuild in parallel for Fedora 18. I'd be interested to see what difference you see using yum-presto-0.9.
I just upgraded from yum-presto-0.7.3-1.fc17 to yum-presto-0.9.0-1.fc19, removed the latest kernel and did "yum clean", "yum update". But I'm not able to trigger a deltarpm rebuild (I guess "yum clean" also removed the latest rpm to delta against). Is there a smart way to trigger or test the rpm rebuild?
Jonathan, I guess you have to extract the whole old RPM and apply the diff to the extracted versions of the files, as diffing a gzipped archive wouldn't be useful at all.
But I don't get why you can't just "smartly" binary-diff the cpio section of the RPMs, gzip that diff, and on the client side just apply the diff on the fly. That can't be too expensive, right?
The "smart diff" might be a bit tricky to implement and expensive, but the client should then see rebuild speeds equal to a parallel "gunzip | bindiff | gzip".
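For concreteness, "apply this diff on the fly" could look like the following toy sketch. The delta format here (a list of copy/insert ops) is hypothetical and much simpler than deltarpm's real encoding; applying such a delta is cheap, which is the point being made above.

```python
# Hypothetical minimal delta format: a list of ops, each either
#   ("copy", offset, length)  -> take bytes from the old payload, or
#   ("insert", literal_bytes) -> new bytes shipped inside the delta.
def apply_delta(old: bytes, ops) -> bytes:
    out = bytearray()
    for op in ops:
        if op[0] == "copy":
            _, off, length = op
            out += old[off:off + length]
        else:
            out += op[1]
    return bytes(out)

old = b"the quick brown fox jumps over the lazy dog"
ops = [("copy", 0, 10), ("insert", b"red"), ("copy", 15, 28)]
new = apply_delta(old, ops)
print(new)  # b'the quick red fox jumps over the lazy dog'
```

Applying the ops is a pure memory copy; as the following comment explains, the expensive part in the real tool is not this step but recompressing the result.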
The best way to test is to downgrade multiple rpms and then update again.
I may be missing something, but as far as I understand what you're saying in Comment 4, that's what deltarpm does (except it uses xz rather than gzip, which is very expensive for compression).
To clarify, on the server we:
1. Uncompress both old rpm and new rpm, and generate binary delta between them
2. Compress delta using whatever compression is used for the rpm.
On the client we:
1. Check what rpm is installed
2. Download delta between installed rpm and new rpm.
3. Grab uncompressed files from filesystem for installed rpm and apply delta to them, creating uncompressed new rpm
4. In sequence with #3, compress new rpm so signatures match
5. Install generated new rpm which is byte-for-byte identical to the original compressed new rpm.
Step #3 puts the most pressure on the filesystem, while step #4 is the most CPU intensive. The only change we've made in yum-presto-0.9 is that we now will attempt to download and build multiple deltarpms at one time, rather than one at a time. Deltarpm still doesn't do well at using multiple cores when building one rpm.
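The server and client steps above can be sketched end to end as a toy model. Everything here is simplified: the naive prefix/suffix diff stands in for deltarpm's real binary-delta algorithm, Python's lzma stands in for rpm's payload compression, and the payloads are synthetic. What it does show correctly is the key invariant from step #5: because recompression is deterministic, the rebuilt archive is byte-for-byte identical to the one on the mirror, so checksums and signatures match.

```python
import hashlib
import lzma

def make_delta(old: bytes, new: bytes):
    """Toy server-side diff (steps 1-2): reuse the longest common prefix
    and suffix of the two uncompressed payloads; ship only the middle."""
    p = 0
    while p < min(len(old), len(new)) and old[p] == new[p]:
        p += 1
    s = 0
    while (s < min(len(old), len(new)) - p
           and old[len(old) - 1 - s] == new[len(new) - 1 - s]):
        s += 1
    return p, s, new[p:len(new) - s]

def rebuild(old: bytes, delta) -> bytes:
    p, s, middle = delta
    # Step 3: splice bytes from the installed payload around the literal.
    new = old[:p] + middle + (old[len(old) - s:] if s else b"")
    # Step 4: recompress so the result matches the original byte-for-byte.
    return lzma.compress(new, preset=6)

old_payload = b"A" * 5000 + b"version=17" + b"B" * 5000
new_payload = b"A" * 5000 + b"version=18" + b"B" * 5000

original_rpm = lzma.compress(new_payload, preset=6)  # what the mirror serves
delta = make_delta(old_payload, new_payload)         # tiny vs. the full rpm
rebuilt = rebuild(old_payload, delta)

# Step 5's precondition: signatures can only match if the bytes match.
assert hashlib.sha256(rebuilt).digest() == hashlib.sha256(original_rpm).digest()
```

Note that `rebuild` spends almost all of its time inside `lzma.compress`, mirroring why step #4 dominates on slow CPUs.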
Thank you for the brilliant explanation. So yes, I had forgotten there was a switch to xz. Maybe it's just xz that is slowing down the process to that degree; I assumed it was using gzip, which should do much better on one core.
"yum list kernel" only shows the most recent version, and
yum install kernel-3.6.6-1.fc17.x86_64
No Match for argument: kernel-3.6.6-1.fc17.x86_64
I will just wait for the next update to come into the repository and report how the delta rebuild does.
Thanks again for your time.
Jonathan, looks like it is faster with your fix:
locally rebuilding delta 97% [==============-] 3.4 MB/s | 41 MB 00:00 ETA
It's still not as fast as downloading, but it's a lot better than before.
This message is a reminder that Fedora 17 is nearing its end of life. Approximately four weeks from now Fedora will stop maintaining and issuing updates for Fedora 17. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as WONTFIX if it remains open with a Fedora 'version' of '17'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version prior to Fedora 17's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that we may not be able to fix it before Fedora 17 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version prior to Fedora 17's end of life.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
Fedora 17 changed to end-of-life (EOL) status on 2013-07-30. Fedora 17 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.