Bug 494510 - Yum repodata generation performance improvements
Summary: Yum repodata generation performance improvements
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Satellite 5
Classification: Red Hat
Component: Server
Version: 530
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: Justin Sherrill
QA Contact: Martin Minar
URL:
Whiteboard:
Duplicates: 506548 572624 (view as bug list)
Depends On:
Blocks: sat540-blockers 529061
 
Reported: 2009-04-07 09:00 UTC by Jan Pazdziora (Red Hat)
Modified: 2018-11-14 20:30 UTC (History)
CC List: 16 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 529061 (view as bug list)
Environment:
Last Closed: 2010-10-28 14:53:47 UTC
Target Upstream Version:
Embargoed:


Attachments
test script evaluating query execution and fetch time (3.16 KB, application/octet-stream)
2009-04-16 22:50 UTC, Pradeep Kilambi

Description Jan Pazdziora (Red Hat) 2009-04-07 09:00:49 UTC
Description of problem:

I've installed Satellite. I've synced rhel-i386-server-5 and rhn-tools-rhel-i386-server-5 channels. I've registered RHEL 5 system to the Satellite. I've run yum install nfs-utils and I get

Error: Cannot retrieve repository metadata (repomd.xml) for repository: rhel-i386-server-5. Please verify its path and try again

Version-Release number of selected component (if applicable):

Satellite-5.3.0-RHEL5-re20090403.2 on i386

How reproducible:

I cannot get rid of the error.

Steps to Reproduce:
1. Install Satellite.
2. Sync channels.
3. Register client.
4. Run yum install nfs-utils.
  
Actual results:

# yum install nfs-utils
Loaded plugins: rhnplugin, security
Error: Cannot retrieve repository metadata (repomd.xml) for repository: rhel-i386-server-5. Please verify its path and try again

Expected results:

No error, packages listed.

Additional info:

I understand that the repomd is now generated by taskomatic. Maybe it's still running, I don't know, but I've been trying this for at least 15 minutes, so if the computation is still running, there is some performance regression going on.

These files are still growing:

# ls -la /var/cache/rhn/repodata/rhel-i386-server-5/
total 29016
drwxr-xr-x 2 root root     4096 Apr  7 10:40 .
drwxr-xr-x 3 root root     4096 Apr  7 10:40 ..
-rw-r--r-- 1 root root  9615566 Apr  7 10:55 filelists.xml.gz.new
-rw-r--r-- 1 root root 18579704 Apr  7 10:55 other.xml.gz.new
-rw-r--r-- 1 root root  1436046 Apr  7 10:55 primary.xml.gz.new

# ls -la /var/cache/rhn/repodata/rhel-i386-server-5/
total 33284
drwxr-xr-x 2 root root     4096 Apr  7 10:40 .
drwxr-xr-x 3 root root     4096 Apr  7 10:40 ..
-rw-r--r-- 1 root root 11188970 Apr  7 10:57 filelists.xml.gz.new
-rw-r--r-- 1 root root 21230532 Apr  7 10:57 other.xml.gz.new
-rw-r--r-- 1 root root  1580506 Apr  7 10:57 primary.xml.gz.new

And there is a java process running:

top - 10:56:42 up 19:46,  3 users,  load average: 3.81, 3.86, 4.14
Tasks: 151 total,   1 running, 150 sleeping,   0 stopped,   0 zombie
Cpu(s): 66.7%us, 21.5%sy,  0.0%ni,  0.0%id,  6.9%wa,  0.3%hi,  4.6%si,  0.0%st
Mem:    933644k total,   921944k used,    11700k free,     1688k buffers
Swap:  1020116k total,   481380k used,   538736k free,   356624k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                            
 2263 root      20   0  291m 234m 5652 S 80.2 25.7  11:18.33 java                                                                                               
 2415 oracle    15   0  355m 186m 186m S 12.6 20.5   1:34.65 oracle                                                                                             
 1465 oracle    16   0  353m  15m  15m S  0.3  1.7   0:00.52 oracle                                                                                             
 2691 root      15   0  2324 1056  800 R  0.3  0.1   0:00.10 top                                                                                                
    1 root      15   0  2064  488  464 S  0.0  0.1   0:02.97 init                                                                                               
    2 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 migration/0                                                                                        

Yet after 15 minutes I am still not able to get the repomd.xml.
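
For reference, a minimal way to watch for the generation to finish (Python sketch only, assuming the cache path from the listings above; this polling script is not part of Satellite):

# Sketch: poll the repodata cache until taskomatic has written repomd.xml.
# The cache path below is taken from the listings above (an assumption for
# other channels); while generation runs, only *.xml.gz.new files exist.
import os
import time

cache = "/var/cache/rhn/repodata/rhel-i386-server-5"
deadline = time.time() + 60 * 60          # give taskomatic up to an hour

while time.time() < deadline:
    if os.path.exists(os.path.join(cache, "repomd.xml")):
        print "repomd.xml is ready, yum should work now"
        break
    print "still waiting, cache contains:", ", ".join(sorted(os.listdir(cache)))
    time.sleep(30)
else:
    print "gave up waiting for repomd.xml"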

Comment 8 Pradeep Kilambi 2009-04-16 22:50:11 UTC
Created attachment 339935 [details]
test script evaluating query execution and fetch time

Comment 15 Clifford Perry 2009-06-05 21:06:27 UTC
After review, the performance of the Java code seems comparable to the old Python code; initial reviews indicate no further improvements to be gained.

Something I would love to propose for review post 530 and for a possible future release is: can we somehow do diffs/deltas, so that:

2000 packages in channel - initially takes 20 minutes to complete. 

Add 200 more packages to the channel - now it takes 22 minutes to refresh the channel's contents. Wouldn't it be cool if we were able to look at the old repomd cache, determine the exact 200 packages that are missing, manipulate the XML to insert the new content, and then write it back to disk? So: X time to figure out what is different, 2 minutes to generate data for the 200 new packages, plus the time to write it out live.

So, for now, moving this off 530; let's see if we can revisit repomd generation in the future and make it better going forward. To my understanding, running createrepo on a large number of packages takes just as long, so some clever logic to delta/diff and only add the new content would be nice here as part of repo management (a rough sketch of the idea follows).
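
A very rough illustration of the idea (Python sketch only, with hypothetical helper names such as render_package; this is not the actual taskomatic code):

# Sketch: reuse the cached primary.xml and only render <package> entries for
# packages added since the last run, instead of regenerating everything.
import gzip
import xml.dom.minidom

def incremental_primary(old_primary_gz, new_primary_gz, current_nevras, render_package):
    # old_primary_gz  - path to the cached primary.xml.gz
    # new_primary_gz  - path the updated copy is written to
    # current_nevras  - set of (name, epoch, version, release, arch) now in the channel
    # render_package  - hypothetical callback returning a <package> DOM node for one nevra
    doc = xml.dom.minidom.parse(gzip.open(old_primary_gz))
    metadata = doc.documentElement

    cached = set()
    for pkg in metadata.getElementsByTagName("package"):
        name = pkg.getElementsByTagName("name")[0].firstChild.data
        arch = pkg.getElementsByTagName("arch")[0].firstChild.data
        ver = pkg.getElementsByTagName("version")[0]
        nevra = (name, ver.getAttribute("epoch"), ver.getAttribute("ver"),
                 ver.getAttribute("rel"), arch)
        if nevra in current_nevras:
            cached.add(nevra)                 # entry can be reused as-is
        else:
            metadata.removeChild(pkg)         # package was removed from the channel

    for nevra in current_nevras - cached:     # only the ~200 new packages get rendered
        metadata.appendChild(render_package(nevra))

    metadata.setAttribute("packages", str(len(current_nevras)))
    out = gzip.open(new_primary_gz, "wb")
    doc.writexml(out)
    out.close()

The same approach would have to be repeated for filelists.xml and other.xml, and it only pays off when the delta is small compared to the whole channel.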

Cliff

Comment 16 Pradeep Kilambi 2009-06-17 17:46:47 UTC
*** Bug 506548 has been marked as a duplicate of this bug. ***

Comment 17 Jan Hutař 2009-07-23 07:43:26 UTC
Hello,
I have created an RHTS test which tries to measure this. It does the following (a rough sketch of the measurement loop is below the list):

  1. stop satellite
  2. remove satellite and yum cache
  3. start satellite
  4. run `yum list available` repeatedly until we get some result
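
A rough equivalent of that measurement loop (Python sketch only, not the actual RHTS test; the rhn-satellite service script, the cache paths and the 10-second retry interval are assumptions, and in the real test the Satellite and the yum client can be separate machines):

# Sketch: time how long it takes until `yum list available` works again
# after the Satellite caches have been wiped.
import subprocess
import time

subprocess.call(["rhn-satellite", "stop"])
subprocess.call(["rm", "-rf", "/var/cache/rhn/repodata", "/var/cache/yum"])
subprocess.call(["rhn-satellite", "start"])

start = time.time()
while subprocess.call(["yum", "-q", "list", "available"]) != 0:
    time.sleep(10)        # keep retrying until usable repodata is served
print "yum list available first succeeded after %d seconds" % (time.time() - start)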

And on the same HW (hp-bl260cg5-01.rhts.bos.redhat.com) I got:

Sat520:                350 seconds
   -> http://rhts.redhat.com/cgi-bin/rhts/jobs.cgi?id=75439
Sat530 (20090716.0):   625 seconds
   -> http://rhts.redhat.com/cgi-bin/rhts/jobs.cgi?id=74891

The test still has some issues and I want to repeat the Measure phase multiple times to get better results, but this is what I have now.

/CoreOS/RHN-Satellite/Inter-Satellite-Sync/Regression/bz494510-repodata-generation-speed/
http://cvs.devel.redhat.com/cgi-bin/cvsweb.cgi/tests/RHN-Satellite/Inter-Satellite-Sync/Regression/bz494510-repodata-generation-speed/

Will try to fix the test and come back with more results.

Comment 27 Chris Lumens 2010-03-12 16:05:07 UTC
*** Bug 572624 has been marked as a duplicate of this bug. ***

Comment 34 Garik Khachikyan 2010-10-18 11:33:07 UTC
Updated the QA Whiteboard with the prepared RHTS test (by mminar):
---
/CoreOS/RHN-Satellite/Inter-Satellite-Sync/Sanity/repodata-performance

Comment 35 Garik Khachikyan 2010-10-18 12:20:51 UTC
# VERIFIED against errata.stage 
(signed packages -satellite-5.4.0-20101015-rhel-5)

Scenario:
---
Checked on an 8-core system with 8 GB RAM.
Measured repodata creation for a cloned RHEL5 x86_64 channel with 10,102 packages.

stat -c %z /var/cache/rhn/repodata/clone-rhel-x86_64-server-5/
2010-10-18 08:07:35.000000000 -0400 # right at taskomatic start

stat -c %z /var/cache/rhn/repodata/clone-rhel-x86_64-server-5/repomd.xml 
2010-10-18 08:13:50.000000000 -0400 # when the last file, repomd.xml, was created
---

So, for 10,102 packages - 6min 15sec = 375sec
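
For reference, the same interval can be computed directly from the cache timestamps (Python sketch, relying on the same assumption as the stat commands above, i.e. that the directory change time marks the taskomatic start):

# Sketch: derive the repodata generation time from the cache timestamps.
import os

repodata = "/var/cache/rhn/repodata/clone-rhel-x86_64-server-5"
start = os.path.getctime(repodata)                              # taskomatic start
end = os.path.getctime(os.path.join(repodata, "repomd.xml"))    # last file written
print "generated metadata for 10102 packages in %d seconds" % (end - start)

That works out to roughly 27 packages per second.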

Comment 36 Clifford Perry 2010-10-28 14:49:00 UTC
The 5.4.0 RHN Satellite and RHN Proxy release has occurred. This issue has been resolved with this release. 


RHEA-2010:0801 - RHN Satellite Server 5.4.0 Upgrade
https://rhn.redhat.com/rhn/errata/details/Details.do?eid=10332

RHEA-2010:0803 - RHN Tools enhancement update
https://rhn.redhat.com/rhn/errata/details/Details.do?eid=10333

RHEA-2010:0802 - RHN Proxy Server 5.4.0 bug fix update
https://rhn.redhat.com/rhn/errata/details/Details.do?eid=10334

RHEA-2010:0800 - RHN Satellite Server 5.4.0
https://rhn.redhat.com/rhn/errata/details/Details.do?eid=10335

Docs are available:

http://docs.redhat.com/docs/en-US/Red_Hat_Network_Satellite/index.html 

Regards,
Clifford

