Red Hat Bugzilla – Bug 495816
yum metadata generation in /var/cache/rhn can cause extreme server load
Last modified: 2012-12-24 09:43:48 EST
Cloning for sat51maint
+++ This bug was initially created as a clone of Bug #495814 +++
If the cache for a given channel needs to be regenerated in /var/cache/rhn, every client request for that new metadata kicks off a process to regenerate the files.
This can cause extreme load on the Satellite server, and each thread is essentially doing the same work.
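To make the pile-up concrete, here is a purely conceptual Python sketch; it is not the actual Satellite code path, and the channel label and the 5-second delay are made-up placeholders. It only illustrates that nothing serializes the regeneration, so every concurrent request that finds the cache missing repeats the same expensive work:

# Conceptual illustration only -- NOT the actual Satellite code path.
import os
import threading
import time

CACHE_DIR = "/var/cache/rhn/repodata/rhel-x86_64-server-5"  # example channel label

def handle_request(client):
    if not os.path.isdir(CACHE_DIR):
        # No lock and no "regeneration already in progress" check, so every
        # concurrent request starts its own copy of the same expensive work.
        print("%s: cache missing, regenerating metadata" % client)
        time.sleep(5)  # stand-in for the costly regeneration
    print("%s: serving metadata" % client)

# Twenty client requests arrive just after /var/cache/rhn was wiped:
clients = [threading.Thread(target=handle_request, args=("client%02d" % i,))
           for i in range(20)]
for t in clients:
    t.start()
for t in clients:
    t.join()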
In order to reproduce this issue, I wrote a simple multi-threaded Python utility that spawns multiple yum requests against an RHN Satellite server. This client spins up 10 threads, each doing:
yum clean all && yum search zsh
with separate --installroot parameters to allow simultaneous execution.
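The actual yum-load-test.py is not attached to this clone; the script below is only a rough sketch of what such a load generator might look like, assuming it shells out to yum via os.system and uses per-thread --installroot directories under /tmp (both are assumptions, not taken from the original script). The reproduction steps below refer to it as yum-load-test.py.

#!/usr/bin/env python
import os
import threading

NUM_THREADS = 10  # the description above says 10 yum workers per client

def worker(idx):
    # A separate --installroot per thread keeps the yum invocations from
    # fighting over the shared yum/RPM locks, so they can run simultaneously.
    root = "/tmp/yum-load-test/root%d" % idx
    os.system("mkdir -p %s" % root)
    # Drop any cached metadata, then force a fresh metadata fetch via search.
    os.system("yum --installroot=%s clean all && "
              "yum --installroot=%s search zsh" % (root, root))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()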
After setting up 2 RHEL5 clients, each running my load simulator, I was
quickly able to drive my Satellite to a load average of *40-80*, until it
eventually stopped responding altogether.
** Steps to reproduce the yum 'metadata storm' on a 5.2 Satellite:
1) Register at least 2 RHEL5 clients to your Satellite
2) Make sure your RHEL5 channel is populated and synced
3) Check out:
4) On each RHEL5 client as root execute: 'python yum-load-test.py'
5) On your RHN Satellite server run: 'rm -rf /var/cache/rhn/'
6) Wait. This will cause each client request to start regeneration of
the metadata for the rhel5 channel. As these requests pile up, the
server is quickly brought to its knees.
The more clients you have the quicker it will die.
bug 495814 for sat52maint
bug 495816 for sat51maint
bug 495815 for sat530-triage
As per EOL Errata:
This is the End Of Life notification for RHN Satellite Server 5 versions
released to run on Red Hat Enterprise Linux 4.
On December 1st, 2012, per the life-cycle support policy, the following
versions of the Satellite and Proxy products, released on Red Hat Enterprise
Linux 4, exited Production Phase 2, marking the end of support for those
RHN Satellite & RHN Proxy versions:
- 5.2 on Red Hat Enterprise Linux 4
- 5.3 on Red Hat Enterprise Linux 4
I am closing this specific bug as CLOSED, since it was tracking a product version that is now EOL.