Cloning for Satellite 5.3.0.
+++ This bug was initially created as a clone of Bug #472595 +++
Kickstarts from Spacewalk take roughly 2-3 times as long as they should and the load on the Spacewalk server is around 4-5 on rlx-0-04.rhndev.
The slowness starts as the machine is downloading packages from the Spacewalk box. Watch top and you can see httpd and oracle taking up most of the resources on the box.
--- Additional comment from firstname.lastname@example.org on 2009-01-09 16:52:56 EDT ---
I think this may have just been a temporary problem, as the issue has gone away. Moving to MODIFIED just so it can be tested.
--- Additional comment from email@example.com on 2009-01-28 17:54:43 EDT ---
Customers are still reporting this in 0.4. Re-opening.
--- Additional comment from firstname.lastname@example.org on 2009-02-03 18:16:08 EDT ---
Short term workaround for this bug:
* Edit /etc/httpd/conf.d/zz-spacewalk-server.conf and add "EnableMMAP off" and "EnableSendfile off" to the Directory stanza.
* Add this index to your database:
# sqlplus spacewalk/spacewalk@xe
SQL> CREATE INDEX rhn_package_path_idx
ON rhnPackage(id, path);
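For reference, the first workaround step might look like the following. This is a sketch only: the exact <Directory> path is an assumption here and should be whatever path already appears in your zz-spacewalk-server.conf.

```apache
# /etc/httpd/conf.d/zz-spacewalk-server.conf (hypothetical stanza path;
# keep the path your existing Directory stanza already uses)
<Directory "/var/www/html/pub">
    # Disable memory-mapped file delivery and kernel sendfile, which the
    # workaround above reports as the cause of the slow downloads
    EnableMMAP off
    EnableSendfile off
</Directory>
```

Restart httpd (service httpd restart) after editing for the change to take effect.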
--- Additional comment from email@example.com on 2009-02-24 16:06:43 EDT ---
*** This bug has been marked as a duplicate of 470234 ***
--- Additional comment from firstname.lastname@example.org on 2009-02-24 17:32:07 EDT ---
*** Bug 470234 has been marked as a duplicate of this bug. ***
--- Additional comment from email@example.com on 2009-03-12 01:49:12 EDT ---
I have the Java port of the kickstart file downloader 90% complete, but it is failing a few of my test cases. It might not make 0.5, but I may release packages right after 0.5 goes out.
--- Additional comment from firstname.lastname@example.org on 2009-03-23 17:19:51 EDT ---
--- Additional comment from email@example.com on 2009-03-24 06:56:43 EDT ---
Greetings Mike, I think you forgot to create the rhn_package_path_idx index
in the schema itself; right now the index is present only in the SQL upgrade
script (153-rhnPackage-pathidx.sql).
--- Additional comment from firstname.lastname@example.org on 2009-03-24 09:08:18 EDT ---
FAILS_QA, see comment #8
--- Additional comment from email@example.com on 2009-03-25 13:17:20 EDT ---
--- Additional comment from firstname.lastname@example.org on 2009-03-25 13:28:01 EDT ---
schema fix: 7b974a41d4ec7f7ea0bc02257ee5d90941b71ccb
--- Additional comment from email@example.com on 2009-03-25 16:49:11 EDT ---
This should actually still be in FAILS_QA. I discovered some fatal errors while kickstarting systems, so it should not be tested yet.
--- Additional comment from firstname.lastname@example.org on 2009-04-14 10:11:55 EDT ---
Spacewalk 0.5 released.
--- Additional comment from email@example.com on 2009-09-17 03:08:39 EDT ---
Spacewalk 0.5 was released a long time ago.
Cloning this for Satellite 5.3.0 to make sure the fix is ported to Satellite (ideally 5.3.1), as the same issue was reported at a customer site and the config change above resolved it.
Hi, I am experiencing the same issue as above, except that the problem has not simply gone away. Are these problems experienced with embedded or standalone databases, or both? I am currently getting them on standalone (external).
I was having problems with cloning a channel: 4+ hours to clone 8117 packages on Satellite 5.3. After modifying /etc/httpd/conf.d/zz-spacewalk-server.conf and restarting httpd, we cloned the exact same channel in about 10 seconds. HUGE difference.
Taking this bugzilla.
So, is this bugzilla about kickstart performance or about cloning? The original bug 472595 is not about cloning at all. However, comment 2 and comment 4 talk about cloning, not about kickstarts. And comment 3 is not clear about what the issue actually was.
Update to this bug.
The issues experienced in the field have specifically been around the following:
Slowness on the front end of Satellite 5.3, resulting in timeouts when initiating system-group creation and/or channel cloning.
The fix of making changes in /etc/httpd/conf.d/zz-spacewalk-server.conf and restarting httpd appears to make the issue go away.
The changes are here :
I just tried that having
does not have any impact on the speed of channel cloning. I've also verified that the speed of channel cloning on 5.3.0 is comparable to that on 5.2.0 (tested on rlx-1-*).
Per communication on the mailing list, moving from sat531-blockers back to sat531-triage.
Please re-open if needed.
I don't believe this should be reopened (for reasons stated at the end), but wanted to share some more information for anybody that hits this in the future.
At a customer site, we had /var/satellite hosted on SAN and /rhnsat on local storage. We tried to clone RHEL5 64-bit, and every clone would cause an Oracle process to peg the CPU; we let it sit for a while (>30 min, and the browser would eventually time out). Bouncing tomcat or all satellite services had no effect. Eventually a channel was cloned (I do not know how long it took; we left for the night).
The next day I added the following params to the /var/www/html directive in zz-spacewalk-www.conf.
And bounced all satellite services just to make sure. And now cloning rhel5 takes under a minute.
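The comment above does not list the exact params; based on the workaround earlier in this bug, they were presumably the same two directives. A hedged sketch, not confirmed by the reporter:

```apache
# zz-spacewalk-www.conf -- assumed to mirror the earlier workaround in this
# bug; the params were not spelled out in the comment above
<Directory "/var/www/html">
    EnableMMAP off
    EnableSendfile off
</Directory>
```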
I suspect the variable present for customers, but not for developers, is remote storage (NFS / SAN / etc.). I realize we specifically state that /rhnsat should be on local storage, but the docs are not clear about where /var/satellite should live. If they are, I couldn't find it.
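For anyone triaging a similar report, a quick way to check which storage backend the Satellite paths live on (generic commands, not from this bug; the paths are the ones discussed above and may not exist on every host):

```shell
# Report the filesystem type backing each Satellite path (e.g. nfs, ext3, xfs).
# Paths are the ones mentioned in this bug; adjust to your own mounts.
for p in /var/satellite /rhnsat; do
    if [ -e "$p" ]; then
        # stat -f queries the filesystem; %T prints its type in readable form
        echo "$p: $(stat -f -c %T "$p")"
    else
        echo "$p: not present on this host"
    fi
done
```

If either path reports nfs (or sits on SAN-backed storage), the EnableMMAP/EnableSendfile workaround above is the first thing to try.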