Bug 1160480 - Http KeepAlive is disabled causing performance issues
Summary: Http KeepAlive is disabled causing performance issues
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: WebUI
Version: 6.0.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: Unspecified
Assignee: Mike McCune
QA Contact: Tazim Kolhar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-11-04 22:50 UTC by Mike McCune
Modified: 2023-09-14 02:50 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-08-12 14:01:11 UTC
Target Upstream Version:
Embargoed:


Attachments
Satellite 6 CPU usage KeepAlive Off/On during Capsule Syncs (111.84 KB, image/png)
2015-01-13 21:29 UTC, Alex Krzos
Satellite 6 External Capsule CPU usage KeepAlive Off/On during Capsule Syncs (145.65 KB, image/png)
2015-01-13 21:29 UTC, Alex Krzos


Links
System ID: Red Hat Bugzilla 1330038
Priority: high
Status: CLOSED
Summary: Http KeepAlive is disabled in 6.2 cause performance issues
Last Updated: 2021-02-22 00:41:40 UTC

Internal Links: 1330038

Description Mike McCune 2014-11-04 22:50:57 UTC
Right now we are shipping Satellite 6 with the KeepAlive option disabled in Apache.

With this option enabled, the UI is significantly faster, with some pages rendering in half the time. Some examples:

"CLOSE": close the connection
"KA": KeepAlive is on

** Content Dashboard:
CLOSE: 9.0S
KA   : 7.2S

** Facts:
CLOSE: 1.0S
KA   : 0.2S

** Trends:
CLOSE: 0.6S
KA   : 0.2S

** Audits:
CLOSE: 0.6S
KA   : 0.3S

** Lifecycle Envs:
CLOSE: 1.5S
KA   : 0.3S

** Activation Keys
CLOSE: 3.5S
KA   : 1.6S

** All hosts:
CLOSE: 1.9S
KA   : 1.1S

** Content Views:
CLOSE: 4.0S
KA   : 1.6S

We need to update the installer to flip this option to On in the httpd configuration file.
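
For anyone reproducing these numbers outside a browser, a minimal sketch using curl to expose the per-request connection overhead that KeepAlive eliminates (the host name and page path below are placeholders, not from the original report):

# Single request: report how much of the total time is spent on TCP/TLS setup.
curl -sk -o /dev/null \
     -w 'connect: %{time_connect}s  tls: %{time_appconnect}s  total: %{time_total}s\n' \
     https://satellite.example.com/users/login

# Two requests in one invocation: with KeepAlive On, the second transfer reuses
# the connection and should show near-zero connect/TLS time.
curl -sk -o /dev/null -o /dev/null \
     -w 'connect: %{time_connect}s  total: %{time_total}s\n' \
     https://satellite.example.com/users/login \
     https://satellite.example.com/users/login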

Comment 5 Alex Krzos 2015-01-13 21:29:01 UTC
Created attachment 979757 [details]
Satellite 6 CPU usage KeepAlive Off/On during Capsule Syncs

Comment 6 Alex Krzos 2015-01-13 21:29:42 UTC
Created attachment 979758 [details]
Satellite 6 External Capsule CPU usage KeepAlive Off/On during Capsule Syncs

Comment 7 Alex Krzos 2015-01-13 21:30:20 UTC
In addition to the Web UI speed ups, KeepAlive reduces CPU usage on Satellite 6 and external Capsules.  I have attached two graphs showing the cpu usage of Satellite 6 and a single external capsule while Satellite 6 was syncing a single repository of 2048 RPMs to 20 capsules concurrently.
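
For reference, a minimal sketch of how this kind of CPU data can be collected during a sync window (assuming the sysstat package is installed; the 10-second interval and sample count are arbitrary choices, not from the original measurement):

# Overall CPU utilization, sampled every 10 seconds for 30 minutes, on both the
# Satellite server and the external capsule while the sync runs.
sar -u 10 180 > cpu_during_sync.txt

# Per-process view of the Apache workers over the same window.
pidstat -u -p $(pgrep -d, httpd) 10 180 > httpd_cpu_during_sync.txt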

Comment 10 Tazim Kolhar 2015-05-21 06:52:13 UTC
hi

please provide verification steps

thanks

Comment 11 Mike McCune 2015-05-26 14:17:58 UTC
TESTPLAN:

Verify that KeepAlive is On in /etc/httpd/conf/httpd.conf:

PidFile run/httpd.pid
Timeout 120
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15

If this is set to Off, verification fails; if KeepAlive is On, it passes.
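
A one-line way to script this check (the command is an illustration, not part of the original test plan):

# Exits 0 (PASS) only if an uncommented "KeepAlive On" directive is present.
grep -Eq '^[[:space:]]*KeepAlive[[:space:]]+On' /etc/httpd/conf/httpd.conf && echo PASS || echo FAIL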

Comment 12 Tazim Kolhar 2015-05-27 08:43:42 UTC
VERIFIED:
# rpm -qa | grep foreman
foreman-1.7.2.24-1.el7sat.noarch
ruby193-rubygem-foreman-tasks-0.6.12.5-1.el7sat.noarch
foreman-libvirt-1.7.2.24-1.el7sat.noarch
ruby193-rubygem-foreman_gutterball-0.0.1.9-1.el7sat.noarch
hp-sl2x170zg6-02.rhts.eng.bos.redhat.com-foreman-client-1.0-1.noarch
hp-sl2x170zg6-02.rhts.eng.bos.redhat.com-foreman-proxy-client-1.0-1.noarch
foreman-gce-1.7.2.24-1.el7sat.noarch
rubygem-hammer_cli_foreman-0.1.4.11-1.el7sat.noarch
foreman-selinux-1.7.2.13-1.el7sat.noarch
foreman-ovirt-1.7.2.24-1.el7sat.noarch
ruby193-rubygem-foreman-redhat_access-0.1.0-1.el7sat.noarch
rubygem-hammer_cli_foreman_tasks-0.0.3.4-1.el7sat.noarch
foreman-postgresql-1.7.2.24-1.el7sat.noarch
foreman-debug-1.7.2.24-1.el7sat.noarch
foreman-vmware-1.7.2.24-1.el7sat.noarch
ruby193-rubygem-foreman_hooks-0.3.7-2.el7sat.noarch
rubygem-hammer_cli_foreman_bootdisk-0.1.2.7-1.el7sat.noarch
rubygem-hammer_cli_foreman_docker-0.0.3.6-1.el7sat.noarch
foreman-proxy-1.7.2.4-1.el7sat.noarch
ruby193-rubygem-foreman_bootdisk-4.0.2.13-1.el7sat.noarch
hp-sl2x170zg6-02.rhts.eng.bos.redhat.com-foreman-proxy-1.0-2.noarch
ruby193-rubygem-foreman_docker-1.2.0.14-1.el7sat.noarch
rubygem-hammer_cli_foreman_discovery-0.0.1.10-1.el7sat.noarch
foreman-compute-1.7.2.24-1.el7sat.noarch
ruby193-rubygem-foreman_discovery-2.0.0.14-1.el7sat.noarch

steps:
Verify that KeepAlive is On in /etc/httpd/conf/httpd.conf:
ServerName "hp-sl2x170zg6-02.rhts.eng.bos.redhat.com"
ServerRoot "/etc/httpd"
PidFile run/httpd.pid
Timeout 120
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15

KeepAlive is on

Comment 14 Bryan Kearney 2015-08-11 13:35:55 UTC
This bug is slated to be released with Satellite 6.1.

Comment 15 Bryan Kearney 2015-08-12 14:01:11 UTC
This bug was fixed in version 6.1.1 of Satellite, which was released on 12 August 2015.

Comment 16 Red Hat Bugzilla 2023-09-14 02:50:14 UTC
The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 1000 days.
